Sample records for kernel convolution technique

  1. Optimal focal-plane restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1989-01-01

    Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
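
    For illustration, a minimal sketch of the inexpensive operation the abstract relies on: restoring an acquired image by direct spatial convolution with a small kernel. The 3×3 kernel values below are placeholders, not the minimum-MSE values the paper derives from the end-to-end system model.

    ```python
    # Minimal sketch: restoration by direct spatial convolution with a small kernel.
    # The 3x3 values are illustrative placeholders, NOT the MMSE-optimal values,
    # which the paper derives from the end-to-end imaging-system model.
    import numpy as np
    from scipy.ndimage import convolve

    rng = np.random.default_rng(0)
    acquired = rng.random((256, 256))            # stand-in for a focal-plane image

    restoration_kernel = np.array([[ 0.0, -0.2,  0.0],
                                   [-0.2,  1.8, -0.2],
                                   [ 0.0, -0.2,  0.0]])   # placeholder values

    restored = convolve(acquired, restoration_kernel, mode='nearest')
    print(restored.shape)
    ```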

  2. Combined multi-kernel head computed tomography images optimized for depicting both brain parenchyma and bone.

    PubMed

    Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki

    2014-01-01

    The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enabled diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels of >100 Hounsfield units in both the brain and bone images were composed of the CT values of the bone images, and all other pixels were composed of the CT values of the brain images. Three radiologists compared the improved multi-kernel images with the bone images. The improved multi-kernel images and brain images were identically displayed on the brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer images that need to be stored can be expected.
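
    The compositing rule described above is simple enough to sketch directly; the array names and the toy data below are hypothetical stand-ins for co-registered brain- and bone-kernel reconstructions in Hounsfield units.

    ```python
    # Minimal sketch of the compositing rule: pixels exceeding 100 HU in BOTH
    # reconstructions take the bone-kernel value; all others take the brain-kernel value.
    import numpy as np

    def combine_multikernel(brain_img: np.ndarray, bone_img: np.ndarray,
                            threshold_hu: float = 100.0) -> np.ndarray:
        bone_mask = (brain_img > threshold_hu) & (bone_img > threshold_hu)
        return np.where(bone_mask, bone_img, brain_img)

    # Example with random stand-in data
    rng = np.random.default_rng(1)
    brain_img = rng.normal(40, 300, size=(512, 512))
    bone_img = rng.normal(40, 600, size=(512, 512))
    combined = combine_multikernel(brain_img, bone_img)
    ```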

  3. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  4. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    PubMed

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.
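
    A hedged sketch of the segment-wise principle: the paper applies the correction as a convolution kernel in k-space, while the sketch below uses the mathematically equivalent image-domain phase multiplication per acquisition-time segment, assuming a Cartesian acquisition and a known field map. All function and variable names are illustrative.

    ```python
    # Hedged sketch: each k-space segment (grouped by acquisition time) is
    # reconstructed separately and multiplied in image space by the conjugate
    # off-resonance phase exp(-i*2*pi*fieldmap*t_seg), which is equivalent to the
    # k-space convolution used in the paper. Names and set-up are assumptions.
    import numpy as np

    def correct_off_resonance(kspace, seg_masks, seg_times, fieldmap_hz):
        """kspace: 2-D complex array; seg_masks: list of boolean masks (one per
        segment); seg_times: centre acquisition time of each segment (s);
        fieldmap_hz: off-resonance map in Hz."""
        corrected = np.zeros(kspace.shape, dtype=complex)
        for mask, t_seg in zip(seg_masks, seg_times):
            seg_img = np.fft.ifft2(np.fft.ifftshift(kspace * mask))
            seg_img *= np.exp(-2j * np.pi * fieldmap_hz * t_seg)   # phase unwinding
            corrected += seg_img
        return corrected   # artifact-reduced image (sum of corrected segments)
    ```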

  5. A New Deep Learning Model for Fault Diagnosis with Good Anti-Noise and Domain Adaptation Ability on Raw Vibration Signals

    PubMed Central

    Zhang, Wei; Peng, Gaoliang; Li, Chuanhao; Chen, Yuanhang; Zhang, Zhujun

    2017-01-01

    Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs), and uses wide kernels in the first convolutional layer for extracting features and suppressing high-frequency noise. Small convolutional kernels in the subsequent layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that, currently, the accuracy of CNNs applied to fault diagnosis is not very high. WDCNN can not only achieve 100% classification accuracy on normal signals, but also outperform the state-of-the-art DNN model, which is based on frequency features, under different working load and noisy environment conditions. PMID:28241451
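
    A hedged Keras sketch of the wide-first-kernel idea on raw 1-D vibration signals; the layer sizes are illustrative, and plain batch normalization stands in for the AdaBN used in the paper.

    ```python
    # Hedged sketch of a WDCNN-like model: a wide, strided first Conv1D for feature
    # extraction and noise suppression on raw signals, followed by small (size-3)
    # kernels. Layer sizes are illustrative; BatchNormalization stands in for AdaBN.
    import tensorflow as tf
    from tensorflow.keras import layers

    def build_wdcnn_like(input_len=2048, n_classes=10):
        model = tf.keras.Sequential([
            layers.Input(shape=(input_len, 1)),
            layers.Conv1D(16, kernel_size=64, strides=16, padding='same',
                          activation='relu'),            # wide first-layer kernel
            layers.BatchNormalization(),
            layers.MaxPooling1D(2),
            layers.Conv1D(32, kernel_size=3, padding='same', activation='relu'),
            layers.BatchNormalization(),
            layers.MaxPooling1D(2),
            layers.Conv1D(64, kernel_size=3, padding='same', activation='relu'),
            layers.GlobalAveragePooling1D(),
            layers.Dense(100, activation='relu'),
            layers.Dense(n_classes, activation='softmax'),
        ])
        model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])
        return model
    ```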

  6. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
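
    A hedged sketch of a linear-combination multineuron kernel of the form K(X, Y) = sum_ij w_ij k(X_i, Y_j); the single-neuron spike-train kernel used here (exponential decay between spike times) is only an illustrative choice, since the paper's framework accepts any single-neuron kernel.

    ```python
    # Hedged sketch: a multineuron kernel built as a weighted sum of single-neuron
    # kernel evaluations over neuron pairs. The single-neuron kernel below is an
    # illustrative exponential-decay spike-train kernel, not the paper's choice.
    import numpy as np

    def single_neuron_kernel(s, t, tau=0.01):
        """Spike-train kernel between two spike-time arrays (seconds)."""
        if len(s) == 0 or len(t) == 0:
            return 0.0
        diffs = np.abs(np.subtract.outer(np.asarray(s), np.asarray(t)))
        return float(np.sum(np.exp(-diffs / tau)))

    def multineuron_kernel(X, Y, weights):
        """X, Y: lists of spike-time arrays (one per neuron); weights: (n, n) array."""
        n = len(X)
        return sum(weights[i, j] * single_neuron_kernel(X[i], Y[j])
                   for i in range(n) for j in range(n))

    # Example: two neurons, equal pairwise weights
    X = [np.array([0.01, 0.05, 0.12]), np.array([0.03, 0.08])]
    Y = [np.array([0.02, 0.06]),       np.array([0.04, 0.09, 0.15])]
    print(multineuron_kernel(X, Y, np.ones((2, 2)) / 4))
    ```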

  7. Photon Counting Computed Tomography With Dedicated Sharp Convolution Kernels: Tapping the Potential of a New Technology for Stent Imaging.

    PubMed

    von Spiczak, Jochen; Mannil, Manoj; Peters, Benjamin; Hickethier, Tilman; Baer, Matthias; Henning, André; Schmidt, Bernhard; Flohr, Thomas; Manka, Robert; Maintz, David; Alkadhi, Hatem

    2018-05-23

    The aims of this study were to assess the value of a dedicated sharp convolution kernel for photon counting detector (PCD) computed tomography (CT) for coronary stent imaging and to evaluate to which extent iterative reconstructions can compensate for potential increases in image noise. For this in vitro study, a phantom simulating coronary artery stenting was prepared. Eighteen different coronary stents were expanded in plastic tubes of 3 mm diameter. Tubes were filled with diluted contrast agent, sealed, and immersed in oil calibrated to an attenuation of -100 HU simulating epicardial fat. The phantom was scanned in a modified second generation 128-slice dual-source CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Erlangen, Germany) equipped with both a conventional energy integrating detector and PCD. Image data were acquired using the PCD part of the scanner with 48 × 0.25 mm slices, a tube voltage of 100 kVp, and tube current-time product of 100 mAs. Images were reconstructed using a conventional convolution kernel for stent imaging with filtered back-projection (B46) and with sinogram-affirmed iterative reconstruction (SAFIRE) at level 3 (I463). For comparison, a dedicated sharp convolution kernel with filtered back-projection (D70) and SAFIRE level 3 (Q703) and level 5 (Q705) was used. The D70 and Q70 kernels were specifically designed for coronary stent imaging with PCD CT by optimizing the image modulation transfer function and the separation of contrast edges. Two independent, blinded readers evaluated subjective image quality (Likert scale 0-3, where 3 = excellent), in-stent diameter difference, in-stent attenuation difference, mathematically defined image sharpness, and noise of each reconstruction. Interreader reliability was calculated using Goodman and Kruskal's γ and intraclass correlation coefficients (ICCs). Differences in image quality were evaluated using a Wilcoxon signed-rank test. Differences in in-stent diameter difference, in-stent attenuation difference, image sharpness, and image noise were tested using a paired-sample t test corrected for multiple comparisons. Interreader and intrareader reliability were excellent (γ = 0.953, ICCs = 0.891-0.999, and γ = 0.996, ICCs = 0.918-0.999, respectively). Reconstructions using the dedicated sharp convolution kernel yielded significantly better results regarding image quality (B46: 0.4 ± 0.5 vs D70: 2.9 ± 0.3; P < 0.001), in-stent diameter difference (1.5 ± 0.3 vs 1.0 ± 0.3 mm; P < 0.001), and image sharpness (728 ± 246 vs 2069 ± 411 CT numbers/voxel; P < 0.001). Regarding in-stent attenuation difference, no significant difference was observed between the 2 kernels (151 ± 76 vs 158 ± 92 CT numbers; P = 0.627). Noise was significantly higher in all sharp convolution kernel images but was reduced by 41% and 59% by applying SAFIRE levels 3 and 5, respectively (B46: 16 ± 1, D70: 111 ± 3, Q703: 65 ± 2, Q705: 46 ± 2 CT numbers; P < 0.001 for all comparisons). A dedicated sharp convolution kernel for PCD CT imaging of coronary stents yields superior qualitative and quantitative image characteristics compared with conventional reconstruction kernels. Resulting higher noise levels in sharp kernel PCD imaging can be partially compensated with iterative image reconstruction techniques.

  8. Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography

    NASA Astrophysics Data System (ADS)

    Menke, W. H.

    2017-12-01

    We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for sensitivity with respect to density, Lamé parameter and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field, pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolving kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.
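
    A hedged sketch of the cross-convolution measure itself (the adjoint-derived sensitivity kernels are not reproduced): for two observed components and two synthetics, the residual u1_obs * u2_syn - u2_obs * u1_syn (with * denoting convolution) vanishes when the synthetics match the data up to the common, unknown source.

    ```python
    # Hedged sketch of the cross-convolution goodness-of-fit criterion.
    import numpy as np

    def cross_convolution_misfit(u1_obs, u2_obs, u1_syn, u2_syn):
        residual = np.convolve(u1_obs, u2_syn) - np.convolve(u2_obs, u1_syn)
        return 0.5 * np.sum(residual ** 2)

    # Example: the same Green's functions convolved with an arbitrary source wavelet
    # give (near-)zero misfit, because the source cancels in the measure.
    rng = np.random.default_rng(2)
    g1, g2 = rng.normal(size=100), rng.normal(size=100)
    wavelet = rng.normal(size=30)
    u1_obs, u2_obs = np.convolve(g1, wavelet), np.convolve(g2, wavelet)
    print(cross_convolution_misfit(u1_obs, u2_obs, g1, g2))   # ~0 up to round-off
    ```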

  9. 3D and 4D magnetic susceptibility tomography based on complex MR images

    DOEpatents

    Chen, Zikuan; Calhoun, Vince D

    2014-11-11

    Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
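
    A hedged sketch of the forward 3D convolution model only, using the standard k-space dipole kernel; the TV-regularized split-Bregman inversion described in the patent is not reproduced here.

    ```python
    # Hedged sketch of the forward model: the standard k-space dipole kernel
    # D(k) = 1/3 - kz^2/|k|^2 maps a susceptibility map chi(x,y,z) to the field
    # perturbation underlying the T2* phase.
    import numpy as np

    def dipole_kernel(shape):
        kz, ky, kx = np.meshgrid(np.fft.fftfreq(shape[0]),
                                 np.fft.fftfreq(shape[1]),
                                 np.fft.fftfreq(shape[2]), indexing='ij')
        k2 = kx**2 + ky**2 + kz**2
        with np.errstate(divide='ignore', invalid='ignore'):
            d = 1.0 / 3.0 - kz**2 / k2
        d[k2 == 0] = 0.0          # convention for the undefined k = 0 term
        return d

    def chi_to_field(chi):
        """Forward 3D convolution: susceptibility map -> normalized field map."""
        return np.real(np.fft.ifftn(dipole_kernel(chi.shape) * np.fft.fftn(chi)))

    field = chi_to_field(np.random.default_rng(3).random((32, 32, 32)))
    ```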

  10. SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriya, S; Sato, M; Tachibana, H

    Purpose: The calculation time is a trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on Graphics Processing Units (GPU). Methods: The calculation was performed on the AMD graphics hardware of Dual FirePro D700, and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The process of dose calculation was separated into the TERMA and KERMA steps. The dose deposited at the coordinate (x, y, z) was determined in the process. In the dose calculation running on the central processing unit (CPU) of an Intel Xeon E5, the calculation loops were performed for all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and multi-threaded computation was done. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150³ voxels (2 mm calculation grid), and the calculation speed on the GPU was compared with that on the CPU, along with the accuracy of the PDD. Results: The calculation times for the GPU and the CPU were 3.3 sec and 4.4 hours, respectively. The calculation speed for the GPU was 4800 times faster than that for the CPU. The PDD curve for the GPU perfectly matched that for the CPU. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in terms of time and may be more accurate in inhomogeneous regions. Intensity-modulated arc therapy needs dose calculations for different gantry angles at many control points. Thus, it would be more practical for the kernel to use a coarse spacing technique if the calculation is faster while keeping accuracy similar to a current treatment planning system.

  11. Impact of deep learning on the normalization of reconstruction kernel effects in imaging biomarker quantification: a pilot study in CT emphysema

    NASA Astrophysics Data System (ADS)

    Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo

    2018-02-01

    Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain a barrier to translating computer-aided quantification techniques into clinical practice. This study presents a deep learning application to CT kernel conversion, which converts a CT image of a sharp kernel to that of a standard kernel, and evaluates its impact on variability reduction of a pulmonary imaging biomarker, the emphysema index (EI). Forty cases of low-dose chest CT exams obtained with 120 kVp, 40 mAs, 1 mm thickness, and two reconstruction kernels (B30f, B50f) were selected from the low-dose lung cancer screening database of our institution. A fully convolutional network was implemented with the Keras deep learning library. The model consisted of symmetric layers to capture the context and fine-structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting the CT images of the sharp kernel to the standard kernel, with the criterion of minimizing the mean squared error between the network output and target images. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared for the data sets of B50f, B30f, and the converted B50f. The effect of kernel conversion was evaluated with the mean and standard deviation of pair-wise differences in EI. The population mean of RA950 was 27.65 ± 7.28% for the B50f data set, 10.82 ± 6.71% for the B30f data set, and 8.87 ± 6.20% for the converted B50f data set. The mean of pair-wise absolute differences in RA950 between B30f and B50f is reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying the deep learning technique for CT kernel conversion and reducing the kernel-induced variability of EI quantification. The deep learning model has the potential to improve the reliability of imaging biomarkers, especially in evaluating the longitudinal changes of EI even when the patient CT scans were performed with different kernels.
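
    A minimal sketch of the two emphysema indices mentioned above (RA950: percentage of lung voxels below -950 HU; Perc15: the 15th-percentile HU value), computed from a hypothetical array of segmented lung-voxel HU values; the kernel-conversion network itself is not reproduced.

    ```python
    # Minimal sketch of the emphysema indices RA950 and Perc15 from segmented
    # lung-voxel HU values (the lung_hu array is a hypothetical stand-in).
    import numpy as np

    def emphysema_indices(lung_hu: np.ndarray):
        ra950 = 100.0 * np.mean(lung_hu < -950)      # relative area below -950 HU (%)
        perc15 = np.percentile(lung_hu, 15)          # 15th-percentile HU value
        return ra950, perc15

    lung_hu = np.random.default_rng(4).normal(-850, 90, size=200_000)
    print(emphysema_indices(lung_hu))
    ```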

  12. ASIC-based architecture for the real-time computation of 2D convolution with large kernel size

    NASA Astrophysics Data System (ADS)

    Shao, Rui; Zhong, Sheng; Yan, Luxin

    2015-12-01

    Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-large kernels. To improve the efficiency of on-chip storage resources and to reduce the required off-chip bandwidth, a data-cache reuse scheme is proposed: multi-block SPRAM cross-caches the image data, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation. A new ASIC data-scheduling scheme and overall architecture are designed accordingly. Experimental results show that the architecture achieves real-time convolution with kernel sizes up to 40 × 32, improves the utilization of on-chip memory bandwidth and on-chip memory resources, maximizes data throughput, and reduces the need for off-chip memory bandwidth.

  13. A spherical harmonic approach for the determination of HCP texture from ultrasound: A solution to the inverse problem

    NASA Astrophysics Data System (ADS)

    Lan, Bo; Lowe, Michael J. S.; Dunne, Fionn P. E.

    2015-10-01

    A new spherical convolution approach has been presented which couples HCP single crystal wave speed (the kernel function) with the polycrystal c-axis pole distribution function to give the resultant polycrystal wave speed response. The three functions have been expressed as spherical harmonic expansions, thus enabling application of the de-convolution technique so that any one of the three can be determined from knowledge of the other two. Hence, the forward problem of determination of polycrystal wave speed from knowledge of the single crystal wave speed response and the polycrystal pole distribution has been solved for a broad range of experimentally representative HCP polycrystal textures. The technique provides near-perfect representation of the sensitivity of wave speed to polycrystal texture as well as quantitative prediction of polycrystal wave speed. More importantly, a solution to the inverse problem is presented in which texture, as a c-axis distribution function, is determined from knowledge of the kernel function and the polycrystal wave speed response. It has also been explained why it has been widely reported in the literature that only texture coefficients up to 4th degree may be obtained from ultrasonic measurements. Finally, the de-convolution approach presented provides the potential for the measurement of polycrystal texture from ultrasonic wave speed measurements.

  14. An Efficient Implementation of Deep Convolutional Neural Networks for MRI Segmentation.

    PubMed

    Hoseini, Farnaz; Shahbahrami, Asadollah; Bayat, Peyman

    2018-02-27

    Image segmentation is one of the most common steps in digital image processing, classifying a digital image into different segments. The main goal of this paper is to segment brain tumors in magnetic resonance images (MRI) using deep learning. Tumors having different shapes, sizes, brightness and textures can appear anywhere in the brain. These complexities are the reasons to choose a high-capacity Deep Convolutional Neural Network (DCNN) containing more than one layer. The proposed DCNN contains two parts: architecture and learning algorithms. The architecture and the learning algorithms are used to design a network model and to optimize parameters for the network training phase, respectively. The architecture contains five convolutional layers, all using 3 × 3 kernels, and one fully connected layer. Stacking small kernels in this way achieves the effective receptive field of larger kernels with a smaller number of parameters and fewer computations. Using the Dice Similarity Coefficient metric, we report accuracy results on the BRATS 2016 brain tumor segmentation challenge dataset for the complete, core, and enhancing regions as 0.90, 0.85, and 0.84 respectively. The learning algorithm includes task-level parallelism. All the pixels of an MR image are classified using a patch-based approach for segmentation. We attain a good performance and the experimental results show that the proposed DCNN increases the segmentation accuracy compared to previous techniques.
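
    The reported accuracies use the Dice Similarity Coefficient; a minimal sketch of that metric on binary prediction and ground-truth masks follows.

    ```python
    # Minimal sketch of the Dice Similarity Coefficient for binary segmentation masks.
    import numpy as np

    def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps=1e-7) -> float:
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

    # Example on toy masks
    pred = np.zeros((10, 10), dtype=bool); pred[2:7, 2:7] = True
    truth = np.zeros((10, 10), dtype=bool); truth[3:8, 3:8] = True
    print(dice_coefficient(pred, truth))
    ```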

  15. A digital pixel cell for address event representation image convolution processing

    NASA Astrophysics Data System (ADS)

    Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2005-06-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate events according to their information levels. Neurons with more information (activity, derivative of activities, contrast, motion, edges,...) generate more events per unit time, and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae,... Also, there has been a proposal for realizing programmable kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital implementation reference against which to compare the mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable kernel image convolution processing.
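
    A hedged behavioural sketch of the digital convolution pixel described above: each incoming address event adds a kernel weight to the pixel's accumulator, and the pixel fires and resets once a fixed threshold is reached. The kernel, threshold and event list are illustrative.

    ```python
    # Hedged behavioural sketch of event-driven (AER) kernel convolution with
    # integrate-and-fire pixels. All values are illustrative.
    import numpy as np

    def aer_convolution(events, kernel, shape, threshold=4.0):
        """events: list of (x, y) addresses; kernel: odd-sized 2-D weight array."""
        acc = np.zeros(shape)
        out_events = []
        kh, kw = kernel.shape[0] // 2, kernel.shape[1] // 2
        for x, y in events:                              # process one event at a time
            for dx in range(-kh, kh + 1):
                for dy in range(-kw, kw + 1):
                    px, py = x + dx, y + dy
                    if 0 <= px < shape[0] and 0 <= py < shape[1]:
                        acc[px, py] += kernel[dx + kh, dy + kw]
                        if acc[px, py] >= threshold:     # integrate-and-fire
                            out_events.append((px, py))
                            acc[px, py] = 0.0
        return out_events

    kernel = np.ones((3, 3))
    print(aer_convolution([(5, 5)] * 6, kernel, shape=(16, 16)))
    ```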

  16. ASSESSMENT OF CLINICAL IMAGE QUALITY IN PAEDIATRIC ABDOMINAL CT EXAMINATIONS: DEPENDENCY ON THE LEVEL OF ADAPTIVE STATISTICAL ITERATIVE RECONSTRUCTION (ASiR) AND THE TYPE OF CONVOLUTION KERNEL.

    PubMed

    Larsson, Joel; Båth, Magnus; Ledenius, Kerstin; Caisander, Håkan; Thilander-Klang, Anne

    2016-06-01

    The purpose of this study was to investigate the effect of different combinations of convolution kernel and the level of Adaptive Statistical iterative Reconstruction (ASiR™) on diagnostic image quality as well as visualisation of anatomical structures in paediatric abdominal computed tomography (CT) examinations. Thirty-five paediatric patients with abdominal pain with non-specified pathology undergoing abdominal CT were included in the study. Transaxial stacks of 5-mm-thick images were retrospectively reconstructed at various ASiR levels, in combination with three convolution kernels. Four paediatric radiologists rated the diagnostic image quality and the delineation of six anatomical structures in a blinded randomised visual grading study. Image quality at a given ASiR level was found to be dependent on the kernel, and a more edge-enhancing kernel benefitted from a higher ASiR level. An ASiR level of 70 % together with the Soft™ or Standard™ kernel was suggested to be the optimal combination for paediatric abdominal CT examinations. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network

    PubMed Central

    Qu, Xiaobo; He, Yifan

    2018-01-01

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods. PMID:29509666

  18. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.

    PubMed

    Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di

    2018-03-06

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods.
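
    A hedged Keras sketch of the competitive multi-scale idea: parallel convolutions with different kernel sizes at the same layer, merged by an element-wise maximum so the best-scale response wins. Filter counts, depth and the reconstruction layer are illustrative, not the paper's exact architecture.

    ```python
    # Hedged sketch of a competitive multi-scale convolution block for super-resolution.
    import tensorflow as tf
    from tensorflow.keras import layers

    def competitive_multiscale_block(x, filters=32):
        branches = [layers.Conv2D(filters, k, padding='same', activation='relu')(x)
                    for k in (3, 5, 7)]                  # multi-scale kernels
        return layers.Maximum()(branches)                # competition across scales

    inp = layers.Input(shape=(None, None, 1))            # low-resolution input
    x = competitive_multiscale_block(inp)
    x = competitive_multiscale_block(x)
    out = layers.Conv2D(1, 3, padding='same')(x)         # reconstructed image
    model = tf.keras.Model(inp, out)
    model.compile(optimizer='adam', loss='mse')
    ```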

  19. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
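
    For comparison purposes only, a hedged sketch of one simple way to obtain a small, spatially constrained kernel: truncate the impulse response of the unconstrained Wiener filter to the desired support. The paper instead derives the kernel values that are truly MSE-optimal under that support constraint; that derivation is not reproduced here.

    ```python
    # Hedged sketch: a small restoration kernel obtained by truncating the Wiener
    # filter's impulse response (a simpler approximation, not the paper's optimum).
    import numpy as np

    def truncated_wiener_kernel(otf, nsr, support=5):
        """otf: system OTF on an NxN grid; nsr: noise-to-signal power ratio."""
        wiener = np.conj(otf) / (np.abs(otf) ** 2 + nsr)      # unconstrained filter
        impulse = np.real(np.fft.fftshift(np.fft.ifft2(wiener)))
        c = impulse.shape[0] // 2
        half = support // 2
        return impulse[c - half:c + half + 1, c - half:c + half + 1]

    # Example with a Gaussian-blur OTF
    n = 64
    FX, FY = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
    otf = np.exp(-(FX ** 2 + FY ** 2) / (2 * 0.05 ** 2))
    print(truncated_wiener_kernel(otf, nsr=0.01).shape)       # (5, 5)
    ```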

  20. Single image super-resolution based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

    We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses five convolution layers, whose kernel sizes include 5×5, 3×3 and 1×1. In our proposed network, we use residual learning and combine different sizes of convolution kernels at the same layer. The experimental results show that our proposed method performs better than existing methods in terms of reconstruction quality indices and human visual assessment on benchmark images.

  1. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
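
    A hedged toy sketch of the kernel-superposition idea: projection pixels are grouped by estimated object thickness, each group is convolved with its own scatter kernel, and the summed estimate is subtracted from the projection. The Gaussian kernels and the thickness-to-amplitude/width mappings are placeholders, not the paper's calibrated kernels.

    ```python
    # Hedged toy sketch of (adaptive) scatter kernel superposition.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def scatter_estimate(projection, thickness, n_groups=4):
        bins = np.linspace(thickness.min(), thickness.max(), n_groups + 1)
        group = np.clip(np.digitize(thickness, bins) - 1, 0, n_groups - 1)
        scatter = np.zeros_like(projection)
        for i in range(n_groups):
            mask = group == i
            t_mid = 0.5 * (bins[i] + bins[i + 1])
            amplitude = 0.10 + 0.02 * t_mid     # placeholder thickness scaling
            sigma = 5.0 + 0.5 * t_mid           # placeholder kernel width (pixels)
            scatter += gaussian_filter(projection * mask * amplitude, sigma)
        return scatter

    rng = np.random.default_rng(5)
    proj = rng.random((128, 128))
    thick = gaussian_filter(rng.random((128, 128)) * 30, 10)   # stand-in thickness map
    corrected = proj - scatter_estimate(proj, thick)
    ```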

  2. Joint and collaborative representation with local Volterra kernels convolution feature for face recognition

    NASA Astrophysics Data System (ADS)

    Feng, Guang; Li, Hengjian; Dong, Jiwen; Chen, Xi; Yang, Huiru

    2018-04-01

    In this paper, we proposed a joint and collaborative representation with local Volterra kernel convolution features (JCR-VK) for face recognition. Firstly, the candidate face images are divided into sub-blocks of equal size. Features are extracted from the blocks using two-dimensional Volterra kernel discriminant analysis, which better captures the discriminative information of different faces. Next, the proposed joint and collaborative representation is employed to optimize and classify the local Volterra kernel features individually. JCR-VK is very efficient because its implementation depends only on matrix multiplication. Finally, recognition is completed by using the majority voting principle. Extensive experiments on the Extended Yale B and AR face databases are conducted, and the results show that the proposed approach can outperform other recently presented similar dictionary algorithms in recognition accuracy.

  3. Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.

    NASA Astrophysics Data System (ADS)

    Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.

    2016-12-01

    Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlation of seismic signals recorded at a pair of locations. In the case of ambient noise sources, the convergence towards the surface wave Green's functions is obtained with the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources gives more possibilities when it comes to interferometry. The use of controlled sources makes it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is the same in both the X and Y directions (30 m), an arrangement known as "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from the dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher mode from the fundamental mode for surface waves. Potential application to surface wave cancellation is also envisaged.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiner, S.; Paschal, C.B.; Galloway, R.L.

    Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered around how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage to the nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections in general will be different from that contained in the original 3D data volume. 27 refs., 16 figs., 2 tabs.
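
    A hedged sketch of ray-traced MIP with different interpolation kernels, using scipy's map_coordinates with order 0 (nearest neighbour), 1 (linear) or 3 (cubic spline, used here only as a stand-in for cubic convolution); the geometry and test volume are illustrative.

    ```python
    # Hedged sketch: oblique-ray MIP with selectable interpolation order.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def oblique_mip(volume, angle_deg=10.0, order=1, n_samples=None):
        nz, ny, nx = volume.shape
        n_samples = n_samples or nz
        t = np.linspace(0, nz - 1, n_samples)                # depth along each ray
        yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing='ij')
        shift = np.tan(np.radians(angle_deg))
        z = t[:, None, None] * np.ones((n_samples, ny, nx))
        y = yy[None] + shift * t[:, None, None]              # tilted ray paths
        x = xx[None] * np.ones((n_samples, ny, nx))
        samples = map_coordinates(volume, [z, y, x], order=order, mode='nearest')
        return samples.max(axis=0)                           # maximum along each ray

    vol = np.zeros((64, 64, 64)); vol[20:44, 30:34, 30:34] = 1.0   # toy "vessel"
    mip_nn = oblique_mip(vol, order=0)    # nearest neighbour
    mip_lin = oblique_mip(vol, order=1)   # linear interpolation
    ```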

  5. Data-based diffraction kernels for surface waves from convolution and correlation processes through active seismic interferometry

    NASA Astrophysics Data System (ADS)

    Chmiel, Malgorzata; Roux, Philippe; Herrmann, Philippe; Rondeleux, Baptiste; Wathelet, Marc

    2018-05-01

    We investigated the construction of diffraction kernels for surface waves using two-point convolution and/or correlation from land active seismic data recorded in the context of exploration geophysics. The high density of controlled sources and receivers, combined with the application of the reciprocity principle, allows us to retrieve two-dimensional phase-oscillation diffraction kernels (DKs) of surface waves between any two source or receiver points in the medium at each frequency (up to 15 Hz, at least). These DKs are purely data-based as no model calculations and no synthetic data are needed. They naturally emerge from the interference patterns of the recorded wavefields projected on the dense array of sources and/or receivers. The DKs are used to obtain multi-mode dispersion relations of Rayleigh waves, from which near-surface shear velocity can be extracted. Using convolution versus correlation with a grid of active sources is an important step in understanding the physics of the retrieval of surface wave Green's functions. This provides the foundation for future studies based on noise sources or active sources with a sparse spatial distribution.

  6. An energy efficient and high speed architecture for convolution computing based on binary resistive random access memory

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process the image data stored in the RRAM arrays. The proposed image-storage architecture shows better speed and device-consumption efficiency than the previous kernel-storage architecture. Further, we improve the architecture for high-accuracy and low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel-storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) a speed boost of more than 67 times; and 3) 71.4% energy saving.

  7. Deep multi-scale convolutional neural network for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

    In this paper, we proposed a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of hyperspectral images. We design a deep neural network with a multi-scale convolution layer that contains three different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and contributes to a slight improvement in classification accuracy. In addition, techniques from deep learning such as the ReLU activation are utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.

  8. Dose calculation algorithm of fast fine-heterogeneity correction for heavy charged particle radiotherapy.

    PubMed

    Kanematsu, Nobuyuki

    2011-04-01

    This work addresses computing techniques for dose calculations in treatment planning with proton and ion beams, based on an efficient kernel-convolution method referred to as grid-dose spreading (GDS) and an accurate heterogeneity-correction method referred to as Gaussian beam splitting. The original GDS algorithm suffered from distortion of the dose distribution for beams tilted with respect to the dose-grid axes. Use of intermediate grids normal to the beam field has solved the beam-tilting distortion. The interplay of arrangement between beams and grids was found to be another intrinsic source of artifact. Inclusion of rectangular-kernel convolution in beam transport, to share the beam contribution among the nearest grids in a regulatory manner, has solved the interplay problem. This algorithmic framework was applied to a tilted proton pencil beam and a broad carbon-ion beam. In these cases, while the elementary pencil beams individually split into several tens, the calculation time increased only by several times with the GDS algorithm. The GDS and beam-splitting methods will complementarily enable accurate and efficient dose calculations for radiotherapy with protons and ions. Copyright © 2010 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  9. Signal dependence of inter-pixel capacitance in hybridized HgCdTe H2RG arrays for use in James Webb space telescope's NIRcam

    NASA Astrophysics Data System (ADS)

    Donlon, Kevan; Ninkov, Zoran; Baum, Stefi

    2016-08-01

    Interpixel capacitance (IPC) is a deterministic electronic coupling by which signal generated in one pixel is measured in neighboring pixels. Examination of dark frames from test NIRcam arrays corroborates earlier results and simulations illustrating a signal-dependent coupling. When the signal on an individual pixel is larger, the fractional coupling to nearest neighbors is smaller than when the signal is lower. Frames from test arrays indicate a drop in average coupling from approximately 1.0% at low signals down to approximately 0.65% at high signals, depending on the particular array in question. The photometric ramifications of this non-uniformity are not fully understood. This non-uniformity introduces a non-linearity in the current mathematical model for IPC coupling. IPC coupling has been mathematically formalized as convolution by a blur kernel. Signal dependence requires that the blur kernel be locally defined as a function of signal intensity. Through application of a signal-dependent coupling kernel, the IPC coupling can be modeled computationally. This method allows for simultaneous knowledge of the intrinsic parameters of the image scene, the result of applying a constant IPC, and the result of a signal-dependent IPC. In the age of sub-pixel precision in astronomy these effects must be properly understood and accounted for in order for the data to accurately represent the object of observation. Implementation of this method is done through python-scripted processing of images. The introduction of IPC into simulated frames is accomplished through convolution of the image with a blur kernel whose parameters are themselves locally defined functions of the image. These techniques can be used to enhance the data processing pipeline for NIRcam.
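
    A hedged sketch of a signal-dependent nearest-neighbour IPC model: each pixel donates a fraction alpha of its signal to each of its four neighbours, with alpha decreasing with the pixel's own signal as a simple linear stand-in for the measured trend from roughly 1.0% at low signal down to roughly 0.65% at high signal.

    ```python
    # Hedged sketch of signal-dependent IPC applied to a simulated frame.
    import numpy as np

    def apply_signal_dependent_ipc(image, alpha_lo=0.010, alpha_hi=0.0065):
        s = (image - image.min()) / max(np.ptp(image), 1e-12)   # normalised signal
        alpha = alpha_lo + (alpha_hi - alpha_lo) * s             # per-pixel coupling
        donated = alpha * image
        out = image - 4.0 * donated                              # charge is conserved
        for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
            out += np.roll(donated, shift, axis=axis)            # neighbour pick-up
        return out                                               # (periodic edges for simplicity)

    frame = np.random.default_rng(6).poisson(5000, size=(128, 128)).astype(float)
    frame_with_ipc = apply_signal_dependent_ipc(frame)
    ```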

  10. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    PubMed

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth image was generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained to keep the quality of the output image close to that of the ground-truth image, starting from the input image without image processing. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality. However, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Our suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  11. Alexnet Feature Extraction and Multi-Kernel Learning for Objectoriented Classification

    NASA Astrophysics Data System (ADS)

    Ding, L.; Li, H.; Hu, C.; Zhang, W.; Wang, S.

    2018-04-01

    In view of the fact that deep convolutional neural networks have a strong ability for feature learning and feature expression, an exploratory study was carried out on feature extraction and classification for high-resolution remote sensing images. Taking a Google image with 0.3 m spatial resolution in the Ludian area of Yunnan Province as an example, image segmentation objects were taken as the basic units, and the pre-trained AlexNet deep convolutional neural network model was used for feature extraction. The spectral features, AlexNet features and GLCM texture features were then combined with multi-kernel learning and an SVM classifier, and finally the classification results were compared and analyzed. The results show that the deep convolutional neural network can extract more accurate remote sensing image features and significantly improve the overall classification accuracy, providing a reference for earthquake disaster investigation and remote sensing disaster evaluation.
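
    A hedged scikit-learn sketch of the multi-kernel combination step: one RBF kernel per feature group, combined as a weighted sum and fed to an SVM with a precomputed kernel. The feature arrays, weights and gamma values are hypothetical placeholders.

    ```python
    # Hedged sketch: weighted sum of per-feature-group RBF kernels + precomputed-kernel SVM.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    def combined_kernel(groups_a, groups_b, weights, gammas):
        return sum(w * rbf_kernel(a, b, gamma=g)
                   for (a, b), w, g in zip(zip(groups_a, groups_b), weights, gammas))

    rng = np.random.default_rng(7)
    n_train, n_test = 200, 50
    spectral, alexnet, texture = (rng.random((n_train, d)) for d in (4, 4096, 8))
    spectral_t, alexnet_t, texture_t = (rng.random((n_test, d)) for d in (4, 4096, 8))
    y_train = rng.integers(0, 3, n_train)

    weights, gammas = [0.3, 0.5, 0.2], [1.0, 0.01, 1.0]      # hypothetical values
    train_groups = [spectral, alexnet, texture]
    test_groups = [spectral_t, alexnet_t, texture_t]

    K_train = combined_kernel(train_groups, train_groups, weights, gammas)
    K_test = combined_kernel(test_groups, train_groups, weights, gammas)

    clf = SVC(kernel='precomputed').fit(K_train, y_train)
    pred = clf.predict(K_test)
    ```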

  12. Adaptive wiener image restoration kernel

    DOEpatents

    Yuan, Ding [Henderson, NV

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing imaging system Optical Transfer Function, and the Fourier Transformations of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image using a Wiener restoration kernel.
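
    A hedged sketch of the basic Wiener restoration step the patent builds on: form the Wiener filter from the system OTF and the noise and image power spectra, take its inverse Fourier transform as a spatial restoration kernel, and restore by spatial convolution. The adaptive aspects of the patented method are not reproduced.

    ```python
    # Hedged sketch of Wiener-kernel construction and restoration by convolution.
    import numpy as np
    from scipy.signal import fftconvolve

    def wiener_restoration_kernel(otf, noise_ps, image_ps):
        wiener = np.conj(otf) / (np.abs(otf) ** 2 + noise_ps / image_ps)
        return np.real(np.fft.fftshift(np.fft.ifft2(wiener)))

    def restore(image, kernel):
        return fftconvolve(image, kernel, mode='same')     # spatial convolution

    # Toy example: Gaussian OTF, flat noise and image power spectra
    n = 128
    FX, FY = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
    otf = np.exp(-(FX ** 2 + FY ** 2) / (2 * 0.08 ** 2))
    kernel = wiener_restoration_kernel(otf, noise_ps=0.01, image_ps=1.0)
    restored = restore(np.random.default_rng(8).random((n, n)), kernel)
    ```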

  13. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains of up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. A software package to compute these kernels is available at https://github.com/aboucaud/pypher
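
    A hedged numpy sketch of the core PSF-matching idea, a regularised Wiener division of the target PSF's transform by the source PSF's; the actual pypher tool additionally handles pixel-scale matching, rotation and windowed regularisation, which are omitted here.

    ```python
    # Hedged sketch of a regularised Wiener PSF-matching kernel.
    import numpy as np

    def psf_matching_kernel(psf_source, psf_target, reg=1e-4):
        """Both PSFs on the same centred grid; returns a kernel such that
        psf_source convolved with the kernel approximates psf_target."""
        otf_s = np.fft.fft2(np.fft.ifftshift(psf_source))
        otf_t = np.fft.fft2(np.fft.ifftshift(psf_target))
        wiener = np.conj(otf_s) / (np.abs(otf_s) ** 2 + reg)   # regularised division
        return np.real(np.fft.fftshift(np.fft.ifft2(otf_t * wiener)))

    def gaussian_psf(n, sigma):
        y, x = np.mgrid[:n, :n] - n // 2
        g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
        return g / g.sum()

    kernel = psf_matching_kernel(gaussian_psf(64, 2.0), gaussian_psf(64, 4.0))
    ```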

  14. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507

  15. Laser Welding Process Monitoring Systems: Advanced Signal Analysis for Quality Assurance

    NASA Astrophysics Data System (ADS)

    D'Angelo, Giuseppe

    Laser material processing is widely used in industry today. Laser welding in particular has become one of the key technologies, e.g., for the automotive sector. This is due to the improvement and development of new laser sources and the increasing knowledge gained from countless scientific research projects. Nevertheless, it is still not possible to use the full potential of this technology. Therefore, the introduction and application of quality-assuring systems is required. For a long time, the statement "the best sensor is no sensor" was often heard. Today, a change of paradigm can be observed. On the one hand, ISO 9000 and other regulations enforced by law have led to the understanding that quality monitoring is an essential tool in modern manufacturing and necessary in order to keep production results within deterministic boundaries. On the other hand, rising quality requirements not only set higher and higher requirements for the process technology but also demand quality-assurance measures which ensure the reliable recognition of process faults. As a result, there is a need for reliable online detection and correction of welding faults by means of in-process monitoring. The chapter describes an advanced signal-analysis technique to extract information from signals detected by optical sensors during the laser welding process. The technique is based on the method of reassignment, which was first applied to the spectrogram by Kodera, Gendrin and de Villedary [22, 23] and later generalized to any bilinear time-frequency representation by Auger and Flandrin [24]. Key to the method is a nonlinear convolution where the value of the convolution is not placed at the center of the convolution kernel but rather reassigned to the center of mass of the function within the kernel. The resulting reassigned representation yields significantly improved component localization. We compare the proposed time-frequency distributions by analyzing signals detected during the laser welding of tailored blanks, demonstrating the advantages of the reassigned representation and the practical applicability of the proposed method.

  16. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    PubMed

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always provides 1.0 for self-similarity, and can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several magnitudes faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
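
    A hedged sketch of the LZW building block only: a single parsing pass extracts the variable-length code words of a sequence, and a simple normalised count of shared code words is shown as an illustrative similarity. This is not the paper's exact LZW-Kernel definition.

    ```python
    # Hedged sketch: LZW-style code-word extraction plus an illustrative similarity.
    def lzw_code_words(seq: str) -> set:
        """Single-pass LZW-style parsing: return the code words (phrases) built."""
        dictionary = {c for c in seq}          # initial alphabet of the sequence
        w = ""
        for c in seq:
            if w + c in dictionary:
                w += c                         # extend the current phrase
            else:
                dictionary.add(w + c)          # register a new variable-length code word
                w = c
        return dictionary

    def shared_code_word_similarity(a: str, b: str) -> float:
        pa, pb = lzw_code_words(a), lzw_code_words(b)
        return len(pa & pb) / (len(pa) * len(pb)) ** 0.5    # illustrative only

    print(shared_code_word_similarity("MKVLATLLLLGA", "MKVLATLILLGV"))
    ```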

  17. Applying machine-learning techniques to Twitter data for automatic hazard-event classification.

    NASA Astrophysics Data System (ADS)

    Filgueira, R.; Bee, E. J.; Diaz-Doce, D.; Poole, J., Sr.; Singh, A.

    2017-12-01

    The constant flow of information offered by tweets provides valuable information about all sorts of events at high temporal and spatial resolution. Over the past year, as part of the GeoSocial project, we have been analyzing geological hazards and phenomena, such as earthquakes, volcanic eruptions, landslides, floods, or the aurora, in real time by geo-locating tweets filtered by keywords in a web map. However, not all of the filtered tweets are related to hazard or phenomenon events. This work explores two classification techniques for automatic hazard-event categorization based on tweets about the "Aurora". First, tweets were filtered using aurora-related keywords, removing stop words and selecting the ones written in English. To classify the remaining tweets into "aurora-event" or "no-aurora-event" categories, we compared two state-of-the-art techniques: Support Vector Machine (SVM) and Deep Convolutional Neural Network (CNN) algorithms. Both approaches belong to the family of supervised learning algorithms, which make predictions based on a labelled training dataset. Therefore, we created a training dataset by tagging 1200 tweets with these two categories. We compared the performance of four different classifiers (Linear Regression, Logistic Regression, Multinomial Naïve Bayes and Stochastic Gradient Descent) provided by the Scikit-Learn library, using our training dataset to build the classifier. The results showed that Logistic Regression (LR) achieves the best accuracy (87%), so we selected the SVM-LR classifier to categorise a large collection of tweets using the "dispel4py" framework. Later, we developed a CNN classifier, where the first layer embeds words into low-dimensional vectors. The next layer performs convolutions over the embedded word vectors. Results from the convolutional layer are max-pooled into a long feature vector, which is classified using a softmax layer. The CNN's accuracy is lower (83%) than that of the SVM-LR classifier, since the algorithm needs a bigger training dataset to increase its accuracy. We used the TensorFlow framework to apply the CNN classifier to the same collection of tweets. In future work we will modify both classifiers to work with other geo-hazards, use larger training datasets, and apply them in real time.
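
    A minimal sketch of the supervised classification step with Scikit-Learn is shown below; the handful of labelled tweets and the TF-IDF features are placeholders standing in for the 1200-tweet training set and the project's actual feature pipeline.

```python
# Minimal sketch of the supervised tweet-classification step with scikit-learn;
# the labelled tweets below are placeholders for the 1200-tweet training set.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

tweets = ["amazing aurora over tromso tonight",
          "aurora borealis visible from the cabin",
          "new aurora wallpaper for my phone",
          "listening to the band aurora on repeat"]
labels = ["aurora-event", "aurora-event", "no-aurora-event", "no-aurora-event"]

clf = make_pipeline(
    TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
# Cross-validated accuracy on the toy data (the paper reports 87% for LR).
print(cross_val_score(clf, tweets, labels, cv=2).mean())

clf.fit(tweets, labels)
print(clf.predict(["stunning aurora display over iceland right now"]))
```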

  18. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function involved in this convolution integral is performed analytically using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage at no additional cost, compared with the numerical approach that uses the fast Fourier transform to transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.

  19. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacques, Robert; Wong, John; Taylor, Russell

    Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by approximately 24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³ times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine in order to provide a substantial performance gain over CPU based implementations. Real-time dose computation is feasible with the accuracy levels of the superposition/convolution algorithm.

  20. Splenomegaly Segmentation using Global Convolutional Kernels and Conditional Generative Adversarial Networks

    PubMed Central

    Huo, Yuankai; Xu, Zhoubing; Bao, Shunxing; Bermudez, Camilo; Plassard, Andrew J.; Liu, Jiaqi; Yao, Yuang; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.

    2018-01-01

    Spleen volume estimation using automated image segmentation techniques may be used to detect splenomegaly (an abnormally enlarged spleen) on Magnetic Resonance Imaging (MRI) scans. In recent years, Deep Convolutional Neural Network (DCNN) segmentation methods have demonstrated advantages for abdominal organ segmentation. However, variations in both size and shape of the spleen on MRI images may result in large false positive and false negative labeling when deploying DCNN based methods. In this paper, we propose the Splenomegaly Segmentation Network (SSNet) to address spatial variations when segmenting extraordinarily large spleens. SSNet was designed based on the framework of image-to-image conditional generative adversarial networks (cGAN). Specifically, the Global Convolutional Network (GCN) was used as the generator to reduce false negatives, while the Markovian discriminator (PatchGAN) was used to alleviate false positives. A cohort of clinically acquired 3D MRI scans (both T1 weighted and T2 weighted) from patients with splenomegaly was used to train and test the networks. The experimental results demonstrated a mean Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 when using SSNet on independently tested MRI volumes of patients with splenomegaly.

  1. Exact calculation of the time convolutionless master equation generator: Application to the nonequilibrium resonant level model

    NASA Astrophysics Data System (ADS)

    Kidon, Lyran; Wilner, Eli Y.; Rabani, Eran

    2015-12-01

    The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on the Nakajima-Zwanzig-Mori time-convolution (TC) formulation and the other on the Tokuyama-Mori time-convolutionless (TCL) formulation, provide a starting point to describe the time-evolution of the reduced density matrix. A key step in both approaches is to obtain the so-called "memory kernel" or "generator," going beyond second- or fourth-order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform; thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green's function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.

  2. A Frequency-Domain Implementation of a Sliding-Window Traffic Sign Detector for Large Scale Panoramic Datasets

    NASA Astrophysics Data System (ADS)

    Creusen, I. M.; Hazelhoff, L.; De With, P. H. N.

    2013-10-01

    In large-scale automatic traffic sign surveying systems, the primary computational effort is concentrated at the traffic sign detection stage. This paper focuses on reducing the computational load of the sliding-window object detection algorithm, which is employed for traffic sign detection. Sliding-window object detectors often use a linear SVM to classify the features in a window. In this case, the classification can be seen as a convolution of the feature maps with the SVM kernel. It is well known that convolution can be implemented efficiently in the frequency domain for kernels larger than a certain size. We show that by careful reordering of sliding-window operations, most of the frequency-domain transformations can be eliminated, leading to a substantial increase in efficiency. Additionally, we suggest using the overlap-add method to keep the memory use within reasonable bounds. This allows us to keep all the transformed kernels in memory, thereby eliminating even more domain transformations, and allows all scales in a multiscale pyramid to be processed using the same set of transformed kernels. For a typical sliding-window implementation, we have found that the detector execution performance improves by a factor of 5.3. As a bonus, many of the detector improvements from the literature, e.g., chi-squared kernel approximations and sub-class splitting algorithms, can be applied more easily and at a lower performance penalty because of the improved scalability.
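
    The equivalence between sliding-window linear-SVM scoring and a frequency-domain convolution can be sketched as follows; the feature map, the SVM template and SciPy's overlap-add convolution are illustrative stand-ins for the detector's actual features and implementation.

```python
# Sketch: sliding-window linear-SVM scores computed as a frequency-domain
# convolution (overlap-add) instead of an explicit window loop. The feature
# map and SVM weights are random placeholders.
import numpy as np
from scipy.signal import oaconvolve

rng = np.random.default_rng(0)
feature_map = rng.standard_normal((480, 640))   # one feature channel of the image
svm_weights = rng.standard_normal((32, 32))     # linear SVM template for that channel

# Correlation == convolution with a flipped kernel; 'valid' keeps only windows
# that fit entirely inside the image, as a sliding-window detector would.
scores = oaconvolve(feature_map, svm_weights[::-1, ::-1], mode="valid")

# Brute-force check for one window position.
i, j = 100, 200
direct = np.sum(feature_map[i:i + 32, j:j + 32] * svm_weights)
assert np.allclose(scores[i, j], direct)
print(scores.shape)   # (449, 609)
```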

  3. Robust visual tracking based on deep convolutional neural networks and kernelized correlation filters

    NASA Astrophysics Data System (ADS)

    Yang, Hua; Zhong, Donghong; Liu, Chenyi; Song, Kaiyou; Yin, Zhouping

    2018-03-01

    Object tracking is still a challenging problem in computer vision, as it entails learning an effective model to account for appearance changes caused by occlusion, out-of-view motion, plane rotation, scale change, and background clutter. This paper proposes a robust visual tracking algorithm based on deep convolutional neural networks, called DCNNCT, to address these challenges simultaneously. The proposed DCNNCT algorithm utilizes a DCNN to extract the image features of the tracked target, and the full range of information from each convolutional layer is used to express these features. Subsequently, the kernelized correlation filters (CFs) in each convolutional layer are adaptively learned, and their correlation response maps are combined to estimate the location of the tracked target. To handle tracking failure, an online random ferns classifier is employed to redetect the tracked target, and a dual-threshold scheme is used to obtain the final target location by comparing the tracking result with the detection result. Finally, the change in scale of the target is determined by building scale pyramids and training a CF. Extensive experiments demonstrate that the proposed algorithm is effective at tracking, especially when evaluated using the overlap rate index. The DCNNCT algorithm is also highly competitive in terms of robustness with respect to state-of-the-art trackers in various challenging scenarios.

  4. Magnetic field influences on the lateral dose response functions of photon-beam detectors: MC study of wall-less water-filled detectors with various densities.

    PubMed

    Looe, Hui Khee; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn

    2017-06-21

    The distortion of detector reading profiles across photon beams in the presence of magnetic fields is a developing subject of clinical photon-beam dosimetry. The underlying modification by the Lorentz force of a detector's lateral dose response function (the convolution kernel transforming the true cross-beam dose profile in water into the detector reading profile) is here studied for the first time. The three basic convolution kernels, the photon fluence response function, the dose deposition kernel, and the lateral dose response function, of wall-less cylindrical detectors filled with water of low, normal and enhanced density are shown by Monte Carlo simulation to be distorted in the prevailing direction of the Lorentz force. The asymmetric shape changes of these convolution kernels in a water medium and in magnetic fields of up to 1.5 T are confined to the lower millimetre range, and they depend on the photon beam quality, the magnetic flux density and the detector's density. The impact of this distortion on detector reading profiles is demonstrated using a narrow photon beam profile. For clinical applications it appears favourable that the distortion of the lateral dose response function by the magnetic flux density, as far as secondary electron transport is concerned, vanishes in the case of water-equivalent detectors of normal water density. By means of secondary electron history backtracing, the spatial distribution of the photon interactions that give rise to signal-contributing secondary electrons, either directly or via scattered photons producing secondary electrons further downstream, and the lateral shift of this distribution due to the Lorentz force are elucidated. Electron history backtracing also serves to illustrate the correct treatment of the influences of the Lorentz force in the EGSnrc Monte Carlo code applied in this study.

  5. Deep Recurrent Neural Networks for Human Activity Recognition

    PubMed Central

    Murad, Abdulmajid

    2017-01-01

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs. PMID:29113103

  6. Deep Recurrent Neural Networks for Human Activity Recognition.

    PubMed

    Murad, Abdulmajid; Pyun, Jae-Young

    2017-11-06

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.
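
    A minimal sketch of an LSTM-based recognition model in tf.keras is given below; the window length, number of sensor channels and activity classes are assumed illustrative values, not the configurations evaluated in the paper.

```python
# Minimal sketch of an LSTM-based activity-recognition model in tf.keras; the
# window length (128 samples), sensor channels (9) and activity classes (6)
# are illustrative values, not the paper's exact configuration.
import numpy as np
import tensorflow as tf

timesteps, channels, n_classes = 128, 9, 6

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True,
                         input_shape=(timesteps, channels)),  # stacked (deep) recurrent layers
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in for body-worn sensor windows and activity labels.
x = np.random.randn(32, timesteps, channels).astype("float32")
y = np.random.randint(0, n_classes, size=32)
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]).shape)   # (1, 6)
```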

  7. Acceleration of Monte Carlo SPECT simulation using convolution-based forced detection

    NASA Astrophysics Data System (ADS)

    de Jong, H. W. A. M.; Slijpen, E. T. P.; Beekman, F. J.

    2001-02-01

    Monte Carlo (MC) simulation is an established tool to calculate photon transport through tissue in Emission Computed Tomography (ECT). Since the first appearance of MC, a large variety of variance reduction techniques (VRTs) have been introduced to speed up these notoriously slow simulations. One example of a very effective and established VRT is known as forced detection (FD). In standard FD, the path from the photon's scatter position to the camera is chosen stochastically from the appropriate probability density function (PDF), modeling the distance-dependent detector response. In order to speed up MC, the authors propose a convolution-based FD (CFD) which involves replacing the sampling of the PDF by a convolution with a kernel that depends on the position of the scatter event. The authors validated CFD for parallel-hole Single Photon Emission Computed Tomography (SPECT) using a digital thorax phantom. Comparison of projections estimated with CFD and standard FD shows that both estimates converge to practically identical projections (maximum bias 0.9% of the peak projection value), despite the slightly different photon paths used in CFD and standard FD. Projections generated with CFD converge, however, to a noise-free projection up to one or two orders of magnitude faster, which is extremely useful in many applications such as model-based image reconstruction.

  8. Small-kernel, constrained least-squares restoration of sampled image data

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.

  9. No-reference image quality assessment based on statistics of convolution feature maps

    NASA Astrophysics Data System (ADS)

    Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo

    2018-04-01

    We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that Natural Scene Statistics (NSS) features computed on convolutional feature maps are significantly sensitive to the degree of distortion of an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer which operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer are effective in describing the type and degree of distortion an image has suffered. Finally, a Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict a subjective quality score for a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is promising and competitive with state-of-the-art NR-IQA methods.

  10. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    NASA Technical Reports Server (NTRS)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for the design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.

  11. SU-E-T-510: Calculation of High Resolution and Material-Specific Photon Energy Deposition Kernels.

    PubMed

    Huang, J; Childress, N; Kry, S

    2012-06-01

    To calculate photon energy deposition kernels (EDKs) used for convolution/superposition dose calculation at a higher resolution than the original Mackie et al. 1988 kernels, and to calculate material-specific kernels that describe how energy is transported and deposited by secondary particles when the incident photon interacts in a material other than water. The high-resolution EDKs for various incident photon energies were generated using the EGSnrc user-code EDKnrc, which forces incident photons to interact at the center of a 60 cm radius sphere of water. The simulation geometry is essentially the same as in the original Mackie calculation but with a greater number of scoring voxels (48 radial, 144 angular bins). For the material-specific EDKs, incident photons were forced to interact at the center of a 1 mm radius sphere of material (lung, cortical bone, silver, or titanium) surrounded by a 60 cm radius water sphere, using the original scoring voxel geometry implemented by Mackie et al. 1988 (24 radial, 48 angular bins). Our Monte Carlo-calculated high-resolution EDKs showed excellent agreement with the Mackie kernels, with our kernels providing more information about energy deposition close to the interaction site. Furthermore, our EDKs resulted in smoother dose deposition functions due to the finer resolution and greater number of simulation histories. The material-specific EDK results show that the angular distribution of energy deposition is different for incident photons interacting in different materials. Calculated from the angular dose distribution for 300 keV incident photons, the expected polar angle for dose deposition is 28.6° for water, 33.3° for lung, 36.0° for cortical bone, 44.6° for titanium, and 58.1° for silver, showing a dependence on the material in which the primary photon interacts. These high-resolution and material-specific EDKs have implications for convolution/superposition dose calculations in heterogeneous patient geometries, especially at material interfaces. © 2012 American Association of Physicists in Medicine.

  12. Volterra series truncation and kernel estimation of nonlinear systems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Billings, S. A.

    2017-02-01

    The Volterra series model is a direct generalisation of the linear convolution integral and is capable of displaying the intrinsic features of a nonlinear system in a simple and easy-to-apply way. Nonlinear system analysis using Volterra series is normally based on the analysis of its frequency-domain kernels and a truncated description. But the estimation of the Volterra kernels and the truncation of the Volterra series are coupled with each other. In this paper, a novel complex-valued orthogonal least squares algorithm is developed. The new algorithm provides a powerful tool to determine which terms should be included in the Volterra series expansion and to estimate the kernels, and thus solves the two problems together. The estimated results are compared with those determined using the analytical expressions of the kernels to validate the method. To further evaluate the effectiveness of the method, the physical parameters of the system are also extracted from the measured kernels. Simulation studies demonstrate that the new approach not only can truncate the Volterra series expansion and estimate the kernels of a weakly nonlinear system, but also can indicate the applicability of the Volterra series analysis in the case of a severely nonlinear system.
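
    The structure of a truncated second-order Volterra model, once the kernels are known, can be sketched as a pair of convolution sums; the kernels below are arbitrary illustrative values and the estimation algorithm itself is not reproduced.

```python
# Sketch: output of a Volterra series truncated at second order, given
# discrete time-domain kernels h1 and h2 (kernel estimation is not shown;
# the kernels here are arbitrary illustrative values).
import numpy as np

def volterra2_output(u, h1, h2):
    """y[n] = sum_m h1[m] u[n-m] + sum_{m1,m2} h2[m1,m2] u[n-m1] u[n-m2]."""
    N, M = len(u), len(h1)
    y = np.zeros(N)
    for n in range(N):
        # first-order (linear convolution) term
        for m in range(min(M, n + 1)):
            y[n] += h1[m] * u[n - m]
        # second-order term
        for m1 in range(min(M, n + 1)):
            for m2 in range(min(M, n + 1)):
                y[n] += h2[m1, m2] * u[n - m1] * u[n - m2]
    return y

M = 4
h1 = 0.5 ** np.arange(M)            # decaying linear kernel
h2 = 0.05 * np.outer(h1, h1)        # weak quadratic kernel
u = np.sin(0.2 * np.arange(50))
print(volterra2_output(u, h1, h2)[:5])
```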

  13. Semiclassical analysis for pseudo-relativistic Hartree equations

    NASA Astrophysics Data System (ADS)

    Cingolani, Silvia; Secchi, Simone

    2015-06-01

    In this paper we study the semiclassical limit for the pseudo-relativistic Hartree equation $\sqrt{-\varepsilon^2 \Delta + m^2}\,u + V u = (I_\alpha * |u|^{p}) |u|^{p-2}u$ in $\mathbb{R}^N$, where $m>0$, $2 \leq p < \frac{2N}{N-1}$, $V \colon \mathbb{R}^N \to \mathbb{R}$ is an external scalar potential, $I_\alpha(x) = \frac{c_{N,\alpha}}{|x|^{N-\alpha}}$ is a convolution kernel, $c_{N,\alpha}$ is a positive constant and $(N-1)p-N<\alpha$

  14. Classification of mass and normal breast tissue: A convolution neural network classifier with spatial domain and texture images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahiner, B.; Chan, H.P.; Petrick, N.

    1996-10-01

    The authors investigated the classification of regions of interest (ROIs) on mammograms as either mass or normal tissue using a convolution neural network (CNN). A CNN is a back-propagation neural network with two-dimensional (2-D) weight kernels that operate on images. A generalized, fast and stable implementation of the CNN was developed. The input images to the CNN were obtained from the ROIs using two techniques. The first technique employed averaging and subsampling. The second technique employed texture feature extraction methods applied to small subregions inside the ROI. Features computed over different subregions were arranged as texture images, which were subsequently used as CNN inputs. The effects of CNN architecture and texture feature parameters on classification accuracy were studied. Receiver operating characteristic (ROC) methodology was used to evaluate the classification accuracy. A data set consisting of 168 ROIs containing biopsy-proven masses and 504 ROIs containing normal breast tissue was extracted from 168 mammograms by radiologists experienced in mammography. This data set was used for training and testing the CNN. With the best combination of CNN architecture and texture feature parameters, the area under the test ROC curve reached 0.87, which corresponded to a true-positive fraction of 90% at a false-positive fraction of 31%. The results demonstrate the feasibility of using a CNN for classification of masses and normal tissue on mammograms.

  15. Measurement of Flaw Size From Thermographic Data

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.

    2015-01-01

    Simple methods for reducing the pulsed thermographic responses of delaminations tend to overestimate the size of the delamination, since the heat diffuses in the plane parallel to the surface. The result is a temperature profile over the delamination which is larger than the delamination size. A variational approach is presented for reducing the thermographic data to produce an estimated size for a flaw that is much closer to the true size of the delamination. The method is based on an estimate for the thermal response that is a convolution of a Gaussian kernel with the shape of the flaw. The size is determined from both the temporal and spatial thermal response of the exterior surface above the delamination and constraints on the length of the contour surrounding the delamination. Examples of the application of the technique to simulation and experimental data are presented to investigate the limitations of the technique.
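
    The forward model underlying the method, a Gaussian kernel convolved with the flaw shape, can be sketched as follows; the flaw geometry, blur width and threshold are illustrative and simply show why thresholding the blurred response overestimates the flaw size.

```python
# Sketch of the forward model: the surface temperature signature is modelled as
# a Gaussian kernel convolved with the flaw shape, which is why a simple
# threshold on the temperature image overestimates the flaw size. All values
# are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

# Binary map of a 20 x 20 pixel square delamination inside a 128 x 128 field.
flaw = np.zeros((128, 128))
flaw[54:74, 54:74] = 1.0

# In-plane heat diffusion approximated by a Gaussian blur (sigma in pixels).
temperature = gaussian_filter(flaw, sigma=6.0)

# Thresholding the blurred response overestimates the flaw area.
estimated_area = np.count_nonzero(temperature > 0.1 * temperature.max())
print(estimated_area, "pixels vs true", np.count_nonzero(flaw), "pixels")
```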

  16. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.

    PubMed

    Pereira, Sergio; Pinto, Adriano; Alves, Victor; Silva, Carlos A

    2016-05-01

    Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage to improve the quality of life of oncological patients. Magnetic resonance imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in clinical practice. So, automatic and reliable segmentation methods are required; however, the large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3×3 kernels. The use of small kernels allows designing a deeper architecture, besides having a positive effect against overfitting, given the smaller number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which, though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated in the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), obtaining simultaneously the first position for the complete, core, and enhancing regions in the Dice Similarity Coefficient metric (0.88, 0.83, 0.77) for the Challenge data set. It also obtained the overall first position in the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining second place, with Dice Similarity Coefficient metrics of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively.
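
    The design choice of stacking small 3×3 kernels can be sketched in a few lines of tf.keras; the layer widths, patch size and class count below are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch of stacking small 3x3 kernels in tf.keras: two 3x3 layers have
# the receptive field of one 5x5 layer but fewer weights and more nonlinearity.
# This illustrates the design choice only, not the authors' exact network.
import tensorflow as tf

patch = tf.keras.Input(shape=(33, 33, 4))          # 4 MRI modalities, e.g. BRATS
x = tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same")(patch)
x = tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(128, 3, activation="relu", padding="same")(x)
x = tf.keras.layers.Flatten()(x)
out = tf.keras.layers.Dense(5, activation="softmax")(x)  # 5 tissue classes

model = tf.keras.Model(patch, out)
# Weight comparison: two 3x3 conv layers vs a single 5x5 layer (same channels).
print(2 * 3 * 3 * 64 * 64, "vs", 5 * 5 * 64 * 64)
```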

  17. Oscillatory singular integrals and harmonic analysis on nilpotent groups

    PubMed Central

    Ricci, F.; Stein, E. M.

    1986-01-01

    Several related classes of operators on nilpotent Lie groups are considered. These operators involve the following features: (i) oscillatory factors that are exponentials of imaginary polynomials, (ii) convolutions with singular kernels supported on lower-dimensional submanifolds, (iii) validity in the general context not requiring the existence of dilations that are automorphisms. PMID:16593640

  18. Some Issues Related to Integrating Active Flow Control With Flight Control

    NASA Technical Reports Server (NTRS)

    Williams, David; Colonius, Tim; Tadmor, Gilead; Rowley, Clancy

    2010-01-01

    Time-varying control of the lift coefficient C_L is necessary for integrating active flow control (AFC) with flight control; biasing allows for positive and negative changes in lift. Time delays associated with actuation are long (approximately 5.8 c/U) and must be included in the controllers. Convolution of the input signal with a single-pulse kernel gives a reasonable prediction of the lift response.

  19. Convolutional virtual electric field for image segmentation using active contours.

    PubMed

    Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden

    2014-01-01

    Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computational load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as an enlarged capture range, u-shape concavity convergence, subject contour convergence and initialization insensitivity, but also some other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression while simultaneously preserving weak edges. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images.
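
    Since the external force is obtained by convolving the edge map with a fixed kernel, it can be computed with the FFT as sketched below; the 1/r²-style vector kernel is an illustrative stand-in, not the modified-distance kernel used by CONVEF.

```python
# Sketch: computing a virtual-electric-field style external force as an FFT
# convolution of the image edge map with a vector kernel. The regularized
# 1/r^2-type kernel below is illustrative; CONVEF uses a modified distance.
import numpy as np

def fft_convolve(a, b):
    """Circular 2-D convolution via FFT (same shape as the inputs)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

img = np.zeros((128, 128))
img[40:90, 40:90] = 1.0                       # toy object
gy, gx = np.gradient(img)
edge_map = np.hypot(gx, gy)                   # edge strength

# Vector kernel components k_x = x / r^3, k_y = y / r^3 (centred in wrap-around
# order, with a +1 regularization to avoid division by zero at the origin).
y, x = np.meshgrid(np.fft.fftfreq(128) * 128, np.fft.fftfreq(128) * 128,
                   indexing="ij")
r3 = (x**2 + y**2 + 1.0) ** 1.5
kx, ky = x / r3, y / r3

force_x = fft_convolve(edge_map, kx)
force_y = fft_convolve(edge_map, ky)
print(force_x.shape, force_y.shape)
```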

  20. Restoration of recto-verso colour documents using correlated component analysis

    NASA Astrophysics Data System (ADS)

    Tonazzini, Anna; Bedini, Luigi

    2013-12-01

    In this article, we consider the problem of removing see-through interference from pairs of recto-verso documents acquired either in grayscale or RGB modality. The see-through effect is a typical degradation of historical and archival documents or manuscripts, caused by transparency or seeping of ink from the reverse side of the page. We formulate the problem as one of separating two individual texts, overlapped in the recto and verso maps of the colour channels through a linear convolutional mixing operator, where the mixing coefficients are unknown, while the blur kernels are assumed known a priori or estimated off-line. We exploit statistical techniques of blind source separation to estimate both the unknown model parameters and the ideal, uncorrupted images of the two document sides. We show that recently proposed correlated component analysis techniques improve on the already satisfactory performance of independent component analysis techniques and colour decorrelation, even when the two texts are appreciably correlated.

  1. Higher-order kinetic expansion of quantum dissipative dynamics: mapping quantum networks to kinetic networks.

    PubMed

    Wu, Jianlan; Cao, Jianshu

    2013-07-28

    We apply a new formalism to derive the higher-order quantum kinetic expansion (QKE) for studying dissipative dynamics in a general quantum network coupled with an arbitrary thermal bath. The dynamics of the system population is described by a time-convoluted kinetic equation, where the time-nonlocal rate kernel is systematically expanded in orders of the off-diagonal elements of the system Hamiltonian. At second order, the rate kernel recovers the expression of the noninteracting-blip approximation method. The higher-order corrections in the rate kernel account for the effects of multi-site quantum coherence and bath relaxation. In a quantum harmonic bath, the rate kernels of different orders are derived analytically. As demonstrated by four examples, the higher-order QKE can reliably predict quantum dissipative dynamics, comparing well with the hierarchical equation approach. More importantly, the higher-order rate kernels can distinguish and quantify distinct nontrivial quantum coherent effects, such as long-range energy transfer from quantum tunneling and quantum interference arising from the phase accumulation of interactions.

  2. The Influence of Reconstruction Kernel on Bone Mineral and Strength Estimates Using Quantitative Computed Tomography and Finite Element Analysis.

    PubMed

    Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K

    2017-10-17

    Quantitative computed tomography has been posed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways fall on the hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%, p < 0.001) when compared with images reconstructed using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%, p < 0.001, and 18.2%, p < 0.001, respectively) when compared with the image reconstructed by the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  3. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; however, the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) the result remains a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, so the proposed scheme also has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
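
    The encoder-side measurement step can be sketched as follows; the kernel size, down-sampling factor and wrap-around boundary handling are assumptions for illustration rather than the paper's exact settings.

```python
# Sketch of the encoder-side measurement step: convolve the image with a local
# random binary (+/-1) kernel, then polyphase down-sample. Kernel size and
# down-sampling factor are illustrative.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)
image = rng.random((256, 256))                       # stand-in input image

kernel = rng.choice([-1.0, 1.0], size=(7, 7))        # local random binary kernel
prefiltered = convolve(image, kernel, mode="wrap")   # replaces the low-pass pre-filter

# Polyphase down-sampling by 2 in each direction keeps the measurements in the
# original spatial configuration, so the result is still a conventional image.
measurements = prefiltered[::2, ::2]
print(measurements.shape)   # (128, 128)
```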

  4. Quantification and classification of neuronal responses in kernel-smoothed peristimulus time histograms

    PubMed Central

    Fried, Itzhak; Koch, Christof

    2014-01-01

    Peristimulus time histograms are a widespread form of visualizing neuronal responses. Kernel convolution methods transform these histograms into a smooth, continuous probability density function. This provides an improved estimate of a neuron's actual response envelope. We here develop a classifier, called the h-coefficient, to determine whether time-locked fluctuations in the firing rate of a neuron should be classified as a response or as random noise. Unlike previous approaches, the h-coefficient takes advantage of the more precise response envelope estimation provided by the kernel convolution method. The h-coefficient quantizes the smoothed response envelope and calculates the probability of a response of a given shape occurring by chance. We tested the efficacy of the h-coefficient on a large data set of Monte Carlo simulated smoothed peristimulus time histograms with varying response amplitudes, response durations, trial numbers, and baseline firing rates. Across all these conditions, the h-coefficient significantly outperformed more classical classifiers, with a mean false alarm rate of 0.004 and a mean hit rate of 0.494. We also tested the h-coefficient's performance on a set of neuronal responses recorded in humans. The algorithm behind the h-coefficient provides various opportunities for further adaptation and the flexibility to target specific parameters in a given data set. Our findings confirm that the h-coefficient can provide a conservative and powerful tool for the analysis of peristimulus time histograms with great potential for future development. PMID:25475352
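
    The kernel-smoothing step that the h-coefficient builds on can be sketched as a convolution of a binned spike train with a Gaussian kernel; the firing rates, kernel width and bin size below are illustrative.

```python
# Sketch of the kernel-smoothing step: convolve a binned spike train with a
# Gaussian kernel to obtain a continuous firing-rate estimate (the response
# envelope that the h-coefficient then classifies). Parameters are illustrative.
import numpy as np

dt = 0.001                                   # 1 ms bins
t = np.arange(0.0, 1.0, dt)
rng = np.random.default_rng(7)
spikes = (rng.random(t.size) < 5 * dt).astype(float)   # ~5 Hz background firing
spikes[200:250] += (rng.random(50) < 60 * dt)          # transient response at 200 ms

sigma = 0.015                                # 15 ms Gaussian kernel
k_t = np.arange(-4 * sigma, 4 * sigma + dt, dt)
kernel = np.exp(-k_t**2 / (2 * sigma**2))
kernel /= kernel.sum() * dt                  # normalize so the output is in spikes/s

rate = np.convolve(spikes, kernel, mode="same")
print(rate.max(), "peak rate (spikes/s)")
```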

  5. Convolving optically addressed VLSI liquid crystal SLM

    NASA Astrophysics Data System (ADS)

    Jared, David A.; Stirk, Charles W.

    1994-03-01

    We designed, fabricated, and tested an optically addressed spatial light modulator (SLM) that performs a 3 × 3 kernel image convolution using ferroelectric liquid crystal on VLSI technology. The chip contains a 16 × 16 array of current-mirror-based convolvers with a fixed kernel for finding edges. The pixels are located on 75 micron centers, and the modulators are 20 microns on a side. The array successfully enhanced edges in illumination patterns. We developed a high-level simulation tool (CON) for analyzing the performance of convolving SLM designs. CON has a graphical interface and simulates SLM functions using SPICE-like device models. The user specifies the pixel function along with the device parameters and nonuniformities. We discovered through analysis, simulation and experiment that the operation of current-mirror-based convolver pixels is degraded at low light levels by the variation of transistor threshold voltages inherent to CMOS chips. To function acceptably, the test SLM required the input image to have a minimum irradiance of 10 µW/cm². The minimum required irradiance can be further reduced by adding a photodarlington near the photodetector or by increasing the size of the transistors used to calculate the convolution.

  6. Two projects in theoretical neuroscience: A convolution-based metric for neural membrane potentials and a combinatorial connectionist semantic network method

    NASA Astrophysics Data System (ADS)

    Evans, Garrett Nolan

    In this work, I present two projects that both contribute to the aim of discovering how intelligence manifests in the brain. The first project is a method for analyzing recorded neural signals, which takes the form of a convolution-based metric on neural membrane potential recordings. Relying only on integral and algebraic operations, the metric compares the timing and number of spikes within recordings as well as the recordings' subthreshold features: summarizing differences in these with a single "distance" between the recordings. Like van Rossum's (2001) metric for spike trains, the metric is based on a convolution operation that it performs on the input data. The kernel used for the convolution is carefully chosen such that it produces a desirable frequency space response and, unlike van Rossum's kernel, causes the metric to be first order both in differences between nearby spike times and in differences between same-time membrane potential values: an important trait. The second project is a combinatorial syntax method for connectionist semantic network encoding. Combinatorial syntax has been a point on which those who support a symbol-processing view of intelligent processing and those who favor a connectionist view have had difficulty seeing eye-to-eye. Symbol-processing theorists have persuasively argued that combinatorial syntax is necessary for certain intelligent mental operations, such as reasoning by analogy. Connectionists have focused on the versatility and adaptability offered by self-organizing networks of simple processing units. With this project, I show that there is a way to reconcile the two perspectives and to ascribe a combinatorial syntax to a connectionist network. The critical principle is to interpret nodes, or units, in the connectionist network as bound integrations of the interpretations for nodes that they share links with. Nodes need not correspond exactly to neurons and may correspond instead to distributed sets, or assemblies, of neurons.
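
    A rough sketch of a convolution-based distance between two membrane-potential traces is given below; the causal exponential kernel is a van Rossum-style stand-in rather than the carefully chosen kernel described in the thesis, and all parameters are illustrative.

```python
# Sketch of a convolution-based distance between two membrane-potential
# recordings: filter each trace with a smoothing kernel, then take the L2 norm
# of the difference. The exponential kernel is a van Rossum-style stand-in,
# not the kernel derived in the thesis.
import numpy as np

def convolution_metric(v1, v2, dt=1e-4, tau=5e-3):
    """Distance between two equally sampled voltage traces v1, v2."""
    k_t = np.arange(0.0, 5 * tau, dt)
    kernel = np.exp(-k_t / tau)                      # causal exponential kernel
    f1 = np.convolve(v1, kernel, mode="full") * dt
    f2 = np.convolve(v2, kernel, mode="full") * dt
    return np.sqrt(np.sum((f1 - f2) ** 2) * dt)

t = np.arange(0.0, 0.2, 1e-4)
v_a = -65 + 5 * np.exp(-((t - 0.05) / 0.002) ** 2)   # subthreshold bump at 50 ms
v_b = -65 + 5 * np.exp(-((t - 0.06) / 0.002) ** 2)   # same bump shifted by 10 ms
print(convolution_metric(v_a, v_b))
```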

  7. Signed-negabinary-arithmetic-based optical computing by use of a single liquid-crystal-display panel.

    PubMed

    Datta, Asit K; Munshi, Soumika

    2002-03-10

    Based on the negabinary number representation, parallel one-step arithmetic operations (that is, addition and subtraction), logical operations, and matrix-vector multiplication on data have been optically implemented by use of a two-dimensional spatial-encoding technique. For addition and subtraction, one of the operands in decimal form is converted into the unsigned negabinary form, whereas the other decimal number is represented in the signed negabinary form. The result of the operation is obtained in the mixed negabinary form and is converted back into decimal. Matrix-vector multiplication for unsigned negabinary numbers is achieved through the convolution technique. Both of the operands for logical operations are converted to their signed negabinary forms. All operations are implemented by use of a unique optical architecture. The use of a single liquid-crystal-display panel to spatially encode the input data, operational kernels, and decoding masks has simplified the architecture as well as reduced the cost and complexity.

  8. Exact calculation of the time convolutionless master equation generator: Application to the nonequilibrium resonant level model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kidon, Lyran; The Sackler Center for Computational Molecular and Materials Science, Tel Aviv University, Tel Aviv 69978; Wilner, Eli Y.

    2015-12-21

    The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on Nakajima–Zwanzig–Mori time-convolution (TC) and the other on the Tokuyama–Mori time-convolutionless (TCL) formulations provide a starting point to describe the time-evolution of the reduced density matrix. A key in both approaches is to obtain the so called “memory kernel” or “generator,” going beyond second or fourth order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform and thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green’s function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.

  9. Chinese character recognition based on Gabor feature extraction and CNN

    NASA Astrophysics Data System (ADS)

    Xiong, Yudian; Lu, Tongwei; Jiang, Yongyuan

    2018-03-01

    As an important application in the field of text line recognition and office automation, Chinese character recognition has become an important subject of pattern recognition. However, due to the large number of Chinese characters and the complexity of their structure, Chinese character recognition poses great difficulty. In order to solve this problem, this paper proposes a method for printed Chinese character recognition based on Gabor feature extraction and a Convolutional Neural Network (CNN). The main steps are preprocessing, feature extraction, and training/classification. First, the gray-scale Chinese character image is binarized and normalized to reduce the redundancy of the image data. Second, each image is convolved with Gabor filters of different orientations, and feature maps for eight orientations of the Chinese characters are extracted. Third, the Gabor feature maps and the original image are convolved with learned kernels, and the results of this convolution are the input to the pooling layer. Finally, the feature vector is used for classification and recognition. In addition, the generalization capacity of the network is improved by the dropout technique. The experimental results show that this method can effectively extract the characteristics of Chinese characters and recognize Chinese characters.
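
    The Gabor feature-extraction step can be sketched with OpenCV as below; the filter parameters, image size and the stand-in glyph are illustrative assumptions, not the paper's settings.

```python
# Sketch of the Gabor feature-extraction step with OpenCV: filter a normalized
# character image with Gabor kernels at eight orientations. Filter parameters
# and the stand-in glyph are illustrative, not the paper's exact settings.
import numpy as np
import cv2

img = np.zeros((64, 64), dtype=np.uint8)
cv2.putText(img, "A", (12, 52), cv2.FONT_HERSHEY_SIMPLEX, 2.0, 255, 2)  # stand-in glyph
img = img.astype(np.float32) / 255.0

orientations = [i * np.pi / 8 for i in range(8)]
feature_maps = []
for theta in orientations:
    kernel = cv2.getGaborKernel(ksize=(15, 15), sigma=3.0, theta=theta,
                                lambd=8.0, gamma=0.5, psi=0.0)
    feature_maps.append(cv2.filter2D(img, cv2.CV_32F, kernel))

# The eight orientation maps (plus the original image) feed the CNN input.
features = np.stack(feature_maps, axis=-1)
print(features.shape)   # (64, 64, 8)
```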

  10. Accelerated SPECT Monte Carlo Simulation Using Multiple Projection Sampling and Convolution-Based Forced Detection

    NASA Astrophysics Data System (ADS)

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2008-02-01

    Monte Carlo (MC) is a well-utilized tool for simulating photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model the physical processes of photon transport. As a consequence of this accuracy, it suffers from a relatively low detection efficiency and long computation time. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD with the exception that photons are detected at multiple detector locations, distributed with a distance-dependent blurring kernel. In order to further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. This way, it is possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result is a vastly improved simulation time, as much of the computational load of simulating photon transport through the object is incurred only once for all projection angles. The results of the proposed MP-CFD method agree well with the experimental data in measurements of the point spread function (PSF), producing a correlation coefficient (r²) of 0.99 compared to experimental data. The speed of MP-CFD is shown to be about 60 times faster than a regular forced detection MC program with similar results.

  11. Crowd counting via region based multi-channel convolution neural network

    NASA Astrophysics Data System (ADS)

    Cao, Xiaoguang; Gao, Siqi; Bai, Xiangzhi

    2017-11-01

    This paper proposes a novel region-based multi-channel convolutional neural network architecture for crowd counting. In order to effectively address the perspective distortion in crowd datasets with a great diversity of scales, this work combines a main channel and three branch channels. These channels extract both global and regional features, and the results are used to estimate the density map. Moreover, kernels with ladder-shaped sizes are designed across the branch channels, which generate adaptive regional features. In addition, the branch channels use relatively deep and shallow networks to achieve a more accurate detector. By using these strategies, the proposed architecture achieves state-of-the-art performance on the ShanghaiTech dataset and competitive performance on the UCF_CC_50 dataset.

  12. An improved robust blind motion de-blurring algorithm for remote sensing images

    NASA Astrophysics Data System (ADS)

    He, Yulong; Liu, Jin; Liang, Yonghui

    2016-10-01

    Shift-invariant motion blur can be modeled as the convolution of the true latent image with the blur kernel, plus additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm which proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, first, we adopt a multi-scale scheme to ensure that the edge map is constructed accurately; second, an effective salient edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization; in this step, we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on the TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to local image characteristics in order to preserve tiny details and eliminate noise and ringing artifacts. Synthetic remote sensing images are used to test the proposed algorithm, and the results demonstrate that it obtains an accurate blur kernel and achieves better de-blurring results.

  13. Evaluation of human exposure to single electromagnetic pulses of arbitrary shape.

    PubMed

    Jelínek, Lukás; Pekárek, Ludĕk

    2006-03-01

    The transient current density J(t) induced in the body of a person exposed to a single magnetic pulse of arbitrary shape, or to a magnetic jump, is filtered by a convolution integral whose kernel contains the frequency and phase dependence of the basic limit value, adopted in a way similar to that used for reference values in the International Commission on Non-Ionizing Radiation Protection (ICNIRP) statement. From the resulting time-dependent dimensionless impact function W_J(t), it can immediately be determined whether exposure to the analysed single event complies with the basic limit. For very slowly varying fields, the integral kernel is extended to include the softened ICNIRP basic limit for frequencies lower than 4 Hz.

  14. QCDNUM: Fast QCD evolution and convolution

    NASA Astrophysics Data System (ADS)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Inputs to these calculations are either the QCDNUM evolved densities or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme. Program summary: Program title: QCDNUM, version 17.00. Catalogue identifier: AEHV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Public Licence. No. of lines in distributed program, including test data, etc.: 45 736. No. of bytes in distributed program, including test data, etc.: 911 569. Distribution format: tar.gz. Programming language: Fortran-77. Computer: All. Operating system: All. RAM: Typically 3 Mbytes. Classification: 11.5. Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections. Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline coefficients by solving (coupled) triangular matrix equations with a forward substitution algorithm. Fast computation of convolution integrals as weighted sums of spline coefficients, with weights derived from user-given convolution kernels. Restrictions: Accuracy and speed are determined by the density of the evolution grid. Running time: Less than 10 ms on a 2 GHz Intel Core 2 Duo processor to evolve the gluon density and 12 quark densities at next-to-next-to-leading order over a large kinematic range.
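    The central operation of the convolution engine is the Mellin convolution of an evolved density with a user-given kernel, which QCDNUM approximates by a weighted sum over spline coefficients (schematic form):

      (f \otimes C)(x) = \int_{x}^{1} \frac{dz}{z}\, f(z)\, C\!\left(\frac{x}{z}\right) \;\approx\; \sum_{i} w_{i}\, a_{i},

    where the a_i are the spline coefficients of f and the weights w_i are derived from the convolution kernel C.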

  15. Learning Hierarchical Feature Extractors for Image Recognition

    DTIC Science & Technology

    2012-09-01

    space as a natural criterion for devising better pools. Finally, we propose ways to make coding faster and more powerful through fast convolutional...parameter is the set of pools over which the summary statistic is computed. We propose locality in feature configuration space as a natural criterion for...pooling (dotted lines) is consistently higher than average pooling (solid lines), but the gap is much less significant with intersection kernel (closed

  16. Dimensionality-varied convolutional neural network for spectral-spatial classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Liu, Wanjun; Liang, Xuejian; Qu, Haicheng

    2017-11-01

    Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community. Traditional and deep learning-based classification methods have been proposed constantly in recent years. In order to improve the classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) was proposed in this paper. DVCNN was a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN was a set of 3D patches selected from the HSI which contained joint spectral-spatial information. In the following feature extraction process, each patch was transformed into several different 1D vectors by 3D convolution kernels, which were able to extract features from spectral-spatial data. The rest of DVCNN was about the same as a general CNN and processed the 2D matrix constituted by all the 1D data. Thus, DVCNN could not only extract more accurate and richer features than a CNN, but also fuse spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands was enhanced in the process of spectral-spatial fusion by 3D convolution, and the calculation was simplified by dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results showed that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and 19.63% on Pavia University scene compared with a spectral-only CNN. The maximum accuracy improvement achieved by DVCNN was 13.72% compared with other state-of-the-art HSI classification methods, and the robustness of DVCNN to water-absorption band noise was demonstrated.

  17. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
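    As a concrete reminder of the encoding operation, the following is a minimal sketch of a textbook rate-1/2 convolutional encoder with constraint length 3 and the common (7, 5) octal generators; it is a standard example, not a code taken from the report itself.

      def conv_encode(bits, generators=(0b111, 0b101), constraint_len=3):
          """Rate-1/2 convolutional encoder (textbook (7,5) octal code, zero-terminated)."""
          state = 0
          out = []
          for b in list(bits) + [0] * (constraint_len - 1):   # flush with zeros
              state = ((state << 1) | b) & ((1 << constraint_len) - 1)
              for g in generators:
                  # Each output bit is the parity of the register positions tapped by g.
                  out.append(bin(state & g).count("1") % 2)
          return out

      print(conv_encode([1, 0, 1, 1]))   # 12 coded bits for 4 message bits plus 2 flush bits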

  18. Electron beam lithographic modeling assisted by artificial intelligence technology

    NASA Astrophysics Data System (ADS)

    Nakayamada, Noriaki; Nishimura, Rieko; Miura, Satoru; Nomura, Haruyuki; Kamikubo, Takashi

    2017-07-01

    We propose a new concept of tuning a point-spread function (a "kernel" function) in the modeling of electron beam lithography using the machine learning scheme. Normally in the work of artificial intelligence, the researchers focus on the output results from a neural network, such as success ratio in image recognition or improved production yield, etc. In this work, we put more focus on the weights connecting the nodes in a convolutional neural network, which are naturally the fractions of a point-spread function, and take out those weighted fractions after learning to be utilized as a tuned kernel. Proof-of-concept of the kernel tuning has been demonstrated using the examples of proximity effect correction with 2-layer network, and charging effect correction with 3-layer network. This type of new tuning method can be beneficial to give researchers more insights to come up with a better model, yet it might be too early to be deployed to production to give better critical dimension (CD) and positional accuracy almost instantly.

  19. Wilson loops and QCD/string scattering amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makeenko, Yuri; Olesen, Poul; Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen O

    2009-07-15

    We generalize modern ideas about the duality between Wilson loops and scattering amplitudes in N=4 super Yang-Mills theory to large N QCD by deriving a general relation between QCD meson scattering amplitudes and Wilson loops. We then investigate properties of the open-string disk amplitude integrated over reparametrizations. When the Wilson loop is approximated by the area behavior, we find that the QCD scattering amplitude is a convolution of the standard Koba-Nielsen integrand and a kernel. As usual poles originate from the first factor, whereas no (momentum-dependent) poles can arise from the kernel. We show that the kernel becomes a constant when the number of external particles becomes large. The usual Veneziano amplitude then emerges in the kinematical regime, where the Wilson loop can be reliably approximated by the area behavior. In this case, we obtain a direct duality between Wilson loops and scattering amplitudes when spatial variables and momenta are interchanged, in analogy with the N=4 super Yang-Mills theory case.

  20. Agile convolutional neural network for pulmonary nodule classification using CT images.

    PubMed

    Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei

    2018-04-01

    To distinguish benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to conquer the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed through combining the layer settings of LeNet and the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built up based on the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. Through adjusting the parameters of the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is obtained finally. After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve can reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initializations. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that our proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.

  1. The Swift-Hohenberg equation with a nonlocal nonlinearity

    NASA Astrophysics Data System (ADS)

    Morgan, David; Dawes, Jonathan H. P.

    2014-03-01

    It is well known that aspects of the formation of localised states in a one-dimensional Swift-Hohenberg equation can be described by Ginzburg-Landau-type envelope equations. This paper extends these multiple scales analyses to cases where an additional nonlinear integral term, in the form of a convolution, is present. The presence of a kernel function introduces a new lengthscale into the problem, and this results in additional complexity in both the derivation of envelope equations and in the bifurcation structure. When the kernel is short-range, weakly nonlinear analysis results in envelope equations of standard type but whose coefficients are modified in complicated ways by the nonlinear nonlocal term. Nevertheless, these computations can be formulated quite generally in terms of properties of the Fourier transform of the kernel function. When the lengthscale associated with the kernel is longer, our method leads naturally to the derivation of two different, novel, envelope equations that describe aspects of the dynamics in these new regimes. The first of these contains additional bifurcations, and unexpected loops in the bifurcation diagram. The second of these captures the stretched-out nature of the homoclinic snaking curves that arises due to the nonlocal term.
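    For orientation, one schematic way to write a Swift-Hohenberg equation with a nonlocal (convolution) nonlinearity is

      \partial_t u = r u - (1 + \partial_x^2)^2 u + s u^2 - u^3 + \gamma\, u\, (\phi * u), \qquad (\phi * u)(x) = \int_{-\infty}^{\infty} \phi(x - y)\, u(y)\, dy,

    where the kernel φ carries the additional lengthscale; the precise placement and form of the nonlocal term treated in the paper may differ, but it is the Fourier transform of φ that enters the coefficients of the derived envelope equations.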

  2. A Theoretical Perspective on Aided Target Recognition Research

    DTIC Science & Technology

    1991-07-01

    place (i.e., the viewer having obtained an image of the scene), the problem then becomes the after-the-fact or... the scene is s, and p(i) is the similarly unconditioned probability that the image is i. The use of Bayes’ rule places no restrictions on the ... Therefore, a necessary condition for the existence of deconvolvers is that the Fourier transforms of the convolution kernels do not possess any common

  3. Inferring planetary obliquity using rotational and orbital photometry

    NASA Astrophysics Data System (ADS)

    Schwartz, J. C.; Sekowski, C.; Haggard, H. M.; Pallé, E.; Cowan, N. B.

    2016-03-01

    The obliquity of a terrestrial planet is an important clue about its formation and critical to its climate. Previous studies using simulated photometry of Earth show that continuous observations over most of a planet's orbit can be inverted to infer obliquity. However, few studies of more general planets with arbitrary albedo markings have been made and, in particular, a simple theoretical understanding of why it is possible to extract obliquity from light curves is missing. Reflected light seen by a distant observer is the product of a planet's albedo map, its host star's illumination, and the visibility of different regions. It is useful to treat the product of illumination and visibility as the kernel of a convolution. Time-resolved photometry constrains both the albedo map and the kernel, the latter of which sweeps over the planet due to rotational and orbital motion. The kernel's movement distinguishes prograde from retrograde rotation for planets with non-zero obliquity on inclined orbits. We demonstrate that the kernel's longitudinal width and mean latitude are distinct functions of obliquity and axial orientation. Notably, we find that a planet's spin axis affects the kernel - and hence time-resolved photometry - even if this planet is east-west uniform or spinning rapidly, or if it is north-south uniform. We find that perfect knowledge of the kernel at 2-4 orbital phases is usually sufficient to uniquely determine a planet's spin axis. Surprisingly, we predict that east-west albedo contrast is more useful for constraining obliquity than north-south contrast.
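    In compact form, the convolution picture described above reads (schematic notation):

      F(t) \propto \oint K(\theta, \phi, t)\, A(\theta, \phi)\, d\Omega, \qquad K(\theta, \phi, t) = I(\theta, \phi, t)\, V(\theta, \phi, t),

    where A is the albedo map, I the host-star illumination and V the visibility of each surface patch; rotational and orbital motion sweep the kernel K across the planet, and its longitudinal width and mean latitude encode the obliquity and axial orientation.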

  4. Extension of a nonlinear systems theory to general-frequency unsteady transonic aerodynamic responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1993-01-01

    A methodology for modeling nonlinear unsteady aerodynamic responses, for subsequent use in aeroservoelastic analysis and design, using the Volterra-Wiener theory of nonlinear systems is presented. The methodology is extended to predict nonlinear unsteady aerodynamic responses of arbitrary frequency. The Volterra-Wiener theory uses multidimensional convolution integrals to predict the response of nonlinear systems to arbitrary inputs. The CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code is used to generate linear and nonlinear unit impulse responses that correspond to each of the integrals for a rectangular wing with a NACA 0012 section with pitch and plunge degrees of freedom. The computed kernels then are used to predict linear and nonlinear unsteady aerodynamic responses via convolution and compared to responses obtained using the CAP-TSD code directly. The results indicate that the approach can be used to predict linear unsteady aerodynamic responses exactly for any input amplitude or frequency at a significant cost savings. Convolution of the nonlinear terms results in nonlinear unsteady aerodynamic responses that compare reasonably well with those computed using the CAP-TSD code directly but at significant computational cost savings.
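    The multidimensional convolution integrals referred to here are the terms of the Volterra series; truncated at second order, the response to an input u(t) reads (standard form)

      y(t) = \int_0^t h_1(\tau)\, u(t - \tau)\, d\tau + \int_0^t \int_0^t h_2(\tau_1, \tau_2)\, u(t - \tau_1)\, u(t - \tau_2)\, d\tau_1\, d\tau_2 + \dots,

    where h_1 is the linear kernel identified from the linear unit impulse response and h_2 the first nonlinear kernel; the CAP-TSD impulse responses play the role of these kernels in the convolutions described above.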

  5. A new concept of pencil beam dose calculation for 40-200 keV photons using analytical dose kernels.

    PubMed

    Bartzsch, Stefan; Oelfke, Uwe

    2013-11-01

    The advent of widespread kV-cone beam computer tomography in image guided radiation therapy and special therapeutic application of keV photons, e.g., in microbeam radiation therapy (MRT) require accurate and fast dose calculations for photon beams with energies between 40 and 200 keV. Multiple photon scattering originating from Compton scattering and the strong dependence of the photoelectric cross section on the atomic number of the interacting tissue render these dose calculations by far more challenging than the ones established for corresponding MeV beams. That is why so far developed analytical models of kV photon dose calculations fail to provide the required accuracy and one has to rely on time consuming Monte Carlo simulation techniques. In this paper, the authors introduce a novel analytical approach for kV photon dose calculations with an accuracy that is almost comparable to the one of Monte Carlo simulations. First, analytical point dose and pencil beam kernels are derived for homogeneous media and compared to Monte Carlo simulations performed with the Geant4 toolkit. The dose contributions are systematically separated into contributions from the relevant orders of multiple photon scattering. Moreover, approximate scaling laws for the extension of the algorithm to inhomogeneous media are derived. The comparison of the analytically derived dose kernels in water showed an excellent agreement with the Monte Carlo method. Calculated values deviate less than 5% from Monte Carlo derived dose values, for doses above 1% of the maximum dose. The analytical structure of the kernels allows adaption to arbitrary materials and photon spectra in the given energy range of 40-200 keV. The presented analytical methods can be employed in a fast treatment planning system for MRT. In convolution based algorithms dose calculation times can be reduced to a few minutes.
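    In a convolution-based implementation, the dose from such kernels takes the familiar pencil-beam form (schematic; the scatter-order decomposition and the inhomogeneity scaling laws of the paper are not shown):

      D(x, y, z) = \iint \Psi(x', y')\, k_{pb}(x - x',\, y - y',\, z)\, dx'\, dy',

    where Ψ is the incident photon fluence and k_pb the analytical pencil-beam dose kernel for the 40-200 keV spectrum.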

  6. PIPE: a protein–protein interaction passage extraction module for BioCreative challenge

    PubMed Central

    Chu, Chun-Han; Su, Yu-Chen; Chen, Chien Chin; Hsu, Wen-Lian

    2016-01-01

    Identifying the interactions between proteins mentioned in the biomedical literature is one of the frequently discussed topics of text mining in the life science field. In this article, we propose PIPE, an interaction pattern generation module used in the Collaborative Biocurator Assistant Task at BioCreative V (http://www.biocreative.org/) to capture frequent protein-protein interaction (PPI) patterns within text. We also present an interaction pattern tree (IPT) kernel method that integrates the PPI patterns with a convolution tree kernel (CTK) to extract PPIs. Methods were evaluated on LLL, IEPA, HPRD50, AIMed and BioInfer corpora using cross-validation, cross-learning and cross-corpus evaluation. Empirical evaluations demonstrate that our method is effective and outperforms several well-known PPI extraction methods. Database URL: PMID:27524807

  7. A mathematical deconvolution formulation for superficial dose distribution measurement by Cerenkov light dosimetry.

    PubMed

    Brost, Eric Edward; Watanabe, Yoichi

    2018-06-01

    Cerenkov photons are created by high-energy radiation beams used for radiation therapy. In this study, we developed a Cerenkov light dosimetry technique to obtain a two-dimensional dose distribution in a superficial region of medium from the images of Cerenkov photons by using a deconvolution method. An integral equation was derived to represent the Cerenkov photon image acquired by a camera for a given incident high-energy photon beam by using convolution kernels. Subsequently, an equation relating the planar dose at a depth to a Cerenkov photon image using the well-known relationship between the incident beam fluence and the dose distribution in a medium was obtained. The final equation contained a convolution kernel called the Cerenkov dose scatter function (CDSF). The CDSF function was obtained by deconvolving the Cerenkov scatter function (CSF) with the dose scatter function (DSF). The GAMOS (Geant4-based Architecture for Medicine-Oriented Simulations) Monte Carlo particle simulation software was used to obtain the CSF and DSF. The dose distribution was calculated from the Cerenkov photon intensity data using an iterative deconvolution method with the CDSF. The theoretical formulation was experimentally evaluated by using an optical phantom irradiated by high-energy photon beams. The intensity of the deconvolved Cerenkov photon image showed linear dependence on the dose rate and the photon beam energy. The relative intensity showed a field size dependence similar to the beam output factor. Deconvolved Cerenkov images showed improvement in dose profiles compared with the raw image data. In particular, the deconvolution significantly improved the agreement in the high dose gradient region, such as in the penumbra. Deconvolution with a single iteration was found to provide the most accurate solution of the dose. Two-dimensional dose distributions of the deconvolved Cerenkov images agreed well with the reference distributions for both square fields and a multileaf collimator (MLC) defined, irregularly shaped field. The proposed technique improved the accuracy of the Cerenkov photon dosimetry in the penumbra region. The results of this study showed initial validation of the deconvolution method for beam profile measurements in a homogeneous media. The new formulation accounted for the physical processes of Cerenkov photon transport in the medium more accurately than previously published methods. © 2018 American Association of Physicists in Medicine.
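    As an illustration of the final reconstruction step, the sketch below recovers a planar dose from a Cerenkov image by a simple Van Cittert-type iteration with an assumed kernel standing in for the CDSF; the authors' exact iterative scheme and Monte Carlo-derived kernel are not reproduced here.

      import numpy as np
      from scipy.signal import fftconvolve

      def deconvolve_van_cittert(cerenkov_img, cdsf, n_iter=1, relax=1.0):
          """Generic iterative deconvolution: D_{k+1} = D_k + relax * (image - conv(cdsf, D_k))."""
          dose = cerenkov_img.astype(float).copy()          # initial guess: the raw image
          for _ in range(n_iter):
              reblurred = fftconvolve(dose, cdsf, mode="same")
              dose += relax * (cerenkov_img - reblurred)    # correct by the residual
          return dose

      # Hypothetical example: a normalized Gaussian stands in for the CDSF kernel.
      y, x = np.mgrid[-10:11, -10:11]
      cdsf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
      cdsf /= cdsf.sum()
      image = np.random.rand(128, 128)                      # placeholder Cerenkov image
      dose_estimate = deconvolve_van_cittert(image, cdsf, n_iter=1)

    A single iteration is used here, consistent with the abstract's observation that one iteration gave the most accurate solution.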

  8. Improved digital filters for evaluating Fourier and Hankel transform integrals

    USGS Publications Warehouse

    Anderson, Walter L.

    1975-01-01

    New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like-transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.
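    The digital-filter evaluation reduces an oscillatory transform integral to a weighted sum of kernel-function samples at log-spaced abscissas; the sketch below shows only this structure, with placeholder abscissas and weights rather than the tabulated filter coefficients distributed with the paper's Fortran IV subprograms.

      import numpy as np

      def hankel_j0_by_filter(f, r, abscissas, weights):
          """Approximate F(r) = int_0^inf f(lam) J0(lam r) lam dlam as (1/r) * sum_i W_i f(b_i / r)."""
          lam = abscissas / r            # the kernel arguments scale as 1/r
          return np.dot(weights, f(lam)) / r

      # Placeholder filter (illustrative only): real filters use carefully designed,
      # sign-alternating weights on log-spaced abscissas.
      abscissas = np.exp(np.linspace(-4.0, 4.0, 61))
      weights = np.full(61, 1.0 / 61)

      value = hankel_j0_by_filter(lambda lam: np.exp(-lam**2), r=2.0,
                                  abscissas=abscissas, weights=weights)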

  9. The spatial resolution of a rotating gamma camera tomographic facility.

    PubMed

    Webb, S; Flower, M A; Ott, R J; Leach, M O; Inamdar, R

    1983-12-01

    An important feature determining the spatial resolution in transverse sections reconstructed by convolution and back-projection is the frequency filter corresponding to the convolution kernel. Equations have been derived giving the theoretical spatial resolution, for a perfect detector and noise-free data, using four filter functions. Experiments have shown that physical constraints will always limit the resolution that can be achieved with a given system. The experiments indicate that the region of the frequency spectrum between K_N/2 and K_N, where K_N is the Nyquist frequency, does not contribute significantly to resolution. In order to investigate the physical effect of these filter functions, the spatial resolution of reconstructed images obtained with a GE 400T rotating gamma camera has been measured. The results obtained serve as an aid to choosing appropriate reconstruction filters for use with a rotating gamma camera system.

  10. 2D convolution kernels of ionization chambers used for photon-beam dosimetry in magnetic fields: the advantage of small over large chamber dimensions

    NASA Astrophysics Data System (ADS)

    Khee Looe, Hui; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn

    2018-04-01

    This study aims at developing an optimization strategy for photon-beam dosimetry in magnetic fields using ionization chambers. Similar to the familiar case in the absence of a magnetic field, detectors should be selected under the criterion that their measured 2D signal profiles M(x,y) approximate the absorbed dose to water profiles D(x,y) as closely as possible. Since the conversion of D(x,y) into M(x,y) is known as the convolution with the ‘lateral dose response function’ K(x-ξ, y-η) of the detector, the ideal detector would be characterized by a vanishing magnetic field dependence of this convolution kernel (Looe et al 2017b Phys. Med. Biol. 62 5131–48). The idea of the present study is to find out, by Monte Carlo simulation of two commercial ionization chambers of different size, whether the smaller chamber dimensions would be instrumental to approach this aim. As typical examples, the lateral dose response functions in the presence and absence of a magnetic field have been Monte-Carlo modeled for the new commercial ionization chambers PTW 31021 (‘Semiflex 3D’, internal radius 2.4 mm) and PTW 31022 (‘PinPoint 3D’, internal radius 1.45 mm), which are both available with calibration factors. The Monte-Carlo model of the ionization chambers has been adjusted to account for the presence of the non-collecting part of the air volume near the guard ring. The Monte-Carlo results allow a comparison between the widths of the magnetic field dependent photon fluence response function K_M(x-ξ, y-η) and of the lateral dose response function K(x-ξ, y-η) of the two chambers with the width of the dose deposition kernel K_D(x-ξ, y-η). The simulated dose and chamber signal profiles show that in small photon fields and in the presence of a 1.5 T field the distortion of the chamber signal profile compared with the true dose profile is weakest for the smaller chamber. The dose responses of both chambers at large field size are shown to be altered by not more than 2% in magnetic fields up to 1.5 T for all three investigated chamber orientations.
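    The relation underlying this comparison is the 2D convolution quoted above, in the notation of the abstract:

      M(x, y) = \iint K(x - \xi,\, y - \eta)\, D(\xi, \eta)\, d\xi\, d\eta,

    where D is the absorbed-dose-to-water profile and K the lateral dose response function of the chamber; the ideal detector would have a kernel whose shape is unaffected by the magnetic field.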

  11. Local dynamic range compensation for scanning electron microscope imaging system by sub-blocking multiple peak HE with convolution.

    PubMed

    Sim, K S; Teh, V; Tey, Y C; Kho, T K

    2016-11-01

    This paper introduces a new development technique to improve Scanning Electron Microscope (SEM) image quality, which we name sub-blocking multiple peak histogram equalization (SUB-B-MPHE) with a convolution operator. Using this newly proposed technique, the modified MPHE performs better than the original MPHE. In addition, the sub-blocking method contains a convolution operator which helps remove the blocking effect for SEM images after applying this newly developed technique. Hence, by using the convolution operator, the blocking effect is effectively removed by properly distributing suitable pixel values over the whole image. Overall, SUB-B-MPHE with convolution outperforms the other methods. SCANNING 38:492-501, 2016. © 2015 Wiley Periodicals, Inc.

  12. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters, so suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters of a kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we utilize the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
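    A minimal sketch of the general idea is given below, using SciPy's differential evolution to tune the width of an RBF kernel against a cross-validated score; the objective shown is a stand-in for, not an implementation of, the KFKT discrimination criterion used in the paper.

      import numpy as np
      from scipy.optimize import differential_evolution
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score
      from sklearn.datasets import make_classification

      X, y = make_classification(n_samples=200, n_features=10, random_state=0)

      def objective(params):
          # Maximize cross-validated accuracy of an RBF-kernel classifier
          # (a stand-in objective; the paper optimizes KFKT discrimination ability).
          gamma = 10.0 ** params[0]
          score = cross_val_score(SVC(kernel="rbf", gamma=gamma), X, y, cv=3).mean()
          return -score                  # differential evolution minimizes, so negate

      result = differential_evolution(objective, bounds=[(-4, 2)], seed=0, maxiter=20)
      print("best log10(gamma):", result.x[0], "cv accuracy:", -result.fun)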

  13. Face recognition via Gabor and convolutional neural network

    NASA Astrophysics Data System (ADS)

    Lu, Tongwei; Wu, Menglu; Lu, Tao

    2018-04-01

    In recent years, the powerful feature learning and classification ability of convolutional neural networks has attracted wide attention. Compared with deep learning, traditional machine learning algorithms offer an interpretability that deep learning does not have. Thus, in this paper, we propose a method that extracts features with a traditional algorithm and uses them as the input of a convolutional neural network. To reduce the complexity of the network, Gabor wavelet kernel functions are used to extract features at different positions, frequencies and orientations of the target image; they are sensitive to image edges and provide good orientation and scale selectivity. The responses extracted in eight orientations at one scale serve as the input of the proposed network. The network has the advantages of weight sharing and local connectivity, and the texture features of the input reduce the influence of facial expression, pose and illumination. In addition, we introduce a layer that combines the results of pooling and convolution to extract deeper features. The network is trained with the open-source Caffe framework, which facilitates feature extraction. Experimental results show that the proposed network structure effectively overcomes the barrier of illumination and is more robust, accurate and fast than the traditional algorithm.
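    A minimal sketch of the Gabor front end is shown below: a bank of kernels at eight orientations on a single scale is generated with OpenCV and applied to an image, and the stacked responses could then serve as the multi-channel input of the network; the kernel parameters are illustrative, not those used by the authors.

      import cv2
      import numpy as np

      def gabor_bank(ksize=31, sigma=4.0, lambd=10.0, gamma=0.5, n_orient=8):
          """Gabor kernels at n_orient orientations on one scale (illustrative parameters)."""
          return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0)  # psi = 0
                  for theta in np.arange(n_orient) * np.pi / n_orient]

      img = np.random.rand(128, 128).astype(np.float32)     # placeholder face image
      responses = np.stack([cv2.filter2D(img, -1, k) for k in gabor_bank()], axis=0)
      print(responses.shape)                                # (8, 128, 128)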

  14. Correlated Topic Vector for Scene Classification.

    PubMed

    Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang

    2017-07-01

    Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, correlated topic vector, to model such semantic correlations. Oriented from the correlated topic model, correlated topic vector intends to naturally utilize the correlations among topics, which are seldom considered in the conventional feature encoding, e.g., Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, correlated topic vector inherits the advantages of Fisher vector. The contributions to the topics of visual words have been further employed by incorporating the Fisher kernel framework to indicate the differences among scenes. Combined with the deep convolutional neural network (CNN) features and Gibbs sampling solution, correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that correlated topic vector improves significantly the deep CNN features, and outperforms existing Fisher kernel-based features.

  15. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions.

    PubMed

    Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F

    2016-10-07

    Range verification and dose monitoring in proton therapy is considered as highly desirable. Different methods have been developed worldwide, like particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By means of convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. In order to reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg-peak in a water target.

  16. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*

    PubMed Central

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2014-01-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image-based dosimetry in nuclear medicine. PMID:24200697

  17. Effect of mixing scanner types and reconstruction kernels on the characterization of lung parenchymal pathologies: emphysema, interstitial pulmonary fibrosis and normal non-smokers

    NASA Astrophysics Data System (ADS)

    Xu, Ye; van Beek, Edwin J.; McLennan, Geoffrey; Guo, Junfeng; Sonka, Milan; Hoffman, Eric

    2006-03-01

    In this study we utilize our texture characterization software (3-D AMFM) to characterize interstitial lung diseases (including emphysema) based on MDCT generated volumetric data using 3-dimensional texture features. We have sought to test whether the scanner and reconstruction filter (kernel) type affect the classification of lung diseases using the 3-D AMFM. We collected MDCT images in three subject groups: emphysema (n=9), interstitial pulmonary fibrosis (IPF) (n=10), and normal non-smokers (n=9). In each group, images were scanned either on a Siemens Sensation 16 or 64-slice scanner, (B50f or B30 recon. kernel) or a Philips 4-slice scanner (B recon. kernel). A total of 1516 volumes of interest (VOIs; 21x21 pixels in plane) were marked by two chest imaging experts using the Iowa Pulmonary Analysis Software Suite (PASS). We calculated 24 volumetric features. Bayesian methods were used for classification. Images from different scanners/kernels were combined in all possible combinations to test how robust the tissue classification was relative to the differences in image characteristics. We used 10-fold cross validation for testing the result. Sensitivity, specificity and accuracy were calculated. One-way Analysis of Variances (ANOVA) was used to compare the classification result between the various combinations of scanner and reconstruction kernel types. This study yielded a sensitivity of 94%, 91%, 97%, and 93% for emphysema, ground-glass, honeycombing, and normal non-smoker patterns respectively using a mixture of all three subject groups. The specificity for these characterizations was 97%, 99%, 99%, and 98%, respectively. The F test result of ANOVA shows there is no significant difference (p <0.05) between different combinations of data with respect to scanner and convolution kernel type. Since different MDCT and reconstruction kernel types did not show significant differences in regards to the classification result, this study suggests that the 3-D AMFM can be generally introduced.

  18. Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.

    PubMed

    Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve

    2008-04-01

    A method of rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for applied chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited in individual maize kernels. In this study such a novel technique is described and compared to less effective means of single kernel NIR analysis. Results of the application of a partial least squares (PLS) derived model for predictive determination of percent oil content per individual kernel are shown.

  19. Timescales of Land Surface Evapotranspiration Response

    NASA Technical Reports Server (NTRS)

    Scott, Russell; Entekhabi, Dara; Koster, Randal; Suarez, Max

    1997-01-01

    Soil and vegetation exert strong control over the evapotranspiration rate, which couples the land surface water and energy balances. A method is presented to quantify the timescale of this surface control using daily general circulation model (GCM) simulation values of evapotranspiration and precipitation. By equating the time history of evaporation efficiency (ratio of actual to potential evapotranspiration) to the convolution of precipitation and a unit kernel (temporal weighting function), response functions are generated that can be used to characterize the timescales of evapotranspiration response for the land surface model (LSM) component of GCMS. The technique is applied to the output of two multiyear simulations of a GCM, one using a Surface-Vegetation-Atmosphere-Transfer (SVAT) scheme and the other a Bucket LSM. The derived response functions show that the Bucket LSM's response is significantly slower than that of the SVAT across the globe. The analysis also shows how the timescales of interception reservoir evaporation, bare soil evaporation, and vegetation transpiration differ within the SVAT LSM.

  20. Neural source dynamics of brain responses to continuous stimuli: Speech processing from acoustics to comprehension.

    PubMed

    Brodbeck, Christian; Presacco, Alessandro; Simon, Jonathan Z

    2018-05-15

    Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.
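    As a simplified stand-in for the kernel (response function) estimation step, the sketch below fits a temporal response function by ridge regression on time-lagged copies of the stimulus; the study itself uses the boosting algorithm with cross-validation and estimates the kernels in source space rather than for a single channel.

      import numpy as np

      def estimate_trf(stimulus, response, n_lags=40, ridge=1.0):
          """Estimate a kernel h so that response[t] ~= sum_k h[k] * stimulus[t - k]."""
          X_lagged = np.column_stack([np.roll(stimulus, k) for k in range(n_lags)])
          X_lagged[:n_lags, :] = 0                          # discard wrapped-around samples
          # Ridge-regularized least squares: h = (X'X + a I)^-1 X'y
          return np.linalg.solve(X_lagged.T @ X_lagged + ridge * np.eye(n_lags),
                                 X_lagged.T @ response)

      # Toy data: the response is a known kernel convolved with the stimulus, plus noise.
      rng = np.random.default_rng(0)
      stim = rng.standard_normal(5000)
      true_h = np.exp(-np.arange(40) / 8.0) * np.sin(np.arange(40) / 3.0)
      resp = np.convolve(stim, true_h)[:5000] + 0.1 * rng.standard_normal(5000)
      print(np.max(np.abs(estimate_trf(stim, resp) - true_h)))   # small estimation error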

  1. Automatic lumbar vertebrae detection based on feature fusion deep learning for partial occluded C-arm X-ray images.

    PubMed

    Yang Li; Wei Liang; Yinlong Zhang; Haibo An; Jindong Tan

    2016-08-01

    Automatic and accurate lumbar vertebrae detection is an essential step of image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data are augmented by DRR, and automatic segmentation of the ROI reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebrae X-ray images, using a Sobel kernel and a Gabor kernel to obtain the contour and texture of the lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants and in multi-angle views.

  2. Radiobiological characterization of post-lumpectomy focal brachytherapy with lipid nanoparticle-carried radionuclides

    NASA Astrophysics Data System (ADS)

    Hrycushko, Brian A.; Gutierrez, Alonso N.; Goins, Beth; Yan, Weiqiang; Phillips, William T.; Otto, Pamela M.; Bao, Ande

    2011-02-01

    Post-operative radiotherapy has commonly been used for early stage breast cancer to treat residual disease. The primary objective of this work was to characterize, through dosimetric and radiobiological modeling, a novel focal brachytherapy technique which uses direct intracavitary infusion of β-emitting radionuclides (186Re/188Re) carried by lipid nanoparticles (liposomes). Absorbed dose calculations were performed for a spherical lumpectomy cavity with a uniformly injected activity distribution using a dose point kernel convolution technique. Radiobiological indices were used to relate predicted therapy outcome and normal tissue complication of this technique with equivalent external beam radiotherapy treatment regimens. Modeled stromal damage was used as a measure of the inhibition of the stimulatory effect on tumor growth driven by the wound healing response. A sample treatment plan delivering 50 Gy at a therapeutic range of 2.0 mm for 186Re-liposomes and 5.0 mm for 188Re-liposomes takes advantage of the dose delivery characteristics of the β-emissions, providing significant EUD (58.2 Gy and 72.5 Gy for 186Re and 188Re, respectively) with a minimal NTCP (0.046%) of the healthy ipsilateral breast. Modeling of kidney BED and ipsilateral breast NTCP showed that large injected activity concentrations of both radionuclides could be safely administered without significant complications.

  3. On simplified application of multidimensional Savitzky-Golay filters and differentiators

    NASA Astrophysics Data System (ADS)

    Shekhar, Chandra

    2016-02-01

    I propose a simplified approach for multidimensional Savitzky-Golay filtering, to enable its fast and easy implementation in scientific and engineering applications. The proposed method, which is derived from a generalized framework laid out by Thornley (D. J. Thornley, "Novel anisotropic multidimensional convolution filters for derivative estimation and reconstruction" in Proceedings of International Conference on Signal Processing and Communications, November 2007), first transforms any given multidimensional problem into a unique one, by transforming coordinates of the sampled data nodes to unity-spaced, uniform data nodes, and then performs filtering and calculates partial derivatives on the unity-spaced nodes. It is followed by transporting the calculated derivatives back onto the original data nodes by using the chain rule of differentiation. The burden to performing the most cumbersome task, which is to carry out the filtering and to obtain derivatives on the unity-spaced nodes, is almost eliminated by providing convolution coefficients for a number of convolution kernel sizes and polynomial orders, up to four spatial dimensions. With the availability of the convolution coefficients, the task of filtering at a data node reduces merely to multiplication of two known matrices. Simplified strategies to adequately address near-boundary data nodes and to calculate partial derivatives there are also proposed. Finally, the proposed methodologies are applied to a three-dimensional experimentally obtained data set, which shows that multidimensional Savitzky-Golay filters and differentiators perform well in both the internal and the near-boundary regions of the domain.
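    For the one-dimensional special case, SciPy already bundles the unity-spaced filtering and differentiation step, with the chain-rule rescaling back to the original coordinate handled through the delta argument. A minimal sketch follows (the multidimensional, transformed-coordinate machinery of the paper is not reproduced):

      import numpy as np
      from scipy.signal import savgol_filter

      h = 0.05                                   # uniform sample spacing
      x = np.arange(0, 10, h)
      y = np.sin(x) + 0.05 * np.random.default_rng(1).standard_normal(x.size)

      smoothed = savgol_filter(y, window_length=21, polyorder=3)
      # deriv=1 with delta=h applies the chain rule, returning dy/dx on the original nodes.
      dydx = savgol_filter(y, window_length=21, polyorder=3, deriv=1, delta=h)
      print(np.abs(dydx - np.cos(x)).max())      # small, except near the boundaries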

  4. Optimized Beam Sculpting with Generalized Fringe-rate Filters

    NASA Astrophysics Data System (ADS)

    Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Cheng, Carina

    2016-03-01

    We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer’s fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduced by traditional approaches such as imaging. Our techniques have recently been demonstrated in Ali et al., where new upper limits were placed on the 21 cm power spectrum from reionization, showcasing the ability of fringe-rate filtering to successfully boost sensitivity and reduce the impact of systematics in deep observations.

  5. Robust approach to ocular fundus image analysis

    NASA Astrophysics Data System (ADS)

    Tascini, Guido; Passerini, Giorgio; Puliti, Paolo; Zingaretti, Primo

    1993-07-01

    The analysis of morphological and structural modifications of retinal blood vessels plays an important role both in establishing the presence of systemic diseases such as hypertension and diabetes and in studying their course. The paper describes a robust set of techniques developed to quantitatively evaluate morphometric aspects of the ocular fundus vascular and microvascular network. The following are defined: (1) the concept of 'Local Direction of a vessel' (LD); (2) a special form of edge detection, named Signed Edge Detection (SED), which uses LD to choose the convolution kernel in the edge detection process and is able to distinguish between the left and the right vessel edge; (3) an iterative tracking (IT) method. The developed techniques make intensive use of both LD and SED in: (a) the automatic detection of number, position and size of blood vessels departing from the optical papilla; (b) the tracking of the body and edges of the vessels; (c) the recognition of vessel branches and crossings; (d) the extraction of a set of features such as blood vessel length and average diameter, artery and arteriole tortuosity, crossing position and angle between two vessels. The algorithms, implemented in C language, have an execution time depending on the complexity of the currently processed vascular network.

  6. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, J; Followill, D; Howell, R

    2015-06-15

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and metal artifact reduction methods: Philips O-MAR, GE’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States of Health and Human Services, and in part by Mobius Medical Systems.

  7. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  8. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  9. Keypoint Density-Based Region Proposal for Fine-Grained Object Detection and Classification Using Regions with Convolutional Neural Network Features

    DTIC Science & Technology

    2015-12-15

    Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network ... Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks, their...detection accuracy and speed on the fine-grained Caltech UCSD bird dataset (Wah et al., 2011). Recently, Convolutional Neural Networks (CNNs), a deep

  10. Numerical integration of the extended variable generalized Langevin equation with a positive Prony representable memory kernel.

    PubMed

    Baczewski, Andrew D; Bond, Stephen D

    2013-07-28

    Generalized Langevin dynamics (GLD) arise in the modeling of a number of systems, ranging from structured fluids that exhibit a viscoelastic mechanical response, to biological systems, and other media that exhibit anomalous diffusive phenomena. Molecular dynamics (MD) simulations that include GLD in conjunction with external and/or pairwise forces require the development of numerical integrators that are efficient, stable, and have known convergence properties. In this article, we derive a family of extended variable integrators for the Generalized Langevin equation with a positive Prony series memory kernel. Using stability and error analysis, we identify a superlative choice of parameters and implement the corresponding numerical algorithm in the LAMMPS MD software package. Salient features of the algorithm include exact conservation of the first and second moments of the equilibrium velocity distribution in some important cases, stable behavior in the limit of conventional Langevin dynamics, and the use of a convolution-free formalism that obviates the need for explicit storage of the time history of particle velocities. Capability is demonstrated with respect to accuracy in numerous canonical examples, stability in certain limits, and an exemplary application in which the effect of a harmonic confining potential is mapped onto a memory kernel.
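    The equation being integrated has the standard generalized Langevin form with a Prony-series memory kernel (written schematically for one particle in one dimension):

      m\,\dot{v}(t) = F(x(t)) - \int_0^t K(t - s)\, v(s)\, ds + R(t), \qquad K(t) = \sum_{k=1}^{N_p} c_k\, e^{-t/\tau_k},

    where R(t) is colored noise related to K through the fluctuation-dissipation theorem; it is the exponential (Prony) form of K that allows the convolution to be replaced by auxiliary extended variables, yielding the convolution-free formalism that avoids storing the velocity history.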

  11. Parameter diagnostics of phases and phase transition learning by neural networks

    NASA Astrophysics Data System (ADS)

    Suchsland, Philippe; Wessel, Stefan

    2018-05-01

    We present an analysis of neural network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on neural networks with a single hidden layer. Such shallow neural networks were previously found to be efficient in classifying phases and locating phase transitions of various basic model systems. In order to rationalize the emergence of the classification process and for identifying any underlying physical quantities, it is feasible to examine the weight matrices and the convolutional filter kernels that result from the learning process of such shallow networks. Furthermore, we demonstrate how the learning-by-confusing scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks. In particular, we study the classification process of both fully-connected and convolutional neural networks for the two-dimensional Ising model with extended domain wall configurations included in the low-temperature regime. Moreover, we consider the two-dimensional XY model and contrast the performance of the learning-by-confusing scheme and convolutional neural networks trained on bare spin configurations to the case of preprocessed samples with respect to vortex configurations. We discuss these findings in relation to similar recent investigations and possible further applications.

  12. Surface EMG decomposition based on K-means clustering and convolution kernel compensation.

    PubMed

    Ning, Yong; Zhu, Xiangjun; Zhu, Shanan; Zhang, Yingchun

    2015-03-01

    A new approach has been developed by combining the K-means clustering (KMC) method and a modified convolution kernel compensation (CKC) method for multichannel surface electromyogram (EMG) decomposition. The KMC method was first utilized to cluster vectors of observations at different time instants and then estimate the initial innervation pulse train (IPT). The CKC method, modified with a novel multistep iterative process, was conducted to update the estimated IPT. The performance of the proposed K-means clustering-Modified CKC (KmCKC) approach was evaluated by reconstructing IPTs from both simulated and experimental surface EMG signals. The KmCKC approach successfully reconstructed all 10 IPTs from the simulated surface EMG signals with true positive rates (TPR) of over 90% with a low signal-to-noise ratio (SNR) of -10 dB. More than 10 motor units were also successfully extracted from the 64-channel experimental surface EMG signals of the first dorsal interosseous (FDI) muscles when a contraction force was held at 8 N by using the KmCKC approach. A "two-source" test was further conducted with 64-channel surface EMG signals. The high percentage of common MUs and common pulses (over 92% at all force levels) between the IPTs reconstructed from the two independent groups of surface EMG signals demonstrates the reliability and capability of the proposed KmCKC approach in multichannel surface EMG decomposition. Results from both simulated and experimental data are consistent and confirm that the proposed KmCKC approach can successfully reconstruct IPTs with high accuracy at different levels of contraction.

  13. DRREP: deep ridge regressed epitope predictor.

    PubMed

    Sher, Gene; Zhi, Degui; Zhang, Shaojie

    2017-10-03

    The ability to predict epitopes plays an enormous role in vaccine development in terms of our ability to zero in on where to do a more thorough in-vivo analysis of the protein in question. Though for the past decade there have been numerous advancements and improvements in epitope prediction, on average the best benchmark prediction accuracies are still only around 60%. New machine learning algorithms have arisen within the domain of deep learning, text mining, and convolutional networks. This paper presents a novel analytically trained deep neural network that uses a string kernel and is tailored for continuous epitope prediction, called the Deep Ridge Regressed Epitope Predictor (DRREP). DRREP was tested on long protein sequences from the following datasets: SARS, Pellequer, HIV, AntiJen, and SEQ194. DRREP was compared to numerous state of the art epitope predictors, including the most recently published predictors called LBtope and DMNLBE. Using area under ROC curve (AUC), DRREP achieved a performance improvement over the best performing predictors on SARS (13.7%), HIV (8.9%), Pellequer (1.5%), and SEQ194 (3.1%), with its performance being matched only on the AntiJen dataset, by the LBtope predictor, where both DRREP and LBtope achieved an AUC of 0.702. DRREP is an analytically trained deep neural network, thus capable of learning in a single step through regression. By combining the features of deep learning, string kernels, and convolutional networks, the system is able to perform residue-by-residue prediction of continuous epitopes with higher accuracy than the current state of the art predictors.
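
    The distinguishing feature of DRREP, an analytically (single-step) trained readout, amounts to solving a regularized linear system in closed form rather than iterating gradient descent. The sketch below illustrates that step on a random stand-in feature matrix; the features, labels, and ridge penalty are assumptions for illustration and do not reproduce DRREP's string-kernel and convolutional feature extraction.

      import numpy as np

      # Minimal sketch of an analytically trained (single-step) ridge-regression
      # readout. The feature matrix X stands in for whatever string-kernel /
      # convolutional features are extracted from protein sequence windows.

      rng = np.random.default_rng(2)
      n_windows, n_features = 500, 64
      X = rng.standard_normal((n_windows, n_features))       # sequence-window features
      y = (X[:, :4].sum(axis=1) > 0).astype(float)           # toy epitope / non-epitope labels

      lam = 1.0                                              # ridge penalty (assumed)
      # closed-form solution: w = (X^T X + lam*I)^(-1) X^T y -- one linear solve, no iterations
      w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

      scores = X @ w                                         # residue/window scores
      pred = (scores > 0.5).astype(float)
      print("training accuracy of the analytic readout:", (pred == y).mean())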

  14. CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification.

    PubMed

    Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan

    2018-02-01

    In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called the adaptive margin slack minimization to iteratively improve the classification accuracy by an adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large scale data. We propose two such frameworks: the memory efficient sequential processing for sequential data processing and the parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.

  15. Brain tumor image segmentation using kernel dictionary learning.

    PubMed

    Jeon Lee; Seung-Jun Kim; Rong Chen; Herskovits, Edward H

    2015-08-01

    Automated brain tumor image segmentation with high accuracy and reproducibility holds great potential to enhance the current clinical practice. Dictionary learning (DL) techniques have been applied successfully to various image processing tasks recently. In this work, kernel extensions of the DL approach are adopted. Both reconstructive and discriminative versions of the kernel DL technique are considered, which can efficiently incorporate multi-modal nonlinear feature mappings based on the kernel trick. Our novel discriminative kernel DL formulation allows joint learning of a task-driven kernel-based dictionary and a linear classifier using a K-SVD-type algorithm. The proposed approaches were tested using real brain magnetic resonance (MR) images of patients with high-grade glioma. The obtained preliminary performances are competitive with the state of the art. The discriminative kernel DL approach is seen to reduce computational burden without much sacrifice in performance.

  16. A boosted optimal linear learner for retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Poletti, E.; Grisan, E.

    2014-03-01

    Ocular fundus images provide important information about retinal degeneration, which may be related to acute pathologies or to early signs of systemic diseases. An automatic and quantitative assessment of vessel morphological features, such as diameters and tortuosity, can improve clinical diagnosis and evaluation of retinopathy. At variance with available methods, we propose a data-driven approach, in which the system learns a set of optimal discriminative convolution kernels (linear learner). The set is progressively built based on an ADA-boost sample weighting scheme, providing seamless integration between linear learner estimation and classification. In order to capture the vessel appearance changes at different scales, the kernels are estimated on a pyramidal decomposition of the training samples. The set is employed as a rotating bank of matched filters, whose response is used by the boosted linear classifier to provide a classification of each image pixel into the two classes of interest (vessel/background). We tested the approach on fundus images available from the DRIVE dataset. We show that the segmentation performance yields an accuracy of 0.94.

  17. Nonlocal Poisson-Fermi model for ionic solvent.

    PubMed

    Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

    2016-07-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.
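
    The structural statement that the nonlocal solution is a convolution of the corresponding local solution with a Yukawa-like kernel can be illustrated in one dimension. In the sketch below the local potential, the grid, and the correlation length are illustrative assumptions; only the normalized screening kernel and the discrete convolution reflect the construction described above.

      import numpy as np

      # 1D illustration: a "nonlocal" potential obtained by convolving a local
      # (Poisson-type) potential with a normalized Yukawa-like screening kernel.
      # Grid, potential shape, and correlation length are illustrative assumptions.

      x = np.linspace(-10.0, 10.0, 2001)
      dx = x[1] - x[0]
      lam = 1.0                                   # correlation length (assumed)

      phi_local = np.exp(-x**2)                   # stand-in for a local Poisson solution

      kernel = np.exp(-np.abs(x) / lam) / (2.0 * lam)   # 1D Yukawa-like kernel
      kernel /= kernel.sum() * dx                       # unit area on the discrete grid

      phi_nonlocal = np.convolve(phi_local, kernel, mode="same") * dx
      print("max |local - nonlocal| =", np.abs(phi_local - phi_nonlocal).max())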

  18. A tensor Banach algebra approach to abstract kinetic equations

    NASA Astrophysics Data System (ADS)

    Greenberg, W.; van der Mee, C. V. M.

    The study deals with a concrete algebraic construction providing the existence theory for abstract kinetic equation boundary-value problems, when the collision operator A is an accretive finite-rank perturbation of the identity operator in a Hilbert space H. An algebraic generalization of the Bochner-Phillips theorem is utilized to study solvability of the abstract boundary-value problem without any regularity condition. A Banach algebra in which the convolution kernel acts is obtained explicitly, and this result is used to prove a perturbation theorem for bisemigroups, which then plays a vital role in solving the initial equations.

  19. Fast dose kernel interpolation using Fourier transform with application to permanent prostate brachytherapy dosimetry.

    PubMed

    Liu, Derek; Sloboda, Ron S

    2014-05-01

    Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
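
    The two-step shift, a whole-grid shift applied as a phase factor in the Fourier domain followed by a sub-grid shift applied with a piecewise third-order Lagrange filter, can be sketched in one dimension as below. The toy kernel, the 0.5 mm grid, and the seed offset are assumptions for illustration and are not the TG-43 Iodine-125 kernel or the clinical grid of the paper.

      import numpy as np

      # 1D sketch of the two-step seed-placement shift: (1) move the sampled kernel
      # to the nearest grid point via a Fourier-domain phase (a convolution with a
      # unit impulse), then (2) apply the remaining sub-grid shift with a 4-tap
      # piecewise third-order Lagrange filter. Kernel and grid are illustrative.

      def integer_shift_fft(x, s):
          """Circularly delay x by the integer s samples via a Fourier-domain phase."""
          phase = np.exp(-2j * np.pi * np.fft.fftfreq(x.size) * s)
          return np.fft.ifft(np.fft.fft(x) * phase).real

      def lagrange_fractional_shift(x, d):
          """Delay x by a fractional amount 0 <= d < 1 with a cubic Lagrange filter."""
          h_p1 = -d * (d - 1) * (d - 2) / 6.0          # tap applied to x[n+1]
          h_0 = (d + 1) * (d - 1) * (d - 2) / 2.0      # tap applied to x[n]
          h_m1 = -(d + 1) * d * (d - 2) / 2.0          # tap applied to x[n-1]
          h_m2 = (d + 1) * d * (d - 1) / 6.0           # tap applied to x[n-2]
          xp = np.pad(x, 2, mode="edge")
          n = np.arange(x.size) + 2
          return h_p1 * xp[n + 1] + h_0 * xp[n] + h_m1 * xp[n - 1] + h_m2 * xp[n - 2]

      grid = np.arange(256) * 0.5                      # 0.5 mm grid (assumed)
      kernel = 1.0 / (1.0 + (grid - 40.0) ** 2)        # toy radially peaked kernel

      seed_offset_mm = 3.7                             # seed not on a grid point
      shift_samples = seed_offset_mm / 0.5
      s_int = int(np.floor(shift_samples))
      s_frac = shift_samples - s_int

      shifted = lagrange_fractional_shift(integer_shift_fft(kernel, s_int), s_frac)
      print("kernel peak moved from", grid[kernel.argmax()], "mm to",
            grid[shifted.argmax()], "mm")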

  20. Three-dimensional Talairach-Tournoux brain atlas

    NASA Astrophysics Data System (ADS)

    Fang, Anthony; Nowinski, Wieslaw L.; Nguyen, Bonnie T.; Bryan, R. Nick

    1995-04-01

    The Talairach-Tournoux Stereotaxic Atlas of the human brain is a frequently consulted resource in stereotaxic neurosurgery and computer-based neuroradiology. Its primary application lies in the 2-D analysis and interpretation of neurological images. However, for the purpose of the analysis and visualization of shapes and forms, accurate mensuration of volumes, or 3-D models matching, a 3-D representation of the atlas is essential. This paper proposes and describes, along with its difficulties, a 3-D geometric extension of the atlas. We introduce a `zero-potential' surface smoothing technique, along with a space-dependent convolution kernel and space-dependent normalization. The mesh-based atlas structures are hierarchically organized, and anatomically conform to the original atlas. Structures and their constituents can be independently selected and manipulated in real-time within an integrated system. The extended atlas may be navigated by itself, or interactively registered with patient data with the proportional grid system (piecewise linear) transformation. Visualization of the geometric atlas along with patient data gives a remarkable visual `feel' of the biological structures, not usually perceivable to the untrained eyes in conventional 2-D atlas to image analysis.

  1. Stochastic subset selection for learning with kernel machines.

    PubMed

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales super-linearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique to select a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  2. Automated detection of pulmonary nodules in PET/CT images: Ensemble false-positive reduction using a convolutional neural network technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teramoto, Atsushi, E-mail: teramoto@fujita-hu.ac.jp; Fujita, Hiroshi; Yamamuro, Osamu

    Purpose: Automated detection of solitary pulmonary nodules using positron emission tomography (PET) and computed tomography (CT) images shows good sensitivity; however, it is difficult to detect nodules in contact with normal organs, and additional efforts are needed so that the number of false positives (FPs) can be further reduced. In this paper, the authors propose an improved FP-reduction method for the detection of pulmonary nodules in PET/CT images by means of convolutional neural networks (CNNs). Methods: The overall scheme detects pulmonary nodules using both CT and PET images. In the CT images, a massive region is first detected using an active contour filter, which is a type of contrast enhancement filter that has a deformable kernel shape. Subsequently, high-uptake regions detected by the PET images are merged with the regions detected by the CT images. FP candidates are eliminated using an ensemble method; it consists of two feature extractions, one by shape/metabolic feature analysis and the other by a CNN, followed by a two-step classifier, one step being rule based and the other being based on support vector machines. Results: The authors evaluated the detection performance using 104 PET/CT images collected by a cancer-screening program. The sensitivity in detecting candidates at an initial stage was 97.2%, with 72.8 FPs/case. After performing the proposed FP-reduction method, the sensitivity of detection was 90.1%, with 4.9 FPs/case; the proposed method eliminated approximately half the FPs existing in the previous study. Conclusions: An improved FP-reduction scheme using the CNN technique has been developed for the detection of pulmonary nodules in PET/CT images. The authors’ ensemble FP-reduction method eliminated 93% of the FPs; their proposed method using the CNN technique eliminates approximately half the FPs existing in the previous study. These results indicate that their method may be useful in the computer-aided detection of pulmonary nodules using PET/CT images.

  3. Triso coating development progress for uranium nitride kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jolly, Brian C.; Lindemer, Terrence; Terrani, Kurt A.

    2015-08-01

    In support of fully ceramic matrix (FCM) fuel development [1-2], coating development work is ongoing at the Oak Ridge National Laboratory (ORNL) to produce tri-structural isotropic (TRISO) coated fuel particles with UN kernels [3]. The nitride kernels are used to increase fissile density in these SiC-matrix fuel pellets with details described elsewhere [4]. The advanced gas reactor (AGR) program at ORNL used fluidized bed chemical vapor deposition (FBCVD) techniques for TRISO coating of UCO (two phase mixture of UO2 and UCx) kernels [5]. Similar techniques were employed for coating of the UN kernels, however significant changes in processing conditions were required to maintain acceptable coating properties due to physical property and dimensional differences between the UCO and UN kernels (Table 1).

  4. Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms

    DTIC Science & Technology

    2007-09-01

    punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit ... likely to be isolated and be correctable by the convolutional decoder. ... binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data

  5. Spectral interpolation - Zero fill or convolution. [image processing

    NASA Technical Reports Server (NTRS)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation in time FFTs) and that it is easy to implement.
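
    The contrast between the two routes can be sketched on a toy interferogram: zero fill refines the spectral spacing by padding the input before the FFT, while the convolution route upsamples the coarse spectrum by zero insertion and fills the gaps with a short kernel built from repeated box convolutions. The factor of four, the test signal, and the triangular kernel are illustrative assumptions, not the exact 1977 algorithm.

      import numpy as np

      # Toy comparison of the two routes on a synthetic signal. Zero fill pads the
      # input before the FFT; the convolution route zero-inserts the coarse
      # spectrum and fills the gaps with a kernel built from repeated box
      # convolutions (here a triangle, i.e. linear interpolation between bins).

      n = 128
      t = np.arange(n)
      signal = np.cos(2 * np.pi * 0.123 * t) + 0.5 * np.cos(2 * np.pi * 0.31 * t)

      # Route 1: zero fill by a factor of 4 before the FFT
      spec_zero_fill = np.abs(np.fft.rfft(signal, n=4 * n))

      # Route 2: zero-insert the coarse spectrum, then interpolate by convolution
      coarse = np.abs(np.fft.rfft(signal))
      up = np.zeros(4 * (coarse.size - 1) + 1)
      up[::4] = coarse
      tri = np.convolve(np.ones(4), np.ones(4)) / 4.0   # box*box -> triangle kernel
      interp = np.convolve(up, tri, mode="same")        # exact at the coarse bins

      print("bins:", spec_zero_fill.size, "and", interp.size,
            "| peak bin:", int(np.argmax(spec_zero_fill)), "vs", int(np.argmax(interp)))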

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jolly, Brian C.; Lindemer, Terrence; Terrani, Kurt A.

    In support of fully ceramic matrix (FCM) fuel development, coating development work has begun at the Oak Ridge National Laboratory (ORNL) to produce tri-isotropic (TRISO) coated fuel particles with UN kernels. The nitride kernels are used to increase heavy metal density in these SiC-matrix fuel pellets with details described elsewhere. The advanced gas reactor (AGR) program at ORNL used fluidized bed chemical vapor deposition (FBCVD) techniques for TRISO coating of UCO (two phase mixture of UO2 and UCx) kernels. Similar techniques were employed for coating of the UN kernels, however significant changes in processing conditions were required to maintain acceptable coating properties due to physical property and dimensional differences between the UCO and UN kernels.

  7. Forecasting the timing of activation of rainfall-induced landslides. An application of GA-SAKe to the Acri case study (Calabria, Southern Italy)

    NASA Astrophysics Data System (ADS)

    Gariano, Stefano Luigi; Terranova, Oreste; Greco, Roberto; Iaquinta, Pasquale; Iovine, Giulio

    2013-04-01

    In Calabria (Southern Italy), rainfall-induced landslides often cause significant economic losses and casualties. The timing of activation of rainfall-induced landslides can be predicted by means of either empirical ("hydrological") or physically-based ("complete") approaches. In this study, by adopting the Genetic-Algorithm based release of the hydrological model SAKe (Self Adaptive Kernel), the relationships between the rainfall series and the dates of historical activations of the Acri slope movement, a large rock slide located in the Sila Massif (Northern Calabria), have been investigated. SAKe is a self-adaptive hydrological model, based on a black-box approach and on the assumption of a linear and steady slope-stability response to rainfall. The model can be employed to predict the timing of occurrence of rainfall-induced landslides. With the model, either the mobilizations of a single phenomenon or those of a homogeneous set of landslides in a given study area can be analysed. By properly tuning the model parameters against past occurrences, the mobility function and the threshold value can be identified. The ranges of the parameters depend on the characteristics of the slope and of the considered landslide, besides hydrological characteristics of the triggering events. SAKe requires as input: i) the rainfall series, and ii) the set of known dates of landslide activation. The output of the model is represented by the mobilization function, Z(t): it is defined as the convolution of the rainfall with a filter function (i.e. the kernel). Triggering conditions occur when the value of Z(t) exceeds a given threshold, Zcr. In particular, the release of the model employed here (GA-SAKe) uses an automated tool, based on elitist Genetic Algorithms. As a result, a family of optimal, discretized kernels has been obtained from initial standard analytical functions. Such kernels maximize the fitness function of the model: they have been selected by means of a calibration technique based on the selection, crossover, and mutation operators. In this way, the values of the model parameters could be iteratively changed, aiming to improve the fitness of the tested solutions. An example of model optimization is discussed, with reference to the Acri case study, to exemplify the potential application of SAKe for early-warning and civil-protection purposes.
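
    The central quantity of the model, the mobilization function obtained by convolving the rainfall series with a filter kernel and comparing it to a critical threshold, can be sketched as follows. The synthetic rainfall, the exponential kernel shape, the 60-day support, and the percentile threshold are illustrative assumptions; in GA-SAKe the discretized kernel and Zcr are calibrated by genetic algorithms against known activation dates.

      import numpy as np

      # Sketch of the mobilization function Z(t): a discrete convolution of a daily
      # rainfall series with a decaying filter kernel, compared against a critical
      # threshold Zcr. Rainfall, kernel shape, and threshold are illustrative only;
      # GA-SAKe calibrates the kernel and Zcr with genetic algorithms.

      rng = np.random.default_rng(4)
      rain = rng.gamma(shape=0.3, scale=12.0, size=365)     # synthetic daily rainfall (mm)

      t_k = np.arange(60)                                   # 60-day kernel support (assumed)
      kernel = np.exp(-t_k / 15.0)                          # decaying filter function
      kernel /= kernel.sum()                                # normalize to unit area

      Z = np.convolve(rain, kernel)[: rain.size]            # mobilization function Z(t)
      Zcr = np.quantile(Z, 0.98)                            # illustrative critical threshold

      trigger_days = np.flatnonzero(Z > Zcr)
      print("predicted activation days:", trigger_days)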

  8. Incorporating High-Frequency Physiologic Data Using Computational Dictionary Learning Improves Prediction of Delayed Cerebral Ischemia Compared to Existing Methods.

    PubMed

    Megjhani, Murad; Terilli, Kalijah; Frey, Hans-Peter; Velazquez, Angela G; Doyle, Kevin William; Connolly, Edward Sander; Roh, David Jinou; Agarwal, Sachin; Claassen, Jan; Elhadad, Noemie; Park, Soojin

    2018-01-01

    Accurate prediction of delayed cerebral ischemia (DCI) after subarachnoid hemorrhage (SAH) can be critical for planning interventions to prevent poor neurological outcome. This paper presents a model using convolution dictionary learning to extract features from physiological data available from bedside monitors. We develop and validate a prediction model for DCI after SAH, demonstrating improved precision over standard methods alone. 488 consecutive SAH admissions from 2006 to 2014 to a tertiary care hospital were included. Models were trained on 80%, while 20% were set aside for validation testing. Modified Fisher Scale was considered the standard grading scale in clinical use; baseline features also analyzed included age, sex, Hunt-Hess, and Glasgow Coma Scales. An unsupervised approach using convolution dictionary learning was used to extract features from physiological time series (systolic blood pressure and diastolic blood pressure, heart rate, respiratory rate, and oxygen saturation). Classifiers (partial least squares and linear and kernel support vector machines) were trained on feature subsets of the derivation dataset. Models were applied to the validation dataset. The performances of the best classifiers on the validation dataset are reported by feature subset. Standard grading scale (mFS): AUC 0.54. Combined demographics and grading scales (baseline features): AUC 0.63. Kernel derived physiologic features: AUC 0.66. Combined baseline and physiologic features with redundant feature reduction: AUC 0.71 on derivation dataset and 0.78 on validation dataset. Current DCI prediction tools rely on admission imaging and are advantageously simple to employ. However, using an agnostic and computationally inexpensive learning approach for high-frequency physiologic time series data, we demonstrated that we could incorporate individual physiologic data to achieve higher classification accuracy.

  9. Radiative transfer theory for a random distribution of low velocity spheres as resonant isotropic scatterers

    NASA Astrophysics Data System (ADS)

    Sato, Haruo; Hayakawa, Toshihiko

    2014-10-01

    Short-period seismograms of earthquakes are complex, especially beneath volcanoes, where the S wave mean free path is short and low velocity bodies composed of melt or fluid are expected in addition to random velocity inhomogeneities as scattering sources. Resonant scattering inherent in a low velocity body shows trap and release of waves with a delay time. Focusing on the delay time phenomenon, we have to seriously consider multiple resonant scattering processes. Since wave phases are complex in such a scattering medium, the radiative transfer theory has been often used to synthesize the variation of mean square (MS) amplitude of waves; however, resonant scattering has not been well adopted in the conventional radiative transfer theory. Here, as a simple mathematical model, we study the sequence of isotropic resonant scattering of a scalar wavelet by low velocity spheres at low frequencies, where the inside velocity is supposed to be low enough. We first derive the total scattering cross-section per time for each order of scattering as the convolution kernel representing the decaying scattering response. Then, for a random and uniform distribution of such identical resonant isotropic scatterers, we build the propagator of the MS amplitude by using causality, a geometrical spreading factor and the scattering loss. Using those propagators and convolution kernels, we formulate the radiative transfer equation for a spherically impulsive radiation from a point source. The synthesized MS amplitude time trace shows a dip just after the direct arrival and a delayed swelling, and then a decaying tail at large lapse times. The delayed swelling is a prominent effect of resonant scattering. The space distribution of synthesized MS amplitude shows a swelling near the source region in space, and it becomes a bell shape like a diffusion solution at large lapse times.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kieselmann, J; Bartzsch, S; Oelfke, U

    Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While peak doses of all dose calculation methods agreed within less than 4% deviations, the proposed approach surpassed a simple convolution algorithm in accuracy by a factor of up to 3 in the scatter dose. In a treatment geometry similar to possible future clinical situations differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve the dose calculation based on the CA method with respect to accuracy, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GByte of RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant calculation time reductions.

  11. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1994-01-01

    Brief summaries of research in the following areas are presented: (1) construction of optimum geometrically uniform trellis codes; (2) a statistical approach to constructing convolutional code generators; and (3) calculating the exact performance of a convolutional code.

  12. Alcoholism Detection by Data Augmentation and Convolutional Neural Network with Stochastic Pooling.

    PubMed

    Wang, Shui-Hua; Lv, Yi-Ding; Sui, Yuxiu; Liu, Shuai; Wang, Su-Jing; Zhang, Yu-Dong

    2017-11-17

    Alcohol use disorder (AUD) is an important brain disease. It alters the brain structure. Recently, scholars tend to use computer vision-based techniques to detect AUD. We collected 235 subjects, 114 alcoholic and 121 non-alcoholic. Among the 235 images, 100 images were used as the training set, and a data augmentation method was used. The remaining 135 images were used as the test set. Further, we chose the latest powerful technique, the convolutional neural network (CNN), based on a convolutional layer, rectified linear unit layer, pooling layer, fully connected layer, and softmax layer. We also compared three different pooling techniques: max pooling, average pooling, and stochastic pooling. The results showed that our method achieved a sensitivity of 96.88%, a specificity of 97.18%, and an accuracy of 97.04%. Our method was better than three state-of-the-art approaches. In addition, stochastic pooling performed better than max pooling and average pooling. We validated that a CNN with five convolution layers and two fully connected layers performed the best. The GPU yielded a 149× acceleration in training and a 166× acceleration in testing, compared to the CPU.
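
    Stochastic pooling, the variant found to work best here, replaces the maximum or the mean within each pooling window by a sample drawn from the multinomial distribution formed by the (non-negative) activations. A minimal NumPy sketch on a single feature map is shown below; the feature map is random and the 2x2 window size is an assumption for illustration.

      import numpy as np

      # Stochastic pooling on one feature map: within each window the post-ReLU
      # activations are turned into a multinomial distribution and one activation
      # is sampled (training); the probability-weighted average is a common
      # test-time variant. Feature map and window size are illustrative.

      rng = np.random.default_rng(5)

      def stochastic_pool(fmap, size=2, train=True):
          h, w = fmap.shape
          out = np.zeros((h // size, w // size))
          for i in range(0, h, size):
              for j in range(0, w, size):
                  window = fmap[i:i + size, j:j + size].ravel()
                  total = window.sum()
                  if total <= 0:                   # all-zero window: nothing to sample
                      continue
                  p = window / total
                  if train:
                      out[i // size, j // size] = rng.choice(window, p=p)   # sampled value
                  else:
                      out[i // size, j // size] = (window * p).sum()        # weighted mean
          return out

      fmap = np.maximum(rng.standard_normal((8, 8)), 0.0)    # toy ReLU feature map
      print("train-time pooled:\n", stochastic_pool(fmap, train=True))
      print("test-time pooled:\n", stochastic_pool(fmap, train=False))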

  13. Multi-Regge kinematics and the moduli space of Riemann spheres with marked points

    DOE PAGES

    Del Duca, Vittorio; Druc, Stefan; Drummond, James; ...

    2016-08-25

    We show that scattering amplitudes in planar N = 4 Super Yang-Mills in multi-Regge kinematics can naturally be expressed in terms of single-valued iterated integrals on the moduli space of Riemann spheres with marked points. As a consequence, scattering amplitudes in this limit can be expressed as convolutions that can easily be computed using Stokes’ theorem. We apply this framework to MHV amplitudes to leading-logarithmic accuracy (LLA), and we prove that at L loops all MHV amplitudes are determined by amplitudes with up to L + 4 external legs. We also investigate non-MHV amplitudes, and we show that they can be obtained by convoluting the MHV results with a certain helicity flip kernel. We classify all leading singularities that appear at LLA in the Regge limit for arbitrary helicity configurations and any number of external legs. In conclusion, we use our new framework to obtain explicit analytic results at LLA for all MHV amplitudes up to five loops and all non-MHV amplitudes with up to eight external legs and four loops.

  14. Improvements to the kernel function method of steady, subsonic lifting surface theory

    NASA Technical Reports Server (NTRS)

    Medan, R. T.

    1974-01-01

    The application of a kernel function lifting surface method to three dimensional, thin wing theory is discussed. A technique for determining the influence functions is presented. The technique is shown to require fewer quadrature points, while still calculating the influence functions accurately enough to guarantee convergence with an increasing number of spanwise quadrature points. The method also treats control points on the wing leading and trailing edges. The report introduces and employs an aspect of the kernel function method which apparently has never been used before and which significantly enhances the efficiency of the kernel function approach.

  15. MUSIC: MUlti-Scale Initial Conditions

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Abel, Tom

    2013-11-01

    MUSIC generates multi-scale initial conditions with multiple levels of refinements for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.
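
    The basic operation underlying the code, convolving a Gaussian white-noise realization with a real-space transfer-function kernel, is sketched below as a single-level, uniform-grid version carried out by multiplication in Fourier space. The power-law transfer function and the 2D grid are illustrative assumptions; MUSIC itself performs the convolution adaptively across nested refinement levels in three dimensions.

      import numpy as np

      # Single-level sketch: Gaussian white noise filtered by a transfer-function
      # kernel, applied as a multiplication in Fourier space on a uniform 2D grid.
      # The power-law transfer function is an assumption for illustration.

      rng = np.random.default_rng(6)
      n = 256
      white = rng.standard_normal((n, n))                   # white-noise realization

      kx = np.fft.fftfreq(n)
      ky = np.fft.fftfreq(n)
      k = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
      k[0, 0] = np.inf                                      # suppress the k = 0 mode

      transfer = k ** -1.0                                  # assumed P(k) ~ k^-2 => T(k) ~ k^-1
      field = np.fft.ifft2(np.fft.fft2(white) * transfer).real

      print("white-noise std:", white.std().round(3),
            " filtered-field std:", field.std().round(3))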

  16. Visible Motion Blur

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor); Ahumada, Albert J. (Inventor)

    2014-01-01

    A method of measuring motion blur is disclosed comprising obtaining a moving edge temporal profile r(sub 1)(k) of an image of a high-contrast moving edge, calculating the masked local contrast m(sub 1)(k) for r(sub 1)(k) and the masked local contrast m(sub 2)(k) for an ideal step edge waveform r(sub 2)(k) with the same amplitude as r(sub 1)(k), and calculating the measure of motion blur Psi as a difference function. The masked local contrasts are calculated using a set of convolution kernels scaled to simulate the performance of the human visual system, and Psi is measured in units of just-noticeable differences.

  17. Protein fold recognition using geometric kernel data fusion.

    PubMed

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
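
    The key departure from convex linear combinations, fusing kernel matrices through a geometry-inspired matrix mean, can be sketched with the log-Euclidean mean exp((1/m) sum_i log K_i) of symmetric positive-definite kernels. In the snippet below the two feature sets and the RBF kernels are illustrative assumptions; the paper considers several matrix means and real sequence-derived protein features.

      import numpy as np

      # Fusing two kernel matrices by a log-Euclidean (geometric-style) mean
      # instead of a convex linear combination. Feature sets and kernels are
      # illustrative stand-ins for sequence-derived protein features.

      rng = np.random.default_rng(7)
      X1 = rng.standard_normal((40, 10))          # feature set 1 (e.g. PSSM-derived)
      X2 = rng.standard_normal((40, 25))          # feature set 2 (e.g. alignment-derived)

      def rbf_kernel(X, gamma):
          sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * sq)

      def logm_spd(K, jitter=1e-8):
          w, V = np.linalg.eigh(K + jitter * np.eye(K.shape[0]))
          return (V * np.log(w)) @ V.T

      def expm_sym(S):
          w, V = np.linalg.eigh(S)
          return (V * np.exp(w)) @ V.T

      K1 = rbf_kernel(X1, gamma=0.1)
      K2 = rbf_kernel(X2, gamma=0.05)

      K_linear = 0.5 * (K1 + K2)                                    # convex combination
      K_geometric = expm_sym(0.5 * (logm_spd(K1) + logm_spd(K2)))   # log-Euclidean mean

      print("fused kernels differ by (Frobenius):",
            np.linalg.norm(K_linear - K_geometric).round(3))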

  18. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim to generate the most optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
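
    A stripped-down version of the workflow, building a pool of candidate kernels, computing a kernel-based embedding for each, and scoring the candidates, is sketched below using plain kernel PCA. The noisy-circle data, the candidate kernels, and the top-two-eigenvalue variance fraction used as a selection score are illustrative assumptions; the paper develops more principled selection measures and spatially aware extensions.

      import numpy as np

      # Kernel PCA over a pool of candidate kernels with a crude selection score
      # (fraction of spectral "variance" captured by the top two components).
      # Data, kernels, and score are illustrative assumptions only.

      rng = np.random.default_rng(8)
      theta = rng.uniform(0, 2 * np.pi, 200)
      X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))

      def center(K):
          n = K.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n
          return J @ K @ J

      def kpca_embedding(K, dims=2):
          w, V = np.linalg.eigh(center(K))
          idx = np.argsort(w)[::-1][:dims]
          return V[:, idx] * np.sqrt(np.maximum(w[idx], 0)), w

      sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
      candidates = {
          "gaussian(gamma=1)": np.exp(-1.0 * sq),
          "gaussian(gamma=10)": np.exp(-10.0 * sq),
          "poly(degree=3)": (X @ X.T + 1.0) ** 3,
      }

      for name, K in candidates.items():
          emb, w = kpca_embedding(K)
          score = np.maximum(w, 0)[-2:].sum() / np.maximum(w, 0).sum()
          print(f"{name:20s} embedding shape {emb.shape}, top-2 variance fraction = {score:.3f}")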

  19. Frozen Gaussian approximation for 3D seismic tomography

    NASA Astrophysics Data System (ADS)

    Chai, Lihui; Tong, Ping; Yang, Xu

    2018-05-01

    Three-dimensional (3D) wave-equation-based seismic tomography is computationally challenging at large scales and in the high-frequency regime. In this paper, we apply the frozen Gaussian approximation (FGA) method to compute 3D sensitivity kernels and perform high-frequency seismic tomography. Rather than standard ray theory used in seismic inversion (e.g. Kirchhoff migration and Gaussian beam migration), FGA is used to compute the 3D high-frequency sensitivity kernels for travel-time or full waveform inversions. Specifically, we reformulate the equations of the forward and adjoint wavefields for convenient application of FGA, and with this reformulation, one can efficiently compute the Green’s functions whose convolutions with the source time function produce the wavefields needed for the construction of 3D kernels. Moreover, a fast summation method is proposed based on local fast Fourier transform which greatly improves the speed of reconstruction as the last step of the FGA algorithm. We apply FGA to both the travel-time adjoint tomography and full waveform inversion (FWI) on synthetic crosswell seismic data with dominant frequencies as high as those of real crosswell data, and confirm again that FWI requires a more sophisticated initial velocity model for convergence than travel-time adjoint tomography. We also numerically test the accuracy of applying FGA to local earthquake tomography. This study paves the way to directly apply wave-equation-based seismic tomography methods to real data around their dominant frequencies.

  20. Approaches to reducing photon dose calculation errors near metal implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Jessie Y.; Followill, David S.; Howell, Reb

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact reduction methods investigated, the authors found that O-MAR was the most consistent method, resulting in either improved dose calculation accuracy (dental case) or little impact on calculation accuracy (spine case). GSI was unsuccessful at reducing the severe artifacts caused by dental fillings and had very little impact on calculation accuracy. GSI with MARS on the other hand gave mixed results, sometimes introducing metal distortion and increasing calculation errors (titanium rectangular implant and titanium spinal hardware) but other times very successfully reducing artifacts (Cerrobend rectangular implant and dental fillings). Conclusions: Though successful at improving dose calculation accuracy upstream of metal implants, metal kernels were not found to substantially improve accuracy for clinical cases. Of the commercial artifact reduction methods investigated, O-MAR was found to be the most consistent candidate for all-purpose CT simulation imaging. The MARS algorithm for GSI should be used with caution for titanium implants, larger implants, and implants located near heterogeneities as it can distort the size and shape of implants and increase calculation errors.

  1. Convolutional neural network architectures for predicting DNA–protein binding

    PubMed Central

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
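
    The basic building block these architectures vary, a bank of convolutional kernels scanning a one-hot encoded sequence followed by pooling and a readout, can be sketched as a plain NumPy forward pass. The random sequence, the sixteen kernels of width eight, and the random readout weights are placeholders for illustration; in the study the weights are learned from transcription-factor binding data, and the width, depth, and pooling are the quantities being explored.

      import numpy as np

      # Forward pass of a minimal sequence CNN: one-hot DNA, a bank of
      # convolutional kernels (motif detectors), ReLU, global max pooling, and a
      # linear readout. All weights are random placeholders for illustration.

      rng = np.random.default_rng(9)
      BASES = "ACGT"

      def one_hot(seq):
          x = np.zeros((len(seq), 4))
          x[np.arange(len(seq)), [BASES.index(b) for b in seq]] = 1.0
          return x

      def conv1d_scan(x, kernels):
          """x: (L, 4); kernels: (n_k, width, 4) -> activations (n_k, L - width + 1)."""
          n_k, width, _ = kernels.shape
          L = x.shape[0]
          out = np.empty((n_k, L - width + 1))
          for i in range(L - width + 1):
              out[:, i] = np.tensordot(kernels, x[i:i + width], axes=([1, 2], [0, 1]))
          return out

      seq = "".join(rng.choice(list(BASES), size=100))
      kernels = rng.standard_normal((16, 8, 4)) * 0.1     # 16 motif detectors of width 8
      w_out, b_out = rng.standard_normal(16) * 0.1, 0.0

      acts = np.maximum(conv1d_scan(one_hot(seq), kernels), 0.0)   # ReLU
      pooled = acts.max(axis=1)                                    # global max pool
      score = 1.0 / (1.0 + np.exp(-(pooled @ w_out + b_out)))      # binding probability
      print("predicted binding probability:", round(float(score), 3))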

  2. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals.

    PubMed

    Acharya, U Rajendra; Oh, Shu Lih; Hagiwara, Yuki; Tan, Jen Hong; Adeli, Hojjat

    2017-09-27

    An electroencephalogram (EEG) is a commonly used ancillary test to aid in the diagnosis of epilepsy. The EEG signal contains information about the electrical activity of the brain. Traditionally, neurologists employ direct visual inspection to identify epileptiform abnormalities. This technique can be time-consuming, limited by technical artifact, provides variable results secondary to reader expertise level, and is limited in identifying abnormalities. Therefore, it is essential to develop a computer-aided diagnosis (CAD) system to automatically distinguish the class of these EEG signals using machine learning techniques. This is the first study to employ the convolutional neural network (CNN) for analysis of EEG signals. In this work, a 13-layer deep convolutional neural network (CNN) algorithm is implemented to detect normal, preictal, and seizure classes. The proposed technique achieved an accuracy, specificity, and sensitivity of 88.67%, 90.00% and 95.00%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan

    2018-02-01

    Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar micrometre-wide beamlets with extremely high peak doses, separated by a few hundred micrometre-wide low-dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch first clinical trials, accurate and efficient dose calculation methods are an essential prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross-firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.

  4. Learning Midlevel Auditory Codes from Natural Sound Statistics.

    PubMed

    Młynarski, Wiktor; McDermott, Josh H

    2018-03-01

    Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features. Others instantiate opponency between distinct sets of features. Such groupings might be instantiated by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.

  5. Novel vehicle detection system based on stacked DoG kernel and AdaBoost

    PubMed Central

    Kang, Hyun Ho; Lee, Seo Won; You, Sung Hyun

    2018-01-01

    This paper proposes a novel vehicle detection system that can overcome some limitations of typical vehicle detection systems using AdaBoost-based methods. The performance of the AdaBoost-based vehicle detection system is dependent on its training data. Thus, its performance decreases when the shape of a target differs from its training data, or the pattern of a preceding vehicle is not visible in the image due to the light conditions. A stacked Difference of Gaussian (DoG)-based feature extraction algorithm is proposed to address this issue by recognizing common characteristics of vehicles, such as the shadow and the rear wheels beneath them, under various conditions. The common characteristics of vehicles are extracted by applying the stacked DoG-shaped kernel obtained from the 3D plot of an image through a convolution method and investigating only certain regions that have similar patterns. A new vehicle detection system is constructed by combining the novel stacked DoG feature extraction algorithm with the AdaBoost method. Experiments are provided to demonstrate the effectiveness of the proposed vehicle detection system under different conditions. PMID:29513727
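
    The feature extraction idea, convolving the image with a Difference-of-Gaussian shaped kernel so that dark horizontal structures such as the shadow beneath a vehicle respond strongly, can be sketched as below. The kernel sizes, the sigmas, and the synthetic image are illustrative assumptions; the proposed system stacks DoG-shaped kernels derived from the 3D intensity plot and feeds the responses to an AdaBoost classifier.

      import numpy as np
      from scipy.signal import fftconvolve

      # Build a Difference-of-Gaussian (DoG) kernel elongated along x and convolve
      # it with a toy image containing a dark horizontal band (a stand-in for the
      # shadow beneath a vehicle). Sizes, sigmas, and the image are illustrative.

      def gaussian2d(size, sigma_y, sigma_x):
          y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
          g = np.exp(-(y ** 2 / (2 * sigma_y ** 2) + x ** 2 / (2 * sigma_x ** 2)))
          return g / g.sum()

      def dog_kernel(size=21, sigma_narrow=2.0, sigma_wide=6.0):
          # narrow in y, elongated in x, minus a wide isotropic surround
          return gaussian2d(size, sigma_narrow, sigma_wide) - gaussian2d(size, sigma_wide, sigma_wide)

      image = np.full((120, 160), 0.8)
      image[80:86, 40:120] = 0.1                      # dark horizontal band: a toy "shadow"

      response = fftconvolve(-image, dog_kernel(), mode="same")   # negate: shadows are dark
      iy, ix = np.unravel_index(np.argmax(response), response.shape)
      print("strongest DoG response near row", iy, "column", ix)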

  6. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.

  7. SU-F-J-133: Adaptive Radiation Therapy with a Four-Dimensional Dose Calculation Algorithm That Optimizes Dose Distribution Considering Breathing Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Algan, O; Ahmad, S

    Purpose: To model patient motion and produce four-dimensional (4D) optimized dose distributions that consider motion-artifacts in the dose calculation during the treatment planning process. Methods: An algorithm for dose calculation is developed where patient motion is considered in dose calculation at the stage of the treatment planning. First, optimal dose distributions are calculated for the stationary target volume where the dose distributions are optimized considering intensity-modulated radiation therapy (IMRT). Second, a convolution-kernel is produced from the best-fitting curve which matches the motion trajectory of the patient. Third, the motion kernel is deconvolved with the initial dose distribution optimized for the stationary target to produce a dose distribution that is optimized in four-dimensions. This algorithm is tested with measured doses using a mobile phantom that moves with controlled motion patterns. Results: A motion-optimized dose distribution is obtained from the initial dose distribution of the stationary target by deconvolution with the motion-kernel of the mobile target. This motion-optimized dose distribution is equivalent to that optimized for the stationary target using IMRT. The motion-optimized and measured dose distributions are tested with the gamma index with a passing rate of >95% considering 3% dose-difference and 3mm distance-to-agreement. If the dose delivery per beam takes place over several respiratory cycles, then the spread-out of the dose distributions is only dependent on the motion amplitude and not affected by motion frequency and phase. This algorithm is limited to motion amplitudes that are smaller than the length of the target along the direction of motion. Conclusion: An algorithm is developed to optimize dose in 4D. Besides IMRT that provides optimal dose coverage for a stationary target, it extends dose optimization to 4D considering target motion. This algorithm provides an alternative to motion management techniques such as beam-gating or breath-holding and has potential applications in adaptive radiation therapy.

  8. A convolution model for computing the far-field directivity of a parametric loudspeaker array.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2015-02-01

    This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby the steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity and the Westervelt's directivity is suggested, substituting for the past practice of using the product directivity only. Computed directivity of a PLA using the proposed convolution model achieves significant improvement in agreement to measured directivity at a negligible computational cost.
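
    The proposed computation, an angular convolution of the product directivity of the two primary beams with a Westervelt-type directivity, is sketched below. The steered uniform line array standing in for the primary beams and the narrow Lorentzian standing in for the Westervelt term are assumptions for illustration, not the measured parametric loudspeaker array of the paper.

      import numpy as np

      # Angular convolution of a product directivity with a Westervelt-type
      # directivity. A steered uniform line array stands in for the two primary
      # beams and a narrow Lorentzian stands in for the Westervelt term.

      theta = np.linspace(-90.0, 90.0, 721)             # observation angle (degrees)
      rad = np.deg2rad(theta)

      def line_array_directivity(freq_hz, steer_deg, n_elem=16, pitch_m=0.005, c=343.0):
          k = 2.0 * np.pi * freq_hz / c
          psi = k * pitch_m * (np.sin(rad) - np.sin(np.deg2rad(steer_deg)))
          num = np.sin(n_elem * psi / 2.0)
          den = n_elem * np.sin(psi / 2.0)
          den_safe = np.where(np.abs(den) < 1e-12, 1.0, den)
          return np.abs(np.where(np.abs(den) < 1e-12, 1.0, num / den_safe))

      # product directivity of the two steered primary beams (40 kHz and 38 kHz)
      product = line_array_directivity(40000.0, 10.0) * line_array_directivity(38000.0, 10.0)

      westervelt = 1.0 / np.sqrt(1.0 + (np.tan(rad) / 0.08) ** 2)   # narrow stand-in lobe

      combined = np.convolve(product, westervelt, mode="same")
      combined /= combined.max()

      print("difference-frequency beam points near",
            round(float(theta[np.argmax(combined)]), 1), "degrees")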

  9. Fast Image Subtraction Using Multi-cores and GPUs

    NASA Astrophysics Data System (ADS)

    Hartung, Steven; Shukla, H.

    2013-01-01

    Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphical processing unit (GPU) technology in a hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer, or on an MPI configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.

  10. Motor unit number estimation based on high-density surface electromyography decomposition.

    PubMed

    Peng, Yun; He, Jinbao; Yao, Bo; Li, Sheng; Zhou, Ping; Zhang, Yingchun

    2016-09-01

    To advance the motor unit number estimation (MUNE) technique using high-density surface electromyography (EMG) decomposition. The K-means clustering convolution kernel compensation algorithm was employed to detect single motor unit potentials (SMUPs) from high-density surface EMG recordings of the biceps brachii muscles in eight healthy subjects. Contraction forces were controlled at 10%, 20% and 30% of the maximal voluntary contraction (MVC). The achieved MUNE results and the representativeness of the SMUP pools were evaluated using a high-density weighted-average method. Mean numbers of motor units were estimated as 288±132, 155±87, 107±99 and 132±61 with the new MUNE method at 10%, 20%, 30% and 10-30% MVC, respectively. Over 20 SMUPs were obtained at each contraction level, and the mean residual variances were lower than 10%. The new MUNE method allows convenient and non-invasive collection of a large SMUP pool with good representativeness. It provides a useful tool for estimating the motor unit number of proximal muscles. The present new MUNE method successfully avoids the use of intramuscular electrodes or multiple electrical stimuli, which are required in currently available MUNE techniques; as such, the new MUNE method can minimize patient discomfort during MUNE tests. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  11. SU-E-CAMPUS-I-06: Y90 PET/CT for the Instantaneous Determination of Both Target and Non-Target Absorbed Doses Following Hepatic Radioembolization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasciak, A; Kao, J

    2014-06-15

    Purpose: The process of converting Yttrium-90 (Y90) PET/CT images into 3D absorbed dose maps will be explained. The simple methods presented will allow the medical physicist to analyze Y90 PET images following radioembolization and determine the absorbed dose to tumor, normal liver parenchyma and other areas of interest, without application of Monte-Carlo radiation transport or dose-point-kernel (DPK) convolution. Methods: Absorbed dose can be computed from Y90 PET/CT images based on the premise that radioembolization is a permanent implant with a constant relative activity distribution after infusion. Many Y90 PET/CT publications have used DPK convolution to obtain 3D absorbed dose maps. However, this method requires specialized software, limiting clinical utility. The Local Deposition method, an alternative to DPK convolution, can be used to obtain absorbed dose and requires no additional computer processing. Pixel values from regions of interest drawn on Y90 PET/CT images can be converted to absorbed dose (Gy) by multiplication with a scalar constant. Results: There is evidence suggesting that the Local Deposition method may actually be more accurate than DPK convolution, and it has been successfully used in a recent Y90 PET/CT publication. We have analytically compared dose-volume histograms (DVHs) for phantom hot-spheres to determine the difference between the DPK and Local Deposition methods as a function of the PET scanner point-spread function for Y90. We have found that for PET/CT systems with a FWHM greater than 3.0 mm when imaging Y90, the Local Deposition method provides a more accurate representation of the DVH than DPK convolution, regardless of target size. Conclusion: Using the Local Deposition method, post-radioembolization Y90 PET/CT images can be transformed into 3D absorbed dose maps of the liver. An interventional radiologist or a medical physicist can perform this transformation in a clinical setting, allowing for rapid prediction of treatment efficacy by comparison with published tumoricidal thresholds.
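
    A minimal sketch of the Local Deposition method described above: voxel dose is the PET activity concentration times a scalar. The energy-per-activity constant (about 49.7 J per GBq of fully decayed Y-90) and the soft-tissue density of 1.03 g/mL are standard assumptions supplied here for illustration, not values quoted in the abstract.

        import numpy as np

        ENERGY_PER_GBQ_J = 49.67        # assumed: J of beta energy per GBq of Y-90, fully decayed
        TISSUE_DENSITY_G_PER_ML = 1.03  # assumed soft-tissue density

        def local_deposition_dose(activity_gbq_per_ml):
            """Voxelwise absorbed dose (Gy) from a Y-90 PET activity-concentration map
            (GBq/mL, decay-corrected to the time of administration), assuming each
            voxel absorbs the beta energy emitted within it (Local Deposition method)."""
            mass_kg_per_ml = TISSUE_DENSITY_G_PER_ML * 1e-3
            return np.asarray(activity_gbq_per_ml) * ENERGY_PER_GBQ_J / mass_kg_per_ml

        # Example: ~2 GBq spread uniformly over a 1.5 L liver gives roughly 64 Gy.
        print(local_deposition_dose(2.0 / 1500.0))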

  12. Rapid simulation of spatial epidemics: a spectral method.

    PubMed

    Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J

    2015-04-07

    Spatial structure and hence the spatial position of host populations plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast-Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane; the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle-infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. Copyright © 2015 Elsevier Ltd. All rights reserved.
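
    A minimal sketch of the FFT-based rate recalculation on a regular grid (the paper works with arbitrarily located habitats or farms mapped onto such a grid); the exponential kernel and all sizes are illustrative.

        import numpy as np
        from scipy.signal import fftconvolve

        def force_of_infection(infectious_grid, kernel):
            """Spatial force of infection on every grid cell: convolution of an
            isotropic transmission kernel with the 'image' of infectious individuals."""
            return fftconvolve(infectious_grid, kernel, mode="same")

        # Illustrative: 200x200 landscape with sparse infectious seeds and an
        # exponential transmission kernel with a decay length of 5 cells.
        n = 200
        rng = np.random.default_rng(0)
        infectious = (rng.random((n, n)) < 0.001).astype(float)
        y, x = np.mgrid[-25:26, -25:26]
        kernel = np.exp(-np.sqrt(x**2 + y**2) / 5.0)
        kernel /= kernel.sum()
        foi = force_of_infection(infectious, kernel)
        # The per-susceptible infection rate in cell (i, j) is beta * foi[i, j] for some beta.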

  13. Kernel analysis in TeV gamma-ray selection

    NASA Astrophysics Data System (ADS)

    Moriarty, P.; Samuelson, F. W.

    2000-06-01

    We discuss the use of kernel analysis as a technique for selecting gamma-ray candidates in Atmospheric Cherenkov astronomy. The method is applied to observations of the Crab Nebula and Markarian 501 recorded with the Whipple 10 m Atmospheric Cherenkov imaging system, and the results are compared with the standard Supercuts analysis. Since kernel analysis is computationally intensive, we examine approaches to reducing the computational load. Extension of the technique to estimate the energy of the gamma-ray primary is considered.

  14. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    NASA Astrophysics Data System (ADS)

    Komatitsch, Dimitri; Xie, Zhinan; Bozdaǧ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen

    2016-09-01

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.

  15. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    PubMed Central

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
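
    For orientation, a brief sketch of the starting point shared by the sum-kernel SVM baseline and MKL: a (weighted) linear combination of precomputed kernel matrices fed to a support vector machine. MKL would learn the weights (sparsely under l1-norm regularization, smoothly for lp-norm with p > 1); here they are simply supplied. Variable names in the usage comment are hypothetical.

        import numpy as np
        from sklearn.svm import SVC

        def combine_kernels(kernel_list, weights=None):
            """Weighted linear combination of precomputed kernel (Gram) matrices.
            With uniform weights this is the unweighted sum-kernel baseline; an MKL
            method would instead learn `weights`."""
            if weights is None:
                weights = np.ones(len(kernel_list)) / len(kernel_list)
            return sum(w * K for w, K in zip(weights, kernel_list))

        # K_train_list / K_test_list: per-feature Gram matrices (e.g. chi2 kernels on
        # bag-of-visual-words histograms); hypothetical names for illustration.
        # clf = SVC(kernel="precomputed").fit(combine_kernels(K_train_list), y_train)
        # y_pred = clf.predict(combine_kernels(K_test_list))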

  16. Chemical Shift Encoded Water–Fat Separation Using Parallel Imaging and Compressed Sensing

    PubMed Central

    Sharma, Samir D.; Hu, Houchun H.; Nayak, Krishna S.

    2013-01-01

    Chemical shift encoded techniques have received considerable attention recently because they can reliably separate water and fat in the presence of off-resonance. The insensitivity to off-resonance requires that data be acquired at multiple echo times, which increases the scan time as compared to a single echo acquisition. The increased scan time often requires that a compromise be made between the spatial resolution, the volume coverage, and the tolerance to artifacts from subject motion. This work describes a combined parallel imaging and compressed sensing approach for accelerated water–fat separation. In addition, the use of multiscale cubic B-splines for B0 field map estimation is introduced. The water and fat images and the B0 field map are estimated via an alternating minimization. Coil sensitivity information is derived from a calculated k-space convolution kernel and l1-regularization is imposed on the coil-combined water and fat image estimates. Uniform water–fat separation is demonstrated from retrospectively undersampled data in the liver, brachial plexus, ankle, and knee as well as from a prospectively undersampled acquisition of the knee at 8.6x acceleration. PMID:22505285

  17. An Interactive Graphics Program for Assistance in Learning Convolution.

    ERIC Educational Resources Information Center

    Frederick, Dean K.; Waag, Gary L.

    1980-01-01

    A program has been written for the interactive computer graphics facility at Rensselaer Polytechnic Institute that is designed to assist the user in learning the mathematical technique of convolving two functions. Because convolution can be represented graphically by a sequence of steps involving folding, shifting, multiplying, and integration, it…
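
    A discrete analogue of the graphical procedure the program teaches, folding one function, shifting it, multiplying by the other and integrating, is sketched below; it reproduces the library convolution up to the step-size scaling.

        import numpy as np

        def convolve_by_steps(f, g, dt):
            """Discrete illustration of graphical convolution: for each shift,
            fold g, shift it, multiply by f and 'integrate' (sum * dt)."""
            g_folded = g[::-1]                                # fold
            n = len(f) + len(g) - 1
            out = np.zeros(n)
            f_pad = np.pad(f, (len(g) - 1, len(g) - 1))
            for k in range(n):                                # shift
                window = f_pad[k:k + len(g)]
                out[k] = np.sum(window * g_folded) * dt       # multiply and integrate
            return out

        # Agrees with the library routine up to the dt scaling:
        f = np.ones(50)
        g = np.exp(-np.linspace(0, 5, 50))
        dt = 0.1
        assert np.allclose(convolve_by_steps(f, g, dt), np.convolve(f, g) * dt)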

  18. Imaging and automated detection of Sitophilus oryzae (Coleoptera: Curculionidae) pupae in hard red winter wheat.

    PubMed

    Toews, Michael D; Pearson, Tom C; Campbell, James F

    2006-04-01

    Computed tomography, an imaging technique commonly used for diagnosing internal human health ailments, uses multiple x-rays and sophisticated software to recreate a cross-sectional representation of a subject. The use of this technique to image hard red winter wheat, Triticum aestivum L., samples infested with pupae of Sitophilus oryzae (L.) was investigated. A software program was developed to rapidly recognize and quantify the infested kernels. Samples were imaged in a 7.6-cm (o.d.) plastic tube containing 0, 50, or 100 infested kernels per kg of wheat. Interkernel spaces were filled with corn oil so as to increase the contrast between voids inside kernels and voids among kernels. Automated image processing, using a custom C language software program, was conducted separately on each 100 g portion of the prepared samples. The average detection accuracy in the five infested kernels per 100-g samples was 94.4 +/- 7.3% (mean +/- SD, n = 10), whereas the average detection accuracy in the 10 infested kernels per 100-g sample was 87.3 +/- 7.9% (n = 10). Detection accuracy in the 10 infested kernels per 100-g samples was slightly less than the five infested kernels per 100-g samples because of some infested kernels overlapping with each other or air bubbles in the oil. A mean of 1.2 +/- 0.9 (n = 10) bubbles (per tube) was incorrectly classed as infested kernels in replicates containing no infested kernels. In light of these positive results, future studies should be conducted using additional grains, insect species, and life stages.

  19. Computed tomography coronary stent imaging with iterative reconstruction: a trade-off study between medium kernel and sharp kernel.

    PubMed

    Zhou, Qijing; Jiang, Biao; Dong, Fei; Huang, Peiyu; Liu, Hongtao; Zhang, Minming

    2014-01-01

    To evaluate the improvement of the iterative reconstruction in image space (IRIS) technique in computed tomographic (CT) coronary stent imaging with a sharp kernel, and to make a trade-off analysis. Fifty-six patients with 105 stents were examined by 128-slice dual-source CT coronary angiography (CTCA). Images were reconstructed using standard filtered back projection (FBP) and IRIS, with both the medium kernel and the sharp kernel applied. Image noise and the stent diameter were investigated. Image noise was measured both in the background vessel and in the in-stent lumen as objective image evaluation. An image noise score and a stent score were assigned as subjective image evaluation. The CTCA images reconstructed with IRIS were associated with significant noise reduction compared with CTCA images reconstructed using the FBP technique in both the background vessel and the in-stent lumen (the background noise decreased by approximately 25.4% ± 8.2% with the medium kernel (P

  20. Generalized quantum kinetic expansion: Higher-order corrections to multichromophoric Förster theory

    NASA Astrophysics Data System (ADS)

    Wu, Jianlan; Gong, Zhihao; Tang, Zhoufei

    2015-08-01

    For a general two-cluster energy transfer network, a new methodology of the generalized quantum kinetic expansion (GQKE) method is developed, which predicts an exact time-convolution equation for the cluster population evolution under the initial condition of the local cluster equilibrium state. The cluster-to-cluster rate kernel is expanded over the inter-cluster couplings. The lowest second-order GQKE rate recovers the multichromophoric Förster theory (MCFT) rate. The higher-order corrections to the MCFT rate are systematically included using the continued fraction resummation form, resulting in the resummed GQKE method. The reliability of the GQKE methodology is verified in two model systems, revealing the relevance of higher-order corrections.
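
    Schematically, the time-convolution equation referred to above has the generalized-master-equation form (a generic sketch; the specific GQKE kernel and its expansion in the inter-cluster coupling are defined in the paper):

        \frac{dP_a(t)}{dt} = -\int_0^t \mathcal{K}_{b \leftarrow a}(t-\tau)\, P_a(\tau)\, d\tau
                             + \int_0^t \mathcal{K}_{a \leftarrow b}(t-\tau)\, P_b(\tau)\, d\tau ,

    where P_a and P_b are the two cluster populations; the lowest (second-order) term of the kernel expansion in the inter-cluster coupling reproduces the multichromophoric Förster rate.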

  1. Scatter of X-rays on polished surfaces

    NASA Technical Reports Server (NTRS)

    Hasinger, G.

    1981-01-01

    In investigating the dispersion properties of telescope mirrors used in X-ray astronomy, the slight scattering characteristics of X-ray radiation by statistically rough surfaces were examined. The mathematics and geometry of scattering theory are described. The measurement test assembly is described and results of measurements on samples of plane mirrors are given. Measurement results are evaluated. The direct beam, the convolution of the direct beam and the scattering halo, curve fitting by the method of least squares, various autocorrelation functions, results of the fitting procedure for small scattering, and deviations in the kernel of the scattering distribution are presented. A procedure for quality testing of mirror systems through diagnosis of rough surfaces is described.

  2. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    PubMed Central

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We use a nonlinear manifold learning technique to obtain the low-dimensional shape space, which is trained with training data having the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space obtained by nonlinear manifold learning and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track the object position and contour and to avoid background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking the human position and describing the human contour. PMID:27379165

  3. Comment on 'Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study'.

    PubMed

    Valdes, Gilmer; Interian, Yannet

    2018-03-15

    The application of machine learning (ML) presents tremendous opportunities for the field of oncology, thus we read 'Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study' with great interest. In this article, the authors used state of the art techniques: a pre-trained convolutional neural network (VGG-16 CNN), transfer learning, data augmentation, drop out and early stopping, all of which are directly responsible for the success and the excitement that these algorithms have created in other fields. We believe that the use of these techniques can offer tremendous opportunities in the field of Medical Physics and as such we would like to praise the authors for their pioneering application to the field of Radiation Oncology. That being said, given that the field of Medical Physics has unique characteristics that differentiate us from those fields where these techniques have been applied successfully, we would like to raise some points for future discussion and follow up studies that could help the community understand the limitations and nuances of deep learning techniques.

  4. Scene Semantic Segmentation from Indoor Rgb-D Images Using Encode-Decoder Fully Convolutional Networks

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Li, T.; Pan, L.; Kang, Z.

    2017-09-01

    With increasing attention to the indoor environment and the development of low-cost RGB-D sensors, indoor RGB-D images are easily acquired. However, scene semantic segmentation is still an open problem, which restricts indoor applications. The depth information can help to distinguish regions that are difficult to segment out of the RGB images because of similar color or texture in indoor scenes. How to utilize the depth information is the key problem of semantic segmentation for RGB-D images. In this paper, we propose an encoder-decoder fully convolutional network for RGB-D image classification. We use the Multiple Kernel Maximum Mean Discrepancy (MK-MMD) as a distance measure to find common and special features of the RGB and D images in the network, to enhance the classification performance automatically. To explore better ways of applying MMD, we designed two strategies: the first calculates MMD for each feature map, and the other calculates MMD for the whole batch of features. Based on the classification result, we use fully connected CRFs for the semantic segmentation. The experimental results show that our method achieves good performance on indoor RGB-D image semantic segmentation.
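
    A minimal numpy sketch of the MMD distance used above, with a small bank of RBF bandwidths combined with uniform weights; the MK-MMD in the paper additionally learns the kernel weights, which is omitted here.

        import numpy as np

        def rbf_mmd2(X, Y, gammas=(0.5, 1.0, 2.0)):
            """Biased estimate of the (multi-kernel) maximum mean discrepancy between
            two sets of feature vectors X (n x d) and Y (m x d), summed over a bank of
            RBF bandwidths with uniform weights."""
            def gram(A, B, g):
                d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-g * d2)
            return sum(gram(X, X, g).mean() + gram(Y, Y, g).mean() - 2 * gram(X, Y, g).mean()
                       for g in gammas)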

  5. Interpolation between spatial frameworks: an application of process convolution to estimating neighbourhood disease prevalence.

    PubMed

    Congdon, Peter

    2014-04-01

    Health data may be collected across one spatial framework (e.g. health provider agencies), but contrasts in health over another spatial framework (neighbourhoods) may be of policy interest. In the UK, population prevalence totals for chronic diseases are provided for populations served by general practitioner practices, but not for neighbourhoods (small areas of circa 1500 people), raising the question whether data for one framework can be used to provide spatially interpolated estimates of disease prevalence for the other. A discrete process convolution is applied to this end and has advantages when there are a relatively large number of area units in one or other framework. Additionally, the interpolation is modified to take account of the observed neighbourhood indicators (e.g. hospitalisation rates) of neighbourhood disease prevalence. These are reflective indicators of neighbourhood prevalence viewed as a latent construct. An illustrative application is to prevalence of psychosis in northeast London, containing 190 general practitioner practices and 562 neighbourhoods, including an assessment of sensitivity to kernel choice (e.g. normal vs exponential). This application illustrates how a zero-inflated Poisson can be used as the likelihood model for a reflective indicator.
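
    A minimal sketch of a discrete process convolution of the kind described above: latent effects at a coarse grid of knots are smoothed to neighbourhood centroids by a Gaussian kernel, giving a spatially structured latent prevalence surface. Coordinates, kernel standard deviation and the random knot effects are illustrative.

        import numpy as np

        def process_convolution(knot_xy, target_xy, knot_effects, kernel_sd):
            """Discrete process convolution: each target location (e.g. neighbourhood
            centroid) receives a kernel-weighted sum of latent effects placed at a
            sparse grid of knots, producing a smooth latent surface."""
            d2 = ((target_xy[:, None, :] - knot_xy[None, :, :]) ** 2).sum(-1)
            W = np.exp(-0.5 * d2 / kernel_sd ** 2)          # Gaussian kernel weights
            return W @ knot_effects

        # Illustrative: 10x10 knot grid over the unit square, 500 random centroids.
        rng = np.random.default_rng(0)
        gx, gy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
        knots = np.column_stack([gx.ravel(), gy.ravel()])
        centroids = rng.random((500, 2))
        latent = process_convolution(knots, centroids, rng.normal(size=len(knots)), kernel_sd=0.15)
        # `latent` would enter the linear predictor of, e.g., a zero-inflated Poisson likelihood.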

  6. Wrist sensor-based tremor severity quantification in Parkinson's disease using convolutional neural network.

    PubMed

    Kim, Han Byul; Lee, Woong Woo; Kim, Aryun; Lee, Hong Ji; Park, Hye Young; Jeon, Hyo Seon; Kim, Sang Kyong; Jeon, Beomseok; Park, Kwang S

    2018-04-01

    Tremor is a commonly observed symptom in patients with Parkinson's disease (PD), and accurate measurement of tremor severity is essential in prescribing appropriate treatment to relieve its symptoms. We propose a tremor assessment system based on the use of a convolutional neural network (CNN) to differentiate the severity of symptoms as measured in data collected from a wearable device. Tremor signals were recorded from 92 PD patients using a custom-developed device (SNUMAP) equipped with an accelerometer and gyroscope mounted on a wrist module. Neurologists assessed the tremor symptoms on the Unified Parkinson's Disease Rating Scale (UPDRS) from simultaneously recorded video footage. The measured data were transformed into the frequency domain and used to construct a two-dimensional image for training the network, and the CNN model was trained by convolving tremor signal images with kernels. The proposed CNN architecture was compared to previously studied machine learning algorithms and found to outperform them (accuracy = 0.85, linear weighted kappa = 0.85). More precise monitoring of PD tremor symptoms in daily life could be possible using our proposed method. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Time history solution program, L225 (TEV126). Volume 1: Engineering and usage

    NASA Technical Reports Server (NTRS)

    Kroll, R. I.; Tornallyay, A.; Clemmons, R. E.

    1979-01-01

    Volume 1 of a two volume document is presented. The usage of the convolution program L225 (TEV 126) is described. The program calculates the time response of a linear system by convoluting the impulsive response function with the time-dependent excitation function. The convolution is performed as a multiplication in the frequency domain. Fast Fourier transform techniques are used to transform the product back into the time domain to obtain response time histories. A brief description of the analysis used is presented.
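
    A minimal numpy stand-in for the frequency-domain convolution that L225 performs (not the original program): transform the impulse-response and excitation functions, multiply, and transform back to obtain the response time history. The single-degree-of-freedom impulse response and step load are illustrative.

        import numpy as np

        def time_history_response(impulse_response, excitation, dt):
            """Response time history by frequency-domain convolution: multiply the FFTs
            of the impulse-response function and the excitation, then transform back."""
            n = len(impulse_response) + len(excitation) - 1
            H = np.fft.rfft(impulse_response, n)
            F = np.fft.rfft(excitation, n)
            return np.fft.irfft(H * F, n) * dt     # dt approximates the convolution integral

        # Example: lightly damped oscillator impulse response driven by a unit step load.
        dt = 0.01
        t = np.arange(0, 20, dt)
        h = np.exp(-0.05 * t) * np.sin(2 * np.pi * 1.0 * t)   # illustrative h(t)
        f = np.ones_like(t)
        y = time_history_response(h, f, dt)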

  8. Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m0 binary memory cells and k0 (> m0) inputs, a state diagram of 2^k0 states was used for the transfer function bound. A reduced state diagram of (2^m0 + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.

  9. Introducing single-crystal scattering and optical potentials into MCNPX: Predicting neutron emission from a convoluted moderator

    DOE PAGES

    Gallmeier, F. X.; Iverson, E. B.; Lu, W.; ...

    2016-01-08

    Neutron transport simulation codes are an indispensable tool used for the design and construction of modern neutron scattering facilities and instrumentation. It has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well modelled by the existing codes. In particular, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4 and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential ingredients for the performance of monochromators and ultra-cold neutron transport, respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. Furthermore, we have also generated silicon scattering kernels for single crystals of definable orientation with respect to an incoming neutron beam. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cutoff at locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter, composed of polyethylene and single-crystal silicon, were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon/void layers. Finally, the convoluted moderator experiments described by Iverson et al. were simulated, and we find satisfactory agreement between the measurements and the results of simulations performed using the tools we have developed.

  10. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network.

    PubMed

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-12-12

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, physical sensors are limited under the operational conditions of spacecraft because of the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The advantage of the novel technique is further demonstrated in a simply supported beam experiment by comparison with a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, to estimate the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy.

  11. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network

    PubMed Central

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-01-01

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, physical sensors are limited under the operational conditions of spacecraft because of the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The advantage of the novel technique is further demonstrated in a simply supported beam experiment by comparison with a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, to estimate the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy. PMID:29231868

  12. Reconfigurable Gabor Filter For Fingerprint Recognition Using FPGA Verilog

    NASA Astrophysics Data System (ADS)

    Rosshidi, H. T.; Hadi, A. R.

    2009-06-01

    This paper presents an implementation of the Gabor filter for fingerprint recognition using Verilog HDL. The work demonstrates the application of the Gabor filter technique to enhance the fingerprint image. The incoming signal, in the form of image pixels, is filtered (convolved) with the Gabor filter to define the ridge and valley regions of the fingerprint. This is done with a real-time convolver based on a Field Programmable Gate Array (FPGA) that performs the convolution operation. The main characteristics of the proposed approach are the use of memory to store the incoming image pixels and the coefficients of the Gabor filter before the matrix convolution takes place. The result is the signal convolved with the Gabor coefficients.
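
    A software analogue of the convolution the FPGA performs, sketched with an even (cosine-phase) Gabor kernel; the kernel size, ridge frequency and orientation handling are illustrative simplifications of a full fingerprint-enhancement pipeline.

        import numpy as np
        from scipy.signal import convolve2d

        def gabor_kernel(ksize, theta, freq, sigma):
            """Real (even) Gabor kernel tuned to ridge orientation `theta` (rad) and
            ridge frequency `freq` (cycles/pixel)."""
            half = ksize // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            yr = -x * np.sin(theta) + y * np.cos(theta)
            return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

        def enhance(image, theta, freq=0.1, sigma=4.0, ksize=15):
            """Convolve the fingerprint image with a Gabor kernel (software stand-in
            for the FPGA convolver described in the abstract)."""
            return convolve2d(image, gabor_kernel(ksize, theta, freq, sigma),
                              mode="same", boundary="symm")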

  13. Wilson Dslash Kernel From Lattice QCD Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the technique gives excellent performance on the regular Xeon architecture as well.

  14. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
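
    A minimal sketch of the kernel estimator discussed in the report, with a Silverman-type rule of thumb standing in for the automatic choice of the scaling factor (the report's own algorithm is not reproduced here):

        import numpy as np

        def kde(x_grid, sample, h):
            """Gaussian kernel density estimate; `h` is the kernel scaling factor whose
            data-driven choice is the subject of the report."""
            u = (x_grid[:, None] - sample[None, :]) / h
            return np.exp(-0.5 * u**2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

        # Rule-of-thumb bandwidth as one automatic choice of h (an assumption here).
        sample = np.random.default_rng(1).normal(size=200)
        h = 1.06 * sample.std(ddof=1) * len(sample) ** (-1 / 5)
        density = kde(np.linspace(-4, 4, 401), sample, h)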

  15. An efficient method to determine double Gaussian fluence parameters in the eclipse™ proton pencil beam model.

    PubMed

    Shen, Jiajian; Liu, Wei; Stoker, Joshua; Ding, Xiaoning; Anand, Aman; Hu, Yanle; Herman, Michael G; Bues, Martin

    2016-12-01

    To find an efficient method to configure the proton fluence for a commercial proton pencil beam scanning (PBS) treatment planning system (TPS). An in-water dose kernel was developed to mimic the dose kernel of the pencil beam convolution superposition algorithm, which is part of the commercial proton beam therapy planning software, eclipse™ (Varian Medical Systems, Palo Alto, CA). The field size factor (FSF) was calculated based on the spot profile reconstructed by the in-house dose kernel. The workflow of using FSFs to find the desirable proton fluence is presented. The in-house derived spot profile and FSF were validated by a direct comparison with those calculated by the eclipse TPS. The validation included 420 comparisons of the FSFs from 14 proton energies, various field sizes from 2 to 20 cm and various depths from 20% to 80% of proton range. The relative in-water lateral profiles between the in-house calculation and the eclipse TPS agree very well even at the level of 10^-4. The FSFs between the in-house calculation and the eclipse TPS also agree well. The maximum deviation is within 0.5%, and the standard deviation is less than 0.1%. The authors' method significantly reduced the time to find the desirable proton fluences of the clinical energies. The method is extensively validated and can be applied to any proton centers using PBS and the eclipse TPS.

  16. Feasibility of detecting Aflatoxin B1 in single maize kernels using hyperspectral imaging

    USDA-ARS?s Scientific Manuscript database

    The feasibility of detecting Aflatoxin B1 (AFB1) in single maize kernel inoculated with Aspergillus flavus conidia in the field, as well as its spatial distribution in the kernels, was assessed using near-infrared hyperspectral imaging (HSI) technique. Firstly, an image mask was applied to a pixel-b...

  17. Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain

    ERIC Educational Resources Information Center

    Hannagan, Thomas; Grainger, Jonathan

    2012-01-01

    It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jakel, Scholkopf, & Wichmann, 2009). We point out that "String kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal…
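
    For concreteness, a tiny k-spectrum string kernel of the kind originally designed for protein function prediction and spam detection, applied here to short letter strings (an illustrative sketch, not the specific kernel the authors discuss):

        from collections import Counter

        def spectrum_kernel(s1, s2, k=3):
            """k-spectrum string kernel: inner product of k-mer count vectors."""
            c1 = Counter(s1[i:i + k] for i in range(len(s1) - k + 1))
            c2 = Counter(s2[i:i + k] for i in range(len(s2) - k + 1))
            return sum(c1[m] * c2[m] for m in c1 if m in c2)

        print(spectrum_kernel("TABLE", "CABLE", k=3))   # shares 'ABL' and 'BLE' -> 2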

  18. Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Hunter, Craig A.

    1999-01-01

    An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring is convolution location, Mach number, boattail angle, and NPR dependent. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.

  19. SU-E-T-02: 90Y Microspheres Dosimetry Calculation with Voxel-S-Value Method: A Simple Use in the Clinic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maneru, F; Gracia, M; Gallardo, N

    2015-06-15

    Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with 90Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and dose in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the previous simulation of the treatment were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study in a gamma camera (Infinia, General Electric Healthcare). Attenuation correction and the ordered-subsets expectation maximization (OSEM) algorithm were applied. For the VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix and the local dose deposition kernel (S values) was implemented with in-house developed software based on Python code. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: Liver mean dose is consistent with Partition method calculations (accepted as a good standard). Tumor dose has not been evaluated because of its high dependence on the contouring. Small lesion size, hot spots in healthy tissue and blurred limits can strongly affect the dose distribution in tumors. Extra work includes exporting and importing images and other DICOM files, creating and calculating a dummy external radiotherapy plan, the convolution calculation, and evaluation of the dose distribution with Dicompyler. The total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The whole process is short enough to be carried out on the same day as the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than other methods of dose calculation usually applied in the clinic.
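
    The core of the VSV calculation is a single 3-D convolution; a minimal SciPy sketch is given below. Input names are hypothetical: the cumulated-activity map would come from the registered SPECT scaled to the prescribed Y-90 activity, and the S-value kernel from the downloaded voxel S-values reshaped to a small odd-sized 3-D array.

        from scipy.signal import fftconvolve

        def vsv_dose(cumulated_activity, s_kernel):
            """Voxel-S-value dosimetry: absorbed-dose map as the 3-D convolution of the
            cumulated-activity map (e.g. MBq*s per voxel) with the voxel S-value kernel
            (e.g. mGy per MBq*s), both defined on the same voxel grid."""
            return fftconvolve(cumulated_activity, s_kernel, mode="same")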

  20. Experimental determination of the effect of detector size on profile measurements in narrow photon beams.

    PubMed

    Pappas, E; Maris, T G; Papadakis, A; Zacharopoulou, F; Damilakis, J; Papanikolaou, N; Gourtsoyiannis, N

    2006-10-01

    The aim of this work is to investigate experimentally the detector-size effect on narrow-beam profile measurements. Polymer gel and magnetic resonance imaging dosimetry were used for this purpose. Profile measurements (Pm(s)) of a 5 mm diameter 6 MV stereotactic beam were performed using polymer gels. Eight measurements of the profile of this narrow beam were performed using, correspondingly, eight different detector sizes. This was achieved using high-spatial-resolution (0.25 mm) two-dimensional measurements and eight different signal integration volumes A × A × slice thickness, simulating detectors of different size. "A" ranged from 0.25 to 7.5 mm, representing the detector size. The gel-derived profiles exhibited increased penumbra width with increasing detector size, for sizes >0.5 mm. By extrapolating the gel-derived profiles to zero detector size, the true profile (Pt) of the studied beam was derived. The same polymer gel data were also used to simulate a small-volume ion chamber profile measurement of the same beam, in terms of volume averaging. The comparison between these results and the corresponding actual small-volume chamber profile measurements performed in this study reveals that the penumbra broadening caused by both volume averaging and electron transport alterations (present in actual ion chamber profile measurements) is much more pronounced than that resulting from volume averaging alone (present in gel-derived profiles simulating ion chamber profile measurements). Therefore, not only the detector size, but also its composition and tissue equivalency, is proved to be an important factor for correct narrow-beam profile measurements. Additionally, the convolution kernels related to each detector size and to the air ion chamber were calculated using the corresponding profile measurements (Pm(s)), the gel-derived true profile (Pt), and convolution theory. The response kernel of any desired detector can thus be derived, allowing the elimination of the errors associated with narrow-beam profile measurements.
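
    The volume-averaging part of the detector-size effect can be reproduced with a one-line convolution, as sketched below for a 1-D profile; the electron-transport (composition) component discussed above is not modelled by this.

        import numpy as np

        def volume_averaged_profile(true_profile, dx_mm, detector_size_mm):
            """Simulate the broadening of a narrow-beam profile caused by detector size
            alone: volume averaging modelled as convolution of the true profile with a
            normalized box kernel of width A."""
            n = max(1, int(round(detector_size_mm / dx_mm)))
            box = np.ones(n) / n
            return np.convolve(true_profile, box, mode="same")

        # Example (hypothetical inputs): 0.25 mm sampling, a 4 mm detector.
        # broadened = volume_averaged_profile(true_profile, dx_mm=0.25, detector_size_mm=4.0)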

  1. Quality Evaluation of Land-Cover Classification Using Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Dang, Y.; Zhang, J.; Zhao, Y.; Luo, F.; Ma, W.; Yu, F.

    2018-04-01

    Land-cover classification is one of the most important products of earth observation. It focuses mainly on profiling the physical character of the land surface with temporal and distribution attributes and contains information on both natural and man-made coverage elements, such as vegetation, soil, glaciers, rivers, lakes, marsh wetlands and various man-made structures. In recent years, the amount of high-resolution remote sensing data has increased sharply. Accordingly, the volume of land-cover classification products increases, as does the need to evaluate such frequently updated products, which is a major challenge. Conventionally, the automatic quality evaluation of land-cover classification is performed with pixel-based classification algorithms, which makes the task considerably harder and consequently difficult to keep pace with the required updating frequency. In this paper, we propose a novel quality evaluation approach for land-cover classification based on a scene classification method, a convolutional neural network (CNN) model. By learning from remote sensing data, the randomly initialized kernels that serve as filter matrices evolve into operators with functions similar to hand-crafted operators, such as the Sobel or Canny operators, while other kernels learned by the CNN model are much more complex and cannot be interpreted as existing filters. The method, with the CNN approach as its core algorithm, serves quality-evaluation tasks well because it produces outputs that directly represent an image's membership grade for certain classes. An automatic quality evaluation approach for land-cover DLG-DOM coupling data (DLG for Digital Line Graphic, DOM for Digital Orthophoto Map) is introduced in this paper. The CNN model proved to be a robust method for image evaluation and motivated the idea of an automatic quality evaluation approach for land-cover classification. Based on this experiment, quality evaluation of DLG-DOM coupling land-cover classification or other kinds of labelled remote sensing data can be studied further.

  2. DSN telemetry system performance with convolutionally coded data

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.; Benjauthrit, B.; Greenhall, C. A.; Kuma, D. M.; Lam, J. K.; Wong, J. S.; Urech, J.; Vit, L. D.

    1975-01-01

    The results obtained to date and the plans for future experiments for the DSN telemetry system were presented. The performance of the DSN telemetry system in decoding convolutionally coded data by both sequential and maximum likelihood techniques is being determined by testing at various deep space stations. The evaluation of performance models is also an objective of this activity.

  3. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels, and the result is a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain, compared to several shallow kernels, for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
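
    A toy two-layer deep kernel in the spirit of the paper (not the authors' exact parameterization): a nonlinear activation applied elementwise to a weighted combination of elementary kernels, which keeps the Gram matrix positive semi-definite because nonnegative combinations and elementwise exponentials of PSD kernels remain PSD.

        import numpy as np

        def rbf(X, Y, gamma=0.5):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def deep_kernel(X, Y, w=(0.5, 0.5)):
            """Layer 1: nonnegative combination of an RBF and a linear kernel.
            Layer 2: elementwise exponential as the nonlinear activation."""
            layer1 = w[0] * rbf(X, Y) + w[1] * (X @ Y.T)
            return np.exp(layer1)

        # The resulting Gram matrix can be passed to an SVM with kernel="precomputed".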

  4. Kernel and divergence techniques in high energy physics separations

    NASA Astrophysics Data System (ADS)

    Bouř, Petr; Kůs, Václav; Franc, Jiří

    2017-10-01

    Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We demonstrate the great potential of adaptive kernel density estimation as the nested separation method of the supervised binary divergence decision tree. We also provide a proof of an alternative computing approach for kernel estimates utilizing the Fourier transform. Further, we apply our method to a Monte Carlo data set from the Tevatron particle accelerator at the DØ experiment at Fermilab and provide final top-antitop signal separation results. We achieved up to 82% AUC while using the restricted feature selection entering the signal separation procedure.

  5. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

    Work performed during the reporting period is summarized. The construction of robustly good trellis codes for use with sequential decoding was developed; these codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep space applications is investigated. A formula for computing the free distance of rate 1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.

  6. Blind image fusion for hyperspectral imaging with the directional total variation

    NASA Astrophysics Data System (ADS)

    Bungert, Leon; Coomes, David A.; Ehrhardt, Matthias J.; Rasch, Jennifer; Reisenhofer, Rafael; Schönlieb, Carola-Bibiane

    2018-04-01

    Hyperspectral imaging is a cutting-edge type of remote sensing used for mapping vegetation properties, rock minerals and other materials. A major drawback of hyperspectral imaging devices is their intrinsic low spatial resolution. In this paper, we propose a method for increasing the spatial resolution of a hyperspectral image by fusing it with an image of higher spatial resolution that was obtained with a different imaging modality. This is accomplished by solving a variational problem in which the regularization functional is the directional total variation. To accommodate for possible mis-registrations between the two images, we consider a non-convex blind super-resolution problem where both a fused image and the corresponding convolution kernel are estimated. Using this approach, our model can realign the given images if needed. Our experimental results indicate that the non-convexity is negligible in practice and that reliable solutions can be computed using a variety of different optimization algorithms. Numerical results on real remote sensing data from plant sciences and urban monitoring show the potential of the proposed method and suggests that it is robust with respect to the regularization parameters, mis-registration and the shape of the kernel.

  7. GPU-accelerated non-uniform fast Fourier transform-based compressive sensing spectral domain optical coherence tomography.

    PubMed

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-06-16

    We implemented the graphics processing unit (GPU) accelerated compressive sensing (CS) non-uniform in k-space spectral domain optical coherence tomography (SD OCT). The Kaiser-Bessel (KB) function and the Gaussian function are used independently as the convolution kernel in the gridding-based non-uniform fast Fourier transform (NUFFT) algorithm with different oversampling ratios and kernel widths. Our implementation is compared with the GPU-accelerated modified non-uniform discrete Fourier transform (MNUDFT) matrix-based CS SD OCT and the GPU-accelerated fast Fourier transform (FFT)-based CS SD OCT. It was found that our implementation has comparable performance to the GPU-accelerated MNUDFT-based CS SD OCT in terms of image quality while providing a more than 5-times speed enhancement. When compared to the GPU-accelerated FFT-based CS SD OCT, it shows smaller background noise and fewer side lobes while eliminating the need for the cumbersome k-space grid filling and the k-linear calibration procedure. Finally, we demonstrated that by using a conventional desktop computer architecture having three GPUs, real-time B-mode imaging can be obtained in excess of 30 fps for the GPU-accelerated NUFFT-based CS SD OCT with frame size 2048 (axial) × 1000 (lateral).

  8. Influence of Initial Correlations on Evolution of a Subsystem in a Heat Bath and Polaron Mobility

    NASA Astrophysics Data System (ADS)

    Los, Victor F.

    2017-08-01

    A regular approach to accounting for initial correlations, which allows one to go beyond the unrealistic random-phase (initial product state) approximation in deriving the evolution equations, is suggested. Exact homogeneous (time-convolution and time-convolutionless) equations for a relevant part of the two-time equilibrium correlation function of the dynamic variables of a subsystem interacting with a boson field (heat bath) are obtained. No conventional approximation such as the RPA or Bogoliubov's principle of weakening of initial correlations is used. The obtained equations account for the initial correlations in the kernel governing their evolution. The solution to these equations is found to second order of the kernel expansion in the electron-phonon interaction, which demonstrates that the initial correlations generally influence the correlation function's evolution in time. It is explicitly shown that this influence vanishes on a large timescale (as t → ∞) and that the evolution then enters an irreversible kinetic regime. The developed approach is applied to the Fröhlich polaron, and the low-temperature polaron mobility (which has been under long debate) is found with a correction due to initial correlations.

  9. Comment on ‘Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study’

    NASA Astrophysics Data System (ADS)

    Valdes, Gilmer; Interian, Yannet

    2018-03-01

    The application of machine learning (ML) presents tremendous opportunities for the field of oncology, thus we read ‘Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study’ with great interest. In this article, the authors used state of the art techniques: a pre-trained convolutional neural network (VGG-16 CNN), transfer learning, data augmentation, drop out and early stopping, all of which are directly responsible for the success and the excitement that these algorithms have created in other fields. We believe that the use of these techniques can offer tremendous opportunities in the field of Medical Physics and as such we would like to praise the authors for their pioneering application to the field of Radiation Oncology. That being said, given that the field of Medical Physics has unique characteristics that differentiate us from those fields where these techniques have been applied successfully, we would like to raise some points for future discussion and follow up studies that could help the community understand the limitations and nuances of deep learning techniques.

  10. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture

    PubMed Central

    Meszlényi, Regina J.; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

    Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network-based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility it follows that our proposed CCNN model can be easily adapted to a wide range of connectome-based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network. PMID:29089883

  11. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture.

    PubMed

    Meszlényi, Regina J; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

    Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network-based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility it follows that our proposed CCNN model can be easily adapted to a wide range of connectome-based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network.

  12. Complex Source and Radiation Behaviors of Small Elements of Linear and Matrix Flexible Ultrasonic Phased-Array Transducers

    NASA Astrophysics Data System (ADS)

    Amory, V.; Lhémery, A.

    2008-02-01

    Inspection of irregular components is problematic: maladjustment of transducer shoes to surfaces causes aberrations. Flexible phased arrays (FPAs) designed at CEA LIST to maximize contact are driven by adapted delay laws to compensate for irregularities. Optimizing FPAs requires simulation tools. The behavior of one element, computed by FEM, is observed at the surface and its radiation experimentally validated. The computational effort for a single element rules out simulating a full FPA by FEM. A model is therefore proposed in which each element behaves as a nonuniform source of stresses. Exact and asymptotic formulas for the Lamb problem are used as convolution kernels for longitudinal, transverse and head waves; the latter is of primary importance for angle-T-beam inspections.

  13. On the convergence of local approximations to pseudodifferential operators with applications

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1994-01-01

    We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L¹ error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite-time convergence analysis of the Engquist-Majda Padé approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long-time behavior. These are based on Laguerre and exponential series.

  14. Bivariate discrete beta Kernel graduation of mortality data.

    PubMed

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

    Various parametric/nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, such as kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make the simulations realistic, a bivariate dataset, based on probabilities of dying recorded for US males, is used. The simulations confirm the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.

  15. The VLSI design of an error-trellis syndrome decoder for certain convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.

    1986-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.

  16. Convolutional coding results for the MVM '73 X-band telemetry experiment

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1978-01-01

    Results of simulation of several short-constraint-length convolutional codes using a noisy symbol stream obtained via the turnaround ranging channels of the MVM'73 spacecraft are presented. First operational use of this coding technique is on the Voyager mission. The relative performance of these codes in this environment is as previously predicted from computer-based simulations.

  17. The VLSI design of error-trellis syndrome decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.

    1985-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.

  18. Accelerated Time-Domain Modeling of Electromagnetic Pulse Excitation of Finite-Length Dissipative Conductors over a Ground Plane via Function Fitting and Recursive Convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campione, Salvatore; Warne, Larry K.; Sainath, Kamalesh

    In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked time-wise, either for late-time predictions (about the 100 ns-10000 ns range) or for predictions concerning EMP excitation of long TLs (on the order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. To address this and facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential function basis-fit to the ground/Earth's impedance function, and then incorporate this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve the necessary time speed-ups across a variety of TL/Earth geometry/material scenarios.
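
    The recursion that underlies this acceleration can be sketched as follows: once the impedance kernel is approximated as a sum of complex exponentials h(t) ≈ Σ_k c_k exp(p_k t), the discretized convolution can be updated from one state variable per term instead of the full voltage history. The coefficients, time step, and excitation below are placeholders, not fitted Earth-impedance values.

        # Sketch of recursive convolution with a complex-exponential kernel fit.
        import numpy as np

        def recursive_convolution(x, c, p, dt):
            """Approximate y[n] = dt * sum_m h(m*dt) x[n-m] for h(t) = sum_k c_k exp(p_k t),
            keeping one state variable per exponential term (O(1) work per step)."""
            c = np.asarray(c, dtype=complex)
            decay = np.exp(np.asarray(p, dtype=complex) * dt)   # per-term update factor
            states = np.zeros_like(c)
            y = np.empty(len(x), dtype=complex)
            for n, xn in enumerate(x):
                # Each state carries its term's entire convolution history forward.
                states = decay * states + c * xn * dt
                y[n] = states.sum()
            return y.real

        # Placeholder two-term kernel and a step-like excitation (illustrative values).
        c = [1.0, 0.5]
        p = [-1.0e7, -5.0e6 + 2.0e7j]   # decaying / oscillatory terms
        dt = 1.0e-9                      # 1 ns step
        x = np.ones(2000)                # unit-step drive
        y = recursive_convolution(x, c, p, dt)
        print(y[:3], y[-1])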

  19. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Speeding Up the Bilateral Filter: A Joint Acceleration Way.

    PubMed

    Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng

    2016-06-01

    Computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is irrelevant to the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Although each of these techniques suffers from accuracy or efficiency problems, previous algorithm designers typically adopted only one of them to assemble fast implementations, owing to the difficulty of combining them. Hence, no joint exploitation of these techniques has been proposed to construct a new cutting-edge implementation that solves these problems. Jointly employing five techniques, kernel truncation and best N-term approximation as well as the previous 2D box filtering, dimension promotion, and shiftability property, we propose a unified framework to transform BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and, therefore, can draw upon their respective strong points to overcome deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing efficiency at running time.
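
    For reference, a brute-force grayscale bilateral filter is sketched below; its per-pixel cost grows with the spatial kernel radius, which is precisely the dependence the acceleration techniques above are designed to remove. Parameters are illustrative, and this is not the paper's fast algorithm.

        # Brute-force bilateral filter reference (grayscale); cost scales with radius.
        import numpy as np

        def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
            h, w = img.shape
            out = np.zeros_like(img, dtype=float)
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # fixed spatial kernel
            pad = np.pad(img, radius, mode='edge')
            for i in range(h):
                for j in range(w):
                    patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    # The range kernel depends on the centre pixel, so it cannot be precomputed.
                    rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
                    weights = spatial * rng
                    out[i, j] = (weights * patch).sum() / weights.sum()
            return out

        noisy = np.random.rand(32, 32)
        smoothed = bilateral_filter(noisy)
        print(smoothed.shape)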

  1. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  2. Efficient protein structure search using indexing methods

    PubMed Central

    2013-01-01

    Understanding functions of proteins is one of the most important challenges in many studies of biological processes. The function of a protein can be predicted by analyzing the functions of structurally similar proteins, thus finding structurally similar proteins accurately and efficiently from a large set of proteins is crucial. A protein structure can be represented as a vector by the 3D-Zernike Descriptor (3DZD), which compactly represents the surface shape of the protein tertiary structure. This simplified representation accelerates the searching process. However, computing the similarity of two protein structures is still computationally expensive, thus it is hard to efficiently process many simultaneous requests for structurally similar protein search. This paper proposes indexing techniques which substantially reduce the search time to find structurally similar proteins. In particular, we first exploit two indexing techniques, iDistance and iKernel, on the 3DZDs. After that, we extend the techniques to further improve the search speed for protein structures. The extended indexing techniques build and utilize a reduced index constructed from the first few attributes of the 3DZDs of protein structures. To retrieve the top-k similar structures, the top-10 × k similar structures are first found using the reduced index, and the top-k structures are selected among them. We also modify the indexing techniques to support θ-based nearest neighbor search, which returns data points within distance θ of the query point. The results show that both iDistance and iKernel significantly enhance the search speed. In top-k nearest neighbor search, the search time is reduced by 69.6%, 77%, 77.4% and 87.9%, respectively, using iDistance, iKernel, the extended iDistance, and the extended iKernel. In θ-based nearest neighbor search, the search time is reduced by 80%, 81%, 95.6% and 95.6% using iDistance, iKernel, the extended iDistance, and the extended iKernel, respectively. PMID:23691543

  3. Efficient protein structure search using indexing methods.

    PubMed

    Kim, Sungchul; Sael, Lee; Yu, Hwanjo

    2013-01-01

    Understanding functions of proteins is one of the most important challenges in many studies of biological processes. The function of a protein can be predicted by analyzing the functions of structurally similar proteins, thus finding structurally similar proteins accurately and efficiently from a large set of proteins is crucial. A protein structure can be represented as a vector by the 3D-Zernike Descriptor (3DZD), which compactly represents the surface shape of the protein tertiary structure. This simplified representation accelerates the searching process. However, computing the similarity of two protein structures is still computationally expensive, thus it is hard to efficiently process many simultaneous requests for structurally similar protein search. This paper proposes indexing techniques which substantially reduce the search time to find structurally similar proteins. In particular, we first exploit two indexing techniques, iDistance and iKernel, on the 3DZDs. After that, we extend the techniques to further improve the search speed for protein structures. The extended indexing techniques build and utilize a reduced index constructed from the first few attributes of the 3DZDs of protein structures. To retrieve the top-k similar structures, the top-10 × k similar structures are first found using the reduced index, and the top-k structures are selected among them. We also modify the indexing techniques to support θ-based nearest neighbor search, which returns data points within distance θ of the query point. The results show that both iDistance and iKernel significantly enhance the search speed. In top-k nearest neighbor search, the search time is reduced by 69.6%, 77%, 77.4% and 87.9%, respectively, using iDistance, iKernel, the extended iDistance, and the extended iKernel. In θ-based nearest neighbor search, the search time is reduced by 80%, 81%, 95.6% and 95.6% using iDistance, iKernel, the extended iDistance, and the extended iKernel, respectively.
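
    A hedged sketch of the two-stage retrieval strategy described above follows: a reduced index built from the first few 3DZD attributes selects 10 × k candidates, which are then re-ranked with the full descriptor. Plain brute-force scans stand in for the iDistance/iKernel structures, and the descriptors are random placeholders.

        # Two-stage search: cheap reduced-descriptor filtering, full-descriptor re-ranking.
        import numpy as np

        rng = np.random.default_rng(0)
        n_proteins, dim, reduced_dim, k = 5000, 121, 12, 10
        db = rng.random((n_proteins, dim))        # placeholder 3DZD vectors of the database
        query = rng.random(dim)

        # Stage 1: distances on the reduced descriptor pick 10*k candidates.
        d_reduced = np.linalg.norm(db[:, :reduced_dim] - query[:reduced_dim], axis=1)
        candidates = np.argsort(d_reduced)[:10 * k]

        # Stage 2: full-descriptor distances re-rank only the candidate set.
        d_full = np.linalg.norm(db[candidates] - query, axis=1)
        top_k = candidates[np.argsort(d_full)[:k]]
        print(top_k)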

  4. Automated skin lesion segmentation with kernel density estimation

    NASA Astrophysics Data System (ADS)

    Pardo, A.; Real, E.; Fernandez-Barreras, G.; Madruga, F. J.; López-Higuera, J. M.; Conde, O. M.

    2017-07-01

    Skin lesion segmentation is a complex step in dermoscopy-based pathological diagnosis. Kernel density estimation is proposed as a segmentation technique based on the statistical distribution of color intensities in the lesion and non-lesion regions.
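
    A minimal sketch of kernel-density-based segmentation in this spirit, assuming SciPy: one colour-intensity density is fitted for lesion pixels and one for non-lesion pixels, and each pixel is labelled by the larger density. The training samples and image below are synthetic placeholders, not dermoscopy data.

        # KDE-based pixel labelling with two fitted colour densities (synthetic data).
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(1)
        # RGB samples (3 x N) drawn from labelled lesion / skin regions (synthetic).
        lesion_samples = rng.normal(loc=[[0.3], [0.2], [0.2]], scale=0.05, size=(3, 500))
        skin_samples = rng.normal(loc=[[0.8], [0.6], [0.5]], scale=0.05, size=(3, 500))

        kde_lesion = gaussian_kde(lesion_samples)
        kde_skin = gaussian_kde(skin_samples)

        # Classify every pixel of a (synthetic) image by comparing the two densities.
        image = rng.random((64, 64, 3))
        pixels = image.reshape(-1, 3).T                  # 3 x (64*64)
        mask = (kde_lesion(pixels) > kde_skin(pixels)).reshape(64, 64)
        print(mask.sum(), "pixels labelled as lesion")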

  5. Graph wavelet alignment kernels for drug virtual screening.

    PubMed

    Smalter, Aaron; Huan, Jun; Lushington, Gerald

    2009-06-01

    In this paper, we introduce a novel statistical modeling technique for target property prediction, with applications to virtual screening and drug design. In our method, we use graphs to model chemical structures and apply a wavelet analysis of graphs to summarize features capturing graph local topology. We design a novel graph kernel function that utilizes the topology features to build predictive models for chemicals via a Support Vector Machine classifier. We call the new graph kernel a graph wavelet-alignment kernel. We have evaluated the efficacy of the wavelet-alignment kernel using a set of chemical structure-activity prediction benchmarks. Our results indicate that the use of the kernel function yields performance profiles comparable to, and sometimes exceeding, those of existing state-of-the-art chemical classification approaches. In addition, our results also show that the use of wavelet functions significantly decreases the computational cost of graph kernel computation, with a more than tenfold speedup.

  6. A low-power, high-throughput maximum-likelihood convolutional decoder chip for NASA's 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Mccallister, R. D.; Crawford, J. J.

    1981-01-01

    It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.

  7. Understanding of the Geomorphological Elements in Discrimination of Typical Mediterranean Land Cover Types

    NASA Astrophysics Data System (ADS)

    Elhag, Mohamed; Boteva, Silvena

    2017-12-01

    Quantification of geomorphometric features is the keystone concern of the current study. The quantification was based on a statistical approach, in terms of a multivariate analysis of local topographic features. The implemented algorithm utilizes the Digital Elevation Model (DEM) to categorize and extract the geomorphometric features embedded in the topographic dataset. The morphological operations were applied to the central pixel of a 3×3 predefined convolution kernel to evaluate the surrounding pixels under the eight-direction pour point model (D8) of azimuth viewpoints. An unsupervised classification algorithm, the Iterative Self-Organizing Data Analysis Technique (ISODATA), was applied to the ASTER GDEM within the boundary of the designated study area to distinguish 10 morphometric classes. The morphometric classes showed variation in their spatial distribution across the study area. The adopted methodology successfully captures the spatial distribution of the geomorphometric features under investigation. The results allow the delineated geomorphometric elements to be superimposed over a given remote sensing image for further analysis. A robust relationship between different Land Cover types and the geomorphological elements was established in the context of the study area, and the dominance and relative association of different Land Cover types with their corresponding geomorphological elements were demonstrated.
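
    The per-pixel operation can be illustrated as follows: within a 3×3 window, the D8 pour point model assigns the centre cell the direction of its steepest downslope neighbour. This is a generic sketch of the D8 rule, not the study's implementation; the DEM, cell size, and direction codes are illustrative.

        # D8 flow direction on a DEM: steepest descent among the eight 3x3 neighbours.
        import numpy as np

        # Offsets and conventional D8 codes for the eight neighbours (E, SE, S, ...).
        OFFSETS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]
        D8_CODES = [1, 2, 4, 8, 16, 32, 64, 128]

        def d8_flow_direction(dem, cell=30.0):
            rows, cols = dem.shape
            direction = np.zeros_like(dem, dtype=int)
            for i in range(1, rows - 1):
                for j in range(1, cols - 1):
                    best_slope, best_code = 0.0, 0        # 0 = pit / flat cell
                    for (di, dj), code in zip(OFFSETS, D8_CODES):
                        dist = cell * np.hypot(di, dj)
                        slope = (dem[i, j] - dem[i + di, j + dj]) / dist
                        if slope > best_slope:
                            best_slope, best_code = slope, code
                    direction[i, j] = best_code
            return direction

        dem = np.add.outer(np.arange(6), np.arange(6)).astype(float)  # tilted plane
        print(d8_flow_direction(dem))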

  8. Vis-NIR spectrometric determination of Brix and sucrose in sugar production samples using kernel partial least squares with interval selection based on the successive projections algorithm.

    PubMed

    de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino

    2018-05-01

    This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.
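
    A simplified stand-in for the Kernel-PLS component is sketched below, assuming scikit-learn: an RBF kernel matrix is computed from placeholder spectral variables and a PLS regression is fitted on it. The iSPA interval-selection step and the paper's exact kernel-PLS formulation are not reproduced here.

        # RBF kernel matrix + PLS regression as a rough stand-in for Kernel-PLS.
        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(2)
        X_train = rng.random((80, 200))       # placeholder transflectance spectra
        y_train = rng.random(80)              # placeholder Brix values
        X_test = rng.random((20, 200))

        K_train = rbf_kernel(X_train, X_train, gamma=1e-2)
        K_test = rbf_kernel(X_test, X_train, gamma=1e-2)

        pls = PLSRegression(n_components=5)
        pls.fit(K_train, y_train)
        print(pls.predict(K_test)[:3].ravel())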

  9. Automatic Semantic Segmentation of Brain Gliomas from MRI Images Using a Deep Cascaded Neural Network.

    PubMed

    Cui, Shaoguo; Mao, Lei; Jiang, Jingfeng; Liu, Chang; Xiong, Shuyu

    2018-01-01

    Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffused and poorly contrasted. Consequently, the segmentation of brain tumor and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human intervention remains a challenging task. In this paper, we present a novel fully automatic segmentation method from MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) in conjunction with the transfer learning technique, was used to first process MRI data. The goal of the first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. In particular, the ITCN exploited a convolutional neural network (CNN) with a deeper architecture and smaller kernels. The proposed approach was validated on multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. The Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method obtained promising segmentation results with a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice.

  10. Resummed memory kernels in generalized system-bath master equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavros, Michael G.; Van Voorhis, Troy, E-mail: tvan@mit.edu

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques for perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
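
    The resummation idea can be illustrated on a toy series: from the first few coefficients of a perturbation expansion, a [1/1] Padé approximant and a simple exponential resummation are constructed and compared with the function the series came from. The series below (for 1/(1+x)) is an illustrative toy, not a spin-boson memory kernel.

        # [1/1] Pade approximant vs. exponential resummation of a short toy series.
        import numpy as np

        c0, c1, c2 = 1.0, -1.0, 1.0          # series of 1/(1+x) = 1 - x + x^2 - ...

        def pade_1_1(x):
            # [1/1] approximant matching c0 + c1 x + c2 x^2 through second order.
            b1 = -c2 / c1
            a0, a1 = c0, c1 + c0 * b1
            return (a0 + a1 * x) / (1.0 + b1 * x)

        def exp_resum(x):
            # Exponential resummation c0 * exp((c1/c0) x): no pole, so no divergence.
            return c0 * np.exp((c1 / c0) * x)

        x = np.linspace(0.0, 2.0, 5)
        exact = 1.0 / (1.0 + x)
        print(np.column_stack([x, exact, pade_1_1(x), exp_resum(x)]))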

  11. Dimensionality-varied deep convolutional neural network for spectral-spatial classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Qu, Haicheng; Liang, Xuejian; Liang, Shichao; Liu, Wanjun

    2018-01-01

    Many methods of hyperspectral image classification have been proposed recently, and the convolutional neural network (CNN) achieves outstanding performance. However, spectral-spatial classification with a CNN requires an excessively large model, tremendous computation, and a complex network, and CNNs are generally unable to use the noisy bands caused by water-vapor absorption. A dimensionality-varied CNN (DV-CNN) is proposed to address these issues. There are four stages in DV-CNN and the dimensionalities of the spectral-spatial feature maps vary with the stages. DV-CNN can reduce the computation and simplify the structure of the network. All feature maps are processed by more kernels in higher stages to extract more precise features. DV-CNN also improves the classification accuracy and enhances the robustness to water-vapor absorption bands. The experiments are performed on the Indian Pines and Pavia University scene data sets. The classification performance of DV-CNN is compared with state-of-the-art methods, which include variations of CNN, traditional, and other deep learning methods. A performance analysis of DV-CNN itself is also carried out. The experimental results demonstrate that DV-CNN outperforms state-of-the-art methods for spectral-spatial classification and is also robust to water-vapor absorption bands. Moreover, reasonable parameter selection is effective in improving classification accuracy.

  12. Multicasting mesh AER: a scalable assembly approach for reconfigurable neuromorphic structured AER systems. Application to ConvNets.

    PubMed

    Zamarreno-Ramos, C; Linares-Barranco, A; Serrano-Gotarredona, T; Linares-Barranco, B

    2013-02-01

    This paper presents a modular, scalable approach to assembling hierarchically structured neuromorphic Address Event Representation (AER) systems. The method consists of arranging modules in a 2D mesh, each communicating bidirectionally with all four neighbors. Address events include a module label. Each module includes an AER router which decides how to route address events. Two routing approaches have been proposed, analyzed and tested, using either destination or source module labels. Our analyses reveal that depending on traffic conditions and network topologies either one or the other approach may result in better performance. Experimental results are given after testing the approach using high-end Virtex-6 FPGAs. The approach is proposed for both single and multiple FPGAs, in which case a special bidirectional parallel-serial AER link with flow control is exploited, using the FPGA Rocket-I/O interfaces. Extensive test results are provided exploiting convolution modules of 64 × 64 pixels with kernels with sizes up to 11 × 11, which process real sensory data from a Dynamic Vision Sensor (DVS) retina. One single Virtex-6 FPGA can hold up to 64 of these convolution modules, which is equivalent to a neural network with 262 × 10³ neurons and almost 32 million synapses.

  13. Dusty Pair Plasma—Wave Propagation and Diffusive Transition of Oscillations

    NASA Astrophysics Data System (ADS)

    Atamaniuk, Barbara; Turski, Andrzej J.

    2011-11-01

    The crucial point of the paper is the relation between the equilibrium distributions of plasma species and the type of propagation or diffusive transition of the plasma response to a disturbance. The paper contains a unified treatment of disturbance propagation (transport) in linearized Vlasov electron-positron and fullerene pair plasmas containing charged dust impurities, based on space-time convolution integral equations. Electron-positron-dust/ion (e-p-d/i) plasmas are rather widespread in nature. Space-time responses of multi-component linearized Vlasov plasmas are derived on the basis of multiple integral equations. An initial-value problem for the Vlasov-Poisson/Ampère equations is reduced to a single multiple integral equation, and the solution is expressed in terms of a forcing function and its space-time convolution with the resolvent kernel. The forcing function is responsible for the initial disturbance and the resolvent is responsible for the equilibrium velocity distributions of the plasma species. By use of the resolvent equations, time-reversibility, space-reflexivity and other symmetries are revealed. The symmetries carry over to physical properties of Vlasov pair plasmas, e.g., conservation laws. By properly choosing equilibrium distributions for dusty pair plasmas, we can reduce the resolvent equation to (i) undamped dispersive wave equations, and (ii) diffusive transport equations of oscillations.

  14. Characterizing cartilage microarchitecture on phase-contrast x-ray computed tomography using deep learning with convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Deng, Botao; Abidin, Anas Z.; D'Souza, Adora M.; Nagarajan, Mahesh B.; Coan, Paola; Wismüller, Axel

    2017-03-01

    The effectiveness of phase contrast X-ray computed tomography (PCI-CT) in visualizing human patellar cartilage matrix has been demonstrated due to its ability to capture soft tissue contrast on a micrometer resolution scale. Recent studies have shown that off-the-shelf Convolutional Neural Network (CNN) features learned from a nonmedical data set can be used for medical image classification. In this paper, we investigate the ability of features extracted from two different CNNs for characterizing chondrocyte patterns in the cartilage matrix. We obtained features from 842 regions of interest annotated on PCI-CT images of human patellar cartilage using CaffeNet and Inception-v3 Network, which were then used in a machine learning task involving support vector machines with radial basis function kernel to classify the ROIs as healthy or osteoarthritic. Classification performance was evaluated using the area (AUC) under the Receiver Operating Characteristic (ROC) curve. The best classification performance was observed with features from Inception-v3 network (AUC = 0.95), which outperforms features extracted from CaffeNet (AUC = 0.91). These results suggest that such characterization of chondrocyte patterns using features from internal layers of CNNs can be used to distinguish between healthy and osteoarthritic tissue with high accuracy.
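
    The classification stage described above can be sketched as follows, assuming scikit-learn: pre-extracted CNN feature vectors (random placeholders here, standing in for Inception-v3 activations) are fed to an RBF-kernel support vector machine and evaluated with the area under the ROC curve.

        # RBF-kernel SVM on pre-extracted CNN features, scored by ROC AUC.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)
        features = rng.normal(size=(842, 2048))          # placeholder CNN features
        labels = rng.integers(0, 2, size=842)            # healthy (0) vs osteoarthritic (1)

        X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3,
                                                  random_state=0, stratify=labels)
        clf = SVC(kernel='rbf', C=1.0, gamma='scale', probability=True)
        clf.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"AUC = {auc:.2f}")   # near 0.5 here because the features are random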

  15. Evaluation of the influence of double and triple Gaussian proton kernel models on accuracy of dose calculations for spot scanning technique.

    PubMed

    Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki

    2016-03-01

    The main purpose in this study was to present the results of beam modeling and how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for the spot scanning technique. The accuracy of calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by the TPS for the spot scanning technique. The dose distribution was calculated by convolving the in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for the spot scanning technique because the dose distribution was formed by cumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. The authors investigated the difference between double and triple Gaussian kernel models. The authors found that the difference between the two studied kernel models appeared at mid-depths and that the prediction accuracy of the double Gaussian model deteriorated at the low-dose bump that appeared at mid-depths. When the authors employed the double Gaussian kernel model, the accuracy of calculations for the absolute dose at the center of the SOBP varied with irradiation conditions and the maximum difference was 3.4%. In contrast, the results obtained from calculations with the triple Gaussian kernel model indicated good agreement with the measurements within ±1.1%, regardless of the irradiation conditions. The difference between the results obtained with the two types of studied kernel models was distinct in the high energy region. The accuracy of calculations with the double Gaussian kernel model varied with the field size and SOBP width because the accuracy of prediction with the double Gaussian model was insufficient at the low-dose bump. The evaluation was only qualitative under limited volumetric irradiation conditions. Further accumulation of measured data would be needed to quantitatively comprehend what influence the double and triple Gaussian kernel models had on the accuracy of dose calculations.
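
    The lateral kernel models being compared can be written generically as weighted sums of two or three radial Gaussians; a hedged sketch of these forms is given below, with the weights w_i and widths σ_i(d) understood as depth-dependent fit parameters (the notation is illustrative, not the authors' exact parameterization).

        % Generic double and triple Gaussian lateral kernels; w_i and sigma_i(d) are
        % depth-dependent fit parameters (illustrative notation).
        \[
          K_{\mathrm{double}}(r,d) =
            \frac{1 - w_2}{2\pi\sigma_1^2(d)}\, e^{-r^2 / 2\sigma_1^2(d)}
            + \frac{w_2}{2\pi\sigma_2^2(d)}\, e^{-r^2 / 2\sigma_2^2(d)},
        \]
        \[
          K_{\mathrm{triple}}(r,d) =
            \frac{1 - w_2 - w_3}{2\pi\sigma_1^2(d)}\, e^{-r^2 / 2\sigma_1^2(d)}
            + \frac{w_2}{2\pi\sigma_2^2(d)}\, e^{-r^2 / 2\sigma_2^2(d)}
            + \frac{w_3}{2\pi\sigma_3^2(d)}\, e^{-r^2 / 2\sigma_3^2(d)}.
        \]

    In such a parameterization, the third, broad component is what lets the triple Gaussian form follow the low-dose envelope at mid-depths that the double Gaussian form misses.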

  16. Simultaneous Tumor Segmentation, Image Restoration, and Blur Kernel Estimation in PET Using Multiple Regularizations

    PubMed Central

    Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan

    2016-01-01

    Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as an anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin’s lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. Particularly, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transversal direction and 7% in the axial direction. PMID:28603407

  17. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  18. Effect of ultra-low doses, ASIR and MBIR on density and noise levels of MDCT images of dental implant sites.

    PubMed

    Widmann, Gerlig; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Al-Ekrish, Asma'a A

    2017-05-01

    Differences in noise and density values in MDCT images obtained using ultra-low doses with FBP, ASIR, and MBIR may possibly affect implant site density analysis. The aim of this study was to compare density and noise measurements recorded from dental implant sites using ultra-low doses combined with FBP, ASIR, and MBIR. Cadavers were scanned using a standard protocol and four low-dose protocols. Scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Density (mean Hounsfield units [HUs]) of alveolar bone and noise levels (mean standard deviation of HUs) were recorded from all datasets and measurements were compared by paired t tests and two-way ANOVA with repeated measures. Significant differences in density and noise were found between the reference dose/FBP protocol and almost all test combinations. Maximum mean differences in HU were 178.35 (bone kernel) and 273.74 (standard kernel), and in noise, were 243.73 (bone kernel) and 153.88 (standard kernel). Decreasing radiation dose increased density and noise regardless of reconstruction technique and kernel. The effect of reconstruction technique on density and noise depends on the reconstruction kernel used. • Ultra-low-dose MDCT protocols allowed more than 90 % reductions in dose. • Decreasing the dose generally increased density and noise. • Effect of IRT on density and noise varies with reconstruction kernel. • Accuracy of low-dose protocols for interpretation of bony anatomy not known. • Effect of low doses on accuracy of computer-aided design models unknown.

  19. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices

    PubMed Central

    Gokmen, Tayfun; Onen, Murat; Haensch, Wilfried

    2017-01-01

    In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures, and therefore enable applicability of the RPU approach to a large class of neural network architectures. PMID:29066942

  20. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices.

    PubMed

    Gokmen, Tayfun; Onen, Murat; Haensch, Wilfried

    2017-01-01

    In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures, and therefore enable applicability of the RPU approach to a large class of neural network architectures.

  1. Deep learning with convolutional neural network in radiology.

    PubMed

    Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu

    2018-04-01

    Deep learning with a convolutional neural network (CNN) is gaining attention recently for its high performance in image recognition. Images themselves can be utilized in the learning process with this technique, and feature extraction in advance of the learning process is not required: important features can be learned automatically. Thanks to developments in hardware and software, in addition to techniques regarding deep learning, application of this technique to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, is beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs along the actual workflow (collecting data, implementing CNNs, and the training and testing phases). Pitfalls regarding this technique and how to manage them are also illustrated. We also describe some advanced topics of deep learning, results of recent clinical studies, and future directions for the clinical application of deep learning techniques.

  2. Compound analysis via graph kernels incorporating chirality.

    PubMed

    Brown, J B; Urata, Takashi; Tamura, Takeyuki; Arai, Midori A; Kawabata, Takeo; Akutsu, Tatsuya

    2010-12-01

    High accuracy is paramount when predicting biochemical characteristics using Quantitative Structural-Property Relationships (QSPRs). Although existing graph-theoretic kernel methods combined with machine learning techniques are efficient for QSPR model construction, they cannot distinguish topologically identical chiral compounds which often exhibit different biological characteristics. In this paper, we propose a new method that extends the recently developed tree pattern graph kernel to accommodate stereoisomers. We show that Support Vector Regression (SVR) with a chiral graph kernel is useful for target property prediction by demonstrating its application to a set of human vitamin D receptor ligands currently under consideration for their potential anti-cancer effects.

  3. On the Computation of Optimal Designs for Certain Time Series Models with Applications to Optimal Quantile Selection for Location or Scale Parameter Estimation.

    DTIC Science & Technology

    1981-07-01

    process is observed over all of (0,1], the reproducing kernel Hilbert space (RKHS) techniques developed by Parzen (1961a, 1961b) may be used to construct ... covariance kernel, R, for the process (1.1) is the reproducing kernel for a reproducing kernel Hilbert space (RKHS) which will be denoted as H(R) (cf. ... 2.6), it is known that (cf. Eubank, Smith and Smith (1981a, 1981b)), i) H(R) is a Hilbert function space consisting of functions which satisfy, for f ∈ H

  4. Kernel-Phase Interferometry for Super-Resolution Detection of Faint Companions

    NASA Astrophysics Data System (ADS)

    Factor, Samuel M.; Kraus, Adam L.

    2017-01-01

    Direct detection of close-in companions (exoplanets or binary systems) is notoriously difficult. While coronagraphs and point spread function (PSF) subtraction can be used to reduce contrast and dig out signals of companions under the PSF, there are still significant limitations in separation and contrast. Non-redundant aperture masking (NRM) interferometry can be used to detect companions well inside the PSF of a diffraction-limited image, though the mask discards ~95% of the light gathered by the telescope and thus the technique is severely flux limited. Kernel-phase analysis applies interferometric techniques similar to NRM to a diffraction-limited image while utilizing the full aperture. Instead of non-redundant closure-phases, kernel-phases are constructed from a grid of points on the full aperture, simulating a redundant interferometer. I have developed my own faint companion detection pipeline which utilizes a Bayesian analysis of kernel-phases. I have used this pipeline to search for new companions in archival images from HST/NICMOS in order to constrain planet and binary formation models at separations inaccessible to previous techniques. Using this method, it is possible to detect a companion well within the classical λ/D Rayleigh diffraction limit using a fraction of the telescope time required by NRM. This technique can easily be applied to archival data as no mask is needed, and will thus make the detection of close-in companions cheap and simple as no additional observations are needed. Since the James Webb Space Telescope (JWST) will be able to perform NRM observations, further development and characterization of kernel-phase analysis will allow efficient use of highly competitive JWST telescope time.

  5. Kernel-Phase Interferometry for Super-Resolution Detection of Faint Companions

    NASA Astrophysics Data System (ADS)

    Factor, Samuel

    2016-10-01

    Direct detection of close-in companions (binary systems or exoplanets) is notoriously difficult. While coronagraphs and point spread function (PSF) subtraction can be used to reduce contrast and dig out signals of companions under the PSF, there are still significant limitations in separation and contrast. While non-redundant aperture masking (NRM) interferometry can be used to detect companions well inside the PSF of a diffraction-limited image, the mask discards 95% of the light gathered by the telescope and thus the technique is severely flux limited. Kernel-phase analysis applies interferometric techniques similar to NRM while utilizing the full aperture. Instead of closure-phases, kernel-phases are constructed from a grid of points on the full aperture, simulating a redundant interferometer. I propose to develop my own faint companion detection pipeline which utilizes an MCMC analysis of kernel-phases. I will search for new companions in archival images from NIC1 and ACS/HRC in order to constrain binary and planet formation models at separations inaccessible to previous techniques. Using this method, it is possible to detect a companion well within the classical λ/D Rayleigh diffraction limit using a fraction of the telescope time required by NRM. This technique can easily be applied to archival data as no mask is needed, and will thus make the detection of close-in companions cheap and simple as no additional observations are needed. Since the James Webb Space Telescope (JWST) will be able to perform NRM observations, further development and characterization of kernel-phase analysis will allow efficient use of highly competitive JWST telescope time.
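
    A toy sketch of the kernel-phase construction follows: for a small set of pupil sample points, baseline phases relate to pupil-plane phases through a transfer matrix A, and kernel phases are the linear combinations in the left null space of A, so pupil phase errors cancel to first order. The one-dimensional "pupil" below is a toy model, not a real aperture or either instrument's pupil.

        # Kernel phases as the left null space of the pupil-to-baseline phase matrix.
        import numpy as np
        from scipy.linalg import null_space

        # Five pupil sample points on a line; baselines are all pairwise separations.
        n_pupil = 5
        pairs = [(i, j) for i in range(n_pupil) for j in range(i + 1, n_pupil)]

        # A maps pupil-plane phases to baseline phases: phi_uv = phi_j - phi_i.
        A = np.zeros((len(pairs), n_pupil))
        for row, (i, j) in enumerate(pairs):
            A[row, i], A[row, j] = -1.0, 1.0

        # Kernel-phase operator: rows spanning the left null space of A (K @ A = 0).
        K = null_space(A.T).T
        print("kernel phases:", K.shape[0], "from", len(pairs), "baseline phases")

        # Pupil phase errors vanish in the kernel phases (to numerical precision).
        pupil_errors = np.random.default_rng(4).normal(size=n_pupil)
        baseline_phases = A @ pupil_errors
        print(np.allclose(K @ baseline_phases, 0.0))   # True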

  6. Wavelet-enhanced convolutional neural network: a new idea in a deep learning paradigm.

    PubMed

    Savareh, Behrouz Alizadeh; Emami, Hassan; Hajiabadi, Mohamadreza; Azimi, Seyed Majid; Ghafoori, Mahyar

    2018-05-29

    Manual brain tumor segmentation is a challenging task that requires the use of machine learning techniques. One of the machine learning techniques that has been given much attention is the convolutional neural network (CNN). The performance of the CNN can be enhanced by combining other data analysis tools such as wavelet transform. In this study, one of the famous implementations of CNN, a fully convolutional network (FCN), was used in brain tumor segmentation and its architecture was enhanced by wavelet transform. In this combination, a wavelet transform was used as a complementary and enhancing tool for CNN in brain tumor segmentation. Comparing the performance of basic FCN architecture against the wavelet-enhanced form revealed a remarkable superiority of enhanced architecture in brain tumor segmentation tasks. Using mathematical functions and enhancing tools such as wavelet transform and other mathematical functions can improve the performance of CNN in any image processing task such as segmentation and classification.

  7. Facet Annotation by Extending CNN with a Matching Strategy.

    PubMed

    Wu, Bei; Wei, Bifan; Liu, Jun; Guo, Zhaotong; Zheng, Yuanhao; Chen, Yihe

    2018-06-01

    Most community question answering (CQA) websites manage plenty of question-answer pairs (QAPs) through topic-based organizations, which may not satisfy users' fine-grained search demands. Facets of topics serve as a powerful tool to navigate, refine, and group the QAPs. In this work, we propose FACM, a model to annotate QAPs with facets by extending convolution neural networks (CNNs) with a matching strategy. First, phrase information is incorporated into text representation by CNNs with different kernel sizes. Then, through a matching strategy among QAPs and facet label texts (FaLTs) acquired from Wikipedia, we generate similarity matrices to deal with the facet heterogeneity. Finally, a three-channel CNN is trained for facet label assignment of QAPs. Experiments on three real-world data sets show that FACM outperforms the state-of-the-art methods.

  8. Semi-blind sparse image reconstruction with application to MRFM.

    PubMed

    Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O

    2012-09-01

    We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.
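
    The first step described above can be sketched as follows: small perturbations of a nominal PSF are reduced to a few principal components by an SVD, giving a low-dimensional parameterization of the uncertain convolution kernel. The nominal PSF and its perturbation generator are placeholders, and the Bayesian Metropolis-within-Gibbs sampler itself is not reproduced.

        # Principal components of PSF uncertainty from perturbed kernel samples.
        import numpy as np

        rng = np.random.default_rng(5)
        size, n_samples, n_components = 15, 200, 4

        yy, xx = np.mgrid[-7:8, -7:8]
        nominal_psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
        nominal_psf /= nominal_psf.sum()

        # Generate perturbed PSFs (here: random width jitter) and collect deviations.
        deviations = []
        for _ in range(n_samples):
            sigma = 2.0 + 0.2 * rng.standard_normal()
            psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
            deviations.append((psf / psf.sum() - nominal_psf).ravel())
        deviations = np.asarray(deviations)

        # SVD gives the principal components of the PSF uncertainty.
        _, s, vt = np.linalg.svd(deviations, full_matrices=False)
        components = vt[:n_components].reshape(n_components, size, size)
        print("captured variance:", (s[:n_components]**2).sum() / (s**2).sum())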

  9. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    NASA Astrophysics Data System (ADS)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-03-01

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  10. Modulation and coding for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Yuen, Joseph H.; Simon, Marvin K.; Pollara, Fabrizio; Divsalar, Dariush; Miller, Warner H.; Morakis, James C.; Ryan, Carl R.

    1990-01-01

    Several modulation and coding advances supported by NASA are summarized. To support long-constraint-length convolutional codes, a VLSI maximum-likelihood decoder utilizing parallel processing techniques, which is being developed to decode convolutional codes of constraint length 15 and code rates as low as 1/6, is discussed. A VLSI high-speed 8-bit Reed-Solomon decoder which is being developed for advanced tracking and data relay satellite (ATDRS) applications is discussed. A 300-Mb/s modem with continuous phase modulation (CPM) and coding which is being developed for ATDRS is discussed. Trellis-coded modulation (TCM) techniques are discussed for satellite-based mobile communication applications.

  11. Convolutional neural networks for event-related potential detection: impact of the architecture.

    PubMed

    Cecotti, H

    2017-07-01

    The detection of brain responses at the single-trial level in the electroencephalogram (EEG), such as event-related potentials (ERPs), is a difficult problem that requires different processing steps to extract relevant discriminant features. While most signal processing and classification techniques for the detection of brain responses are based on linear algebra, pattern recognition techniques such as the convolutional neural network (CNN), a type of deep learning, have attracted interest because they can process the signal after only limited pre-processing. In this study, we investigate the performance of CNNs in relation to their architecture and to how they are evaluated: a single system for each subject, or one system for all subjects. In particular, we address the change in performance observed between tailoring a neural network to a single subject and training a neural network on a group of subjects, taking advantage of the larger number of trials available across subjects. The results support the conclusion that a convolutional neural network trained on different subjects can reach an AUC above 0.9 by using an appropriate architecture with spatial filtering and shift-invariant layers.

  12. Developing Higher-Order Materials Knowledge Systems

    NASA Astrophysics Data System (ADS)

    Fast, Anthony Nathan

    2011-12-01

    Advances in computational materials science and novel characterization techniques have allowed scientists to probe deeply into a diverse range of materials phenomena. These activities are producing enormous amounts of information regarding the roles of various hierarchical material features in the overall performance characteristics displayed by the material. Connecting the hierarchical information over disparate domains is at the crux of multiscale modeling. The inherent challenge of performing multiscale simulations is developing scale-bridging relationships to couple material information between well-separated length scales. Much progress has been made in the development of homogenization relationships, which replace heterogeneous material features with effective homogeneous descriptions. These relationships facilitate the flow of information from lower length scales to higher length scales. Meanwhile, most localization relationships that link information from a higher length scale to a lower length scale rely on computationally intensive techniques that are not readily integrated into multiscale simulations. The challenge of executing fully coupled multiscale simulations is compounded by the need to incorporate the evolution of the material structure that may occur under conditions such as material processing. To address these challenges, a novel framework called the Materials Knowledge System (MKS) has been developed. This methodology efficiently extracts, stores, and recalls microstructure-property-processing localization relationships. The approach is built on the statistical continuum theories developed by Kroner, which express the localization of the response field at the microscale using a series of highly complex convolution integrals that have historically been evaluated analytically. The MKS approach dramatically improves the accuracy of these expressions by calibrating the convolution kernels in them against results from previously validated physics-based models. These tools have been validated for elastic strain localization in moderate-contrast dual-phase composites by direct comparison with predictions from a finite element model. The versatility of the approach is further demonstrated by its successful application to capturing structure evolution during spinodal decomposition of a binary alloy. Lastly, some key features for future applications of the MKS approach are developed using the Portevin-Le Chatelier effect. These case studies show that the MKS approach is capable of accurately reproducing the results of physics-based models with a drastic reduction in computational requirements.
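
    The calibration step described above can be illustrated with a minimal sketch: assuming a set of periodic 2D microstructures and matching responses produced by some trusted physics-based solver (both synthetic stand-ins here, not the composite or spinodal cases of the thesis), the influence kernel is fit frequency-by-frequency in Fourier space by least squares and then reused as a fast convolution.

```python
# Minimal MKS-style kernel calibration sketch (assumed synthetic data, not the thesis cases).
import numpy as np
from scipy.ndimage import gaussian_filter

def calibrate_kernel(ms, ps, eps=1e-12):
    """Fit Fourier-space influence coefficients A so that response ~ A * FFT(microstructure)."""
    M = np.fft.fft2(ms, axes=(-2, -1))
    P = np.fft.fft2(ps, axes=(-2, -1))
    # Per-frequency least squares: A(k) = sum_n P_n conj(M_n) / sum_n |M_n|^2
    return (P * np.conj(M)).sum(axis=0) / ((np.abs(M) ** 2).sum(axis=0) + eps)

def predict_response(A, m):
    """Apply the calibrated kernel to a new microstructure by FFT convolution."""
    return np.real(np.fft.ifft2(A * np.fft.fft2(m)))

rng = np.random.default_rng(0)
ms = rng.integers(0, 2, size=(50, 32, 32)).astype(float)          # toy two-phase microstructures
ps = np.stack([gaussian_filter(m, 2.0, mode="wrap") for m in ms])  # stand-in "physics" responses
A = calibrate_kernel(ms, ps)
print(np.abs(predict_response(A, ms[0]) - ps[0]).max())            # should be very small
```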

  13. An iterative kernel based method for fourth order nonlinear equation with nonlinear boundary condition

    NASA Astrophysics Data System (ADS)

    Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid

    2018-06-01

    This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel that satisfies nonlinear boundary conditions, standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method that combines the reproducing kernel Hilbert space method with a shooting-like technique to solve such problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature; in this paper, we present error estimation for the reproducing kernel method for nonlinear boundary value problems, probably for the first time. Some numerical results are given to demonstrate the applicability of the method.

  14. Multiple kernels learning-based biological entity relationship extraction method.

    PubMed

    Dongliang, Xu; Jingchang, Pan; Bailing, Wang

    2017-09-20

    Automatically extracting protein interaction information from the biomedical literature can help to build protein relation networks and to design new drugs. MEDLINE, the most authoritative textual database in the field of biomedicine, contains more than 20 million literature abstracts, and the collection continues to grow exponentially. This rapid expansion of the biomedical literature is difficult to absorb or analyze manually, so efficient and automated search engines based on text mining techniques are necessary to explore it. The P, R, and F values of the tag graph method on the AIMed corpus are 50.82, 69.76, and 58.61%, respectively. The P, R, and F values of the tag graph kernel method on the other four evaluation corpora are 2-5% higher than those of the all-paths graph kernel. The P, R, and F values of the two methods that fuse the feature kernel with the tag graph kernel are 53.43, 71.62, and 61.30% and 55.47, 70.29, and 60.37%, respectively, indicating that the performance of both kernel fusion methods is better than that of a single kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in terms of overall performance. Experiments show that the performance of the multi-kernel method is better than that of the three separate single-kernel methods and the dual-mutually fused kernel methods used here on five corpus sets.
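
    The kernel-fusion idea can be sketched in a few lines: two precomputed Gram matrices (stand-ins for the feature kernel and the tag graph kernel, which are not reproduced here) are combined by a weighted sum and passed to an SVM with a precomputed kernel; the fusion weight is an assumed parameter that would normally be tuned.

```python
# Sketch of weighted multiple-kernel fusion with synthetic stand-in kernels.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X1 = rng.normal(size=(120, 20))          # "feature" view of the sentences
X2 = rng.normal(size=(120, 20))          # "graph/tag" view (stand-in)
y = (X1[:, 0] + X2[:, 0] > 0).astype(int)

K_feat = X1 @ X1.T                                                 # linear kernel on view 1
K_graph = np.exp(-0.1 * ((X2[:, None] - X2[None]) ** 2).sum(-1))   # RBF stand-in on view 2

w = 0.5                                  # assumed fusion weight (would be tuned in practice)
K_fused = w * K_feat + (1 - w) * K_graph

clf = SVC(kernel="precomputed").fit(K_fused, y)
print(clf.score(K_fused, y))             # training accuracy on the toy data
```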

  15. Predicting activity approach based on new atoms similarity kernel function.

    PubMed

    Abu El-Atta, Ahmed H; Moussa, M I; Hassanien, Aboul Ella

    2015-07-01

    Drug design is a high-cost and long-term process. To reduce the time and cost of drug discovery, new techniques are needed. The chemoinformatics field applies informatics and computer science techniques, such as machine learning and graph theory, to discover properties of chemical compounds, such as toxicity or biological activity, by analyzing their molecular structure (molecular graph). To this end, there is an increasing need for algorithms that analyze and classify graph data in order to predict the activity of molecules. Kernel methods provide a powerful framework that combines machine learning with graph-theoretic techniques, and they have led to impressive performance in several chemoinformatics problems such as biological activity prediction. This paper presents a new approach based on kernel functions to solve the activity prediction problem for chemical compounds. First, each atom is encoded according to its neighbors; these codes are then used to establish relationships between atoms, and the relationships between atoms are in turn used to measure similarity between chemical compounds. The proposed approach was compared with many other classification methods and the results show competitive accuracy with these methods. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

    An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and the relationships between the true local field and the estimated local field for REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP, respectively. The REV-SHARP result exhibited the highest correlation between the true local field and the estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In the human experiment, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method that combines variable spherical kernel sizes with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. SVM and SVM Ensembles in Breast Cancer Prediction.

    PubMed

    Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong

    2017-01-01

    Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers.
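
    A minimal sketch of the two ensemble configurations highlighted above is given below, using scikit-learn's Wisconsin breast cancer data as an illustrative stand-in for the datasets studied; the estimator counts and hyperparameters are assumptions, not the paper's settings.

```python
# Sketch of bagged linear-kernel SVMs and boosted RBF-kernel SVMs (illustrative settings only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)   # scale once, to keep the sketch simple

bagged_linear = BaggingClassifier(SVC(kernel="linear", C=1.0), n_estimators=10)
boosted_rbf = AdaBoostClassifier(SVC(kernel="rbf", probability=True), n_estimators=10)

for name, model in [("bagging + linear-kernel SVM", bagged_linear),
                    ("boosting + RBF-kernel SVM", boosted_rbf)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```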

  18. SVM and SVM Ensembles in Breast Cancer Prediction

    PubMed Central

    Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong

    2017-01-01

    Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers. PMID:28060807

  19. The gravitational potential of axially symmetric bodies from a regularized green kernel

    NASA Astrophysics Data System (ADS)

    Trova, A.; Huré, J.-M.; Hersant, F.

    2011-12-01

    The determination of the gravitational potential inside celestial bodies (rotating stars, discs, planets, asteroids) is a common challenge in numerical Astrophysics. Under axial symmetry, the potential is classically found from a two-dimensional integral over the body's meridional cross-section. Because it involves an improper integral, high accuracy is generally difficult to reach. We have discovered that, for homogeneous bodies, the singular Green kernel can be converted into a regular kernel by direct analytical integration. This new kernel, easily managed with standard techniques, opens interesting horizons, not only for numerical calculus but also to generate approximations, in particular for geometrically thin discs and rings.

  20. Fredholm-Volterra Integral Equation with a Generalized Singular Kernel and its Numerical Solutions

    NASA Astrophysics Data System (ADS)

    El-Kalla, I. L.; Al-Bugami, A. M.

    2010-11-01

    In this paper, the existence and uniqueness of the solution of the Fredholm-Volterra integral equation (F-VIE) with a generalized singular kernel are discussed and proved in the space L2(Ω)×C(0,T). The Fredholm integral term (FIT) is considered in position while the Volterra integral term (VIT) is considered in time. Using a numerical technique, we obtain a system of Fredholm integral equations (SFIEs). This system of integral equations can be reduced to a linear algebraic system (LAS) of equations by two different methods: the Toeplitz matrix method and the product Nyström method. Numerical examples are considered when the generalized kernel takes the following forms: Carleman function, logarithmic form, Cauchy kernel, and Hilbert kernel.
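
    The Nyström idea of reducing an integral equation to a linear algebraic system can be sketched for a Fredholm equation of the second kind with a smooth toy kernel and trapezoidal quadrature; the singular kernels treated in the paper require the specialized Toeplitz or product-integration weights, which are not reproduced here.

```python
# Nyström discretization of u(x) = f(x) + lam * int_0^1 K(x,t) u(t) dt with a smooth toy kernel.
import numpy as np

n, lam = 201, 0.5
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
w = np.full(n, h); w[0] = w[-1] = 0.5 * h       # trapezoidal quadrature weights

K = np.exp(-np.abs(x[:, None] - x[None, :]))    # smooth toy kernel K(x, t)
u_true = np.sin(np.pi * x)                      # manufactured solution
f = u_true - lam * (K * w) @ u_true             # right-hand side consistent with u_true

u = np.linalg.solve(np.eye(n) - lam * K * w, f) # (I - lam*K*W) u = f
print(np.abs(u - u_true).max())                 # ~0: same discrete operator builds f and solves
```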

  1. Comparison of spatiotemporal interpolators for 4D image reconstruction from 2D transesophageal ultrasound

    NASA Astrophysics Data System (ADS)

    Haak, Alexander; van Stralen, Marijn; van Burken, Gerard; Klein, Stefan; Pluim, Josien P. W.; de Jong, Nico; van der Steen, Antonius F. W.; Bosch, Johan G.

    2012-03-01

    For electrophysiology intervention monitoring, we intend to reconstruct 4D ultrasound (US) of structures in the beating heart from 2D transesophageal US by scanplane rotation. The image acquisition is continuous but unsynchronized to the heart rate, which results in a sparsely and irregularly sampled dataset, so a spatiotemporal interpolation method is desired. Previously, we showed the potential of normalized convolution (NC) for interpolating such datasets. We explored 4D interpolation by three different methods: NC, nearest neighbor (NN), and temporal binning followed by linear interpolation (LTB). The test datasets were derived by slicing three 4D echocardiography datasets at random rotation angles (θ, range: 0-180°) and random normalized cardiac phase (τ, range: 0-1). Four different distributions of rotated 2D images with 600, 900, 1350, and 1800 2D input images were created from all TEE sets. A 2D Gaussian kernel was used for NC and optimal kernel sizes (σθ and στ) were found by performing an exhaustive search. The RMS gray value error (RMSE) of the reconstructed images was computed for all interpolation methods. The estimated optimal kernels were in the range of σθ = 3.24-3.69°/στ = 0.045-0.048, σθ = 2.79°/στ = 0.031-0.038, σθ = 2.34°/στ = 0.023-0.026, and σθ = 1.89°/στ = 0.021-0.023 for 600, 900, 1350, and 1800 input images, respectively. We showed that NC outperforms NN and LTB. For a small number of input images the advantage of NC is more pronounced.
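
    A minimal 2D sketch of normalized convolution is shown below: the sparsely sampled signal times its certainty map and the certainty map itself are both filtered with the same Gaussian applicability kernel, and their ratio gives the interpolated estimate. The image, sampling density and kernel width are illustrative, not the values used in the study.

```python
# Normalized convolution (NC) sketch on a synthetic, sparsely sampled 2D image.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:128, 0:128]
truth = np.sin(xx / 10.0) * np.cos(yy / 14.0)

certainty = (rng.random(truth.shape) < 0.05).astype(float)  # 5% of pixels are sampled
samples = truth * certainty

sigma = 3.0                                   # illustrative applicability kernel width
num = gaussian_filter(samples, sigma)         # conv(signal * certainty, kernel)
den = gaussian_filter(certainty, sigma)       # conv(certainty, kernel)
nc = num / np.maximum(den, 1e-6)              # normalized convolution estimate

print(np.abs(nc - truth)[certainty == 0].mean())  # interpolation error at unsampled pixels
```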

  2. Tumour control probability derived from dose distribution in homogeneous and heterogeneous models: assuming similar pharmacokinetics, 125Sn-177Lu is superior to 90Y-177Lu in peptide receptor radiotherapy

    NASA Astrophysics Data System (ADS)

    Walrand, Stephan; Hanin, François-Xavier; Pauwels, Stanislas; Jamar, François

    2012-07-01

    Clinical trials on 177Lu-90Y therapy have used empirical activity ratios. Radionuclides (RN) with a larger maximal beta range could favourably replace 90Y. Our aim is to provide RN dose-deposition kernels and to compare the tumour control probability (TCP) of RN combinations. Dose kernels were derived by integration of the mono-energetic beta-ray dose distributions (computed using Monte Carlo) weighted by their respective beta spectra. Nine homogeneous spherical tumours (1-25 mm in diameter) and four spherical tumours including a lattice of cold, but alive, spheres (1, 3, 5, 7 mm in diameter) were modelled. The TCP for 93Y, 90Y and 125Sn in combination with 177Lu in variable proportions (chosen to keep the renal cortex biological effective dose constant) was derived by 3D dose kernel convolution. For a mean tumour-absorbed dose of 180 Gy, 2 mm homogeneous tumours and tumours including 3 mm diameter cold alive spheres were both well controlled (TCP > 0.9) using a 75-25% combination of 177Lu and 90Y activity. However, 125Sn-177Lu achieved a significantly better result by controlling 1 mm homogeneous tumours simultaneously with tumours including 5 mm diameter cold alive spheres. Clinical trials using RN combinations should use RN proportions tuned to the patient dosimetry. 125Sn production and its coupling to somatostatin analogues appear feasible. Assuming similar pharmacokinetics, 125Sn is the best RN for combination with 177Lu in peptide receptor radiotherapy, justifying pharmacokinetic studies in rodents of 125Sn-labelled somatostatin analogues.

  3. A high-order strong stability preserving Runge-Kutta method for three-dimensional full waveform modeling and inversion of anelastic models

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.; Yang, D.; Bao, X.; Li, J.; Zhang, W.

    2017-12-01

    Accurate and efficient forward modeling methods are important for high-resolution full waveform inversion. Compared with the elastic case, solving the anelastic wave equation requires more computational time because of the need to compute additional material-independent anelastic functions. A numerical scheme with a large Courant-Friedrichs-Lewy (CFL) condition number enables us to use a large time step to simulate wave propagation, which improves computational efficiency. In this work, we apply the fourth-order strong stability preserving Runge-Kutta method with an optimal CFL coefficient to solve the anelastic wave equation. We use a fourth-order DRP/opt MacCormack scheme for the spatial discretization, and we approximate the rheological behavior of the Earth by using the generalized Maxwell body model. With a larger CFL condition number, we find that the computational efficiency is significantly improved compared with the traditional fourth-order Runge-Kutta method. Then, we apply the scattering-integral method for calculating travel time and amplitude sensitivity kernels with respect to velocity and attenuation structures. For each source, we carry out one forward simulation and save the time-dependent strain tensor. For each station, we carry out three `backward' simulations for the three components and save the corresponding strain tensors. The sensitivity kernels at each point in the medium are the convolution of the two sets of strain tensors. Finally, we show several synthetic tests to verify the effectiveness of the strong stability preserving Runge-Kutta method in generating accurate synthetics in full waveform modeling, and in generating accurate strain tensors for calculating sensitivity kernels at regional and global scales.
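
    The strong-stability-preserving time stepping can be illustrated with the classic three-stage, third-order Shu-Osher scheme applied to a 1D upwind-discretized advection equation; this is only a sketch of the SSP idea, not the fourth-order optimal-CFL scheme or the DRP/opt MacCormack discretization used in the work.

```python
# Strong-stability-preserving RK3 (Shu-Osher) sketch for 1D linear advection with upwinding.
import numpy as np

def rhs(u, dx, c=1.0):
    """First-order upwind semi-discretization of u_t + c u_x = 0 on a periodic grid."""
    return -c * (u - np.roll(u, 1)) / dx

def ssp_rk3_step(u, dt, dx):
    u1 = u + dt * rhs(u, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, dx))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2, dx))

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-200 * (x - 0.5) ** 2)     # smooth initial pulse
dt = 0.9 * dx                         # CFL number 0.9 for this scheme/discretization
for _ in range(int(0.25 / dt)):
    u = ssp_rk3_step(u, dt, dx)
```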

  4. Single-source dual-energy computed tomography: use of monoenergetic extrapolation for a reduction of metal artifacts.

    PubMed

    Mangold, Stefanie; Gatidis, Sergios; Luz, Oliver; König, Benjamin; Schabel, Christoph; Bongers, Malte N; Flohr, Thomas G; Claussen, Claus D; Thomas, Christoph

    2014-12-01

    The objective of this study was to retrospectively determine the potential of virtual monoenergetic (ME) reconstructions for a reduction of metal artifacts using a new-generation single-source computed tomographic (CT) scanner. The ethics committee of our institution approved this retrospective study with a waiver of the need for informed consent. A total of 50 consecutive patients (29 men and 21 women; mean [SD] age, 51.3 [16.7] years) with metal implants after osteosynthetic fracture treatment who had been examined using a single-source CT scanner (SOMATOM Definition Edge; Siemens Healthcare, Forchheim, Germany; consecutive dual-energy mode with 140 kV/80 kV) were selected. Using commercially available postprocessing software (syngo Dual Energy; Siemens AG), virtual ME data sets with extrapolated energy of 130 keV were generated (medium smooth convolution kernel D30) and compared with standard polyenergetic images reconstructed with a B30 (medium smooth) and a B70 (sharp) kernel. For quantification of the beam hardening artifacts, CT values were measured on circular lines surrounding bone and the osteosynthetic device, and frequency analyses of these values were performed using discrete Fourier transform. A high proportion of low frequencies to the spectrum indicates a high level of metal artifacts. The measurements in all data sets were compared using the Wilcoxon signed rank test. The virtual ME images with extrapolated energy of 130 keV showed significantly lower contribution of low frequencies after the Fourier transform compared with any polyenergetic data set reconstructed with D30, B70, and B30 kernels (P < 0.001). Sequential single-source dual-energy CT allows an efficient reduction of metal artifacts using high-energy ME extrapolation after osteosynthetic fracture treatment.

  5. Kernel-Phase Interferometry for Super-Resolution Detection of Faint Companions

    NASA Astrophysics Data System (ADS)

    Factor, Samuel M.; Kraus, Adam L.

    2017-06-01

    Direct detection of close-in companions (exoplanets or binary systems) is notoriously difficult. While coronagraphs and point spread function (PSF) subtraction can be used to reduce contrast and dig out signals of companions under the PSF, there are still significant limitations in separation and contrast near λ/D. Non-redundant aperture masking (NRM) interferometry can be used to detect companions well inside the PSF of a diffraction-limited image, though the mask discards ~95% of the light gathered by the telescope and thus the technique is severely flux limited. Kernel-phase analysis applies interferometric techniques similar to NRM to a diffraction-limited image utilizing the full aperture. Instead of non-redundant closure phases, kernel phases are constructed from a grid of points on the full aperture, simulating a redundant interferometer. I have developed a new, easy-to-use, faint-companion detection pipeline which analyzes kernel phases utilizing Bayesian model comparison. I demonstrate this pipeline on archival images from HST/NICMOS, searching for new companions in order to constrain binary formation models at separations inaccessible to previous techniques. Using this method, it is possible to detect a companion well within the classical λ/D Rayleigh diffraction limit using a fraction of the telescope time required by NRM. Since the James Webb Space Telescope (JWST) will be able to perform NRM observations, further development and characterization of kernel-phase analysis will allow efficient use of highly competitive JWST telescope time. As no mask is needed, this technique can easily be applied to archival data and even target acquisition images (e.g. from JWST), making the detection of close-in companions cheap and simple, as no additional observations are needed.

  6. Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1985-01-01

    The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
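
    A minimal sketch of a Chahine-type multiplicative relaxation is shown below for a toy linear system with a lower-triangular, positive kernel matrix and nonzero diagonal; the real application is a nonlinear radiative-transfer inversion, so the numbers and convergence behaviour here are only illustrative.

```python
# Chahine-style relaxation sketch: x_i <- x_i * y_i / (K x)_i for a triangular toy kernel.
import numpy as np

rng = np.random.default_rng(3)
n = 8
K = 0.2 * np.tril(rng.random((n, n)), k=-1) + np.diag(1.0 + rng.random(n))  # nonzero diagonal
x_true = 1.0 + rng.random(n)
y = K @ x_true                       # synthetic "measurements"

x = np.ones(n)                       # positive first guess
for _ in range(300):
    x *= y / (K @ x)                 # multiplicative relaxation update

print(np.abs(x - x_true).max())      # small residual after relaxation
```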

  7. Two-dimensional convolute integers for analytical instrumentation

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1982-01-01

    As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown-mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical to their one-dimensional counterparts, that is, as a weighted nearest-neighbor moving average with zero phase shift, using convolute integer (universal number) weighting coefficients.
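
    A sketch of the two-dimensional convolute-integer idea follows: fitting a low-order bivariate polynomial over a moving window by least squares reduces to a fixed convolution kernel, here computed with a pseudo-inverse rather than tabulated integers; the window size and polynomial order are illustrative.

```python
# Two-dimensional Savitzky-Golay-style smoothing kernel, applied by convolution.
import numpy as np
from scipy.signal import convolve2d

def sg2d_kernel(half=2, order=2):
    """Return the smoothing kernel for a (2*half+1)^2 window and a given polynomial order."""
    idx = np.arange(-half, half + 1)
    yy, xx = np.meshgrid(idx, idx, indexing="ij")
    # Design matrix with terms x^i * y^j, i + j <= order (constant term first)
    cols = [(xx ** i * yy ** j).ravel()
            for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    # Row of the pseudo-inverse for the constant term evaluates the fit at the window centre
    weights = np.linalg.pinv(A)[0]
    return weights.reshape(2 * half + 1, 2 * half + 1)

kernel = sg2d_kernel(half=2, order=2)
noisy = np.random.default_rng(4).normal(size=(64, 64))
smoothed = convolve2d(noisy, kernel, mode="same", boundary="symm")
```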

  8. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    DOE PAGES

    Acciarri, R.; Adams, C.; An, R.; ...

    2017-03-14

    Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  9. Gas Classification Using Deep Convolutional Neural Networks.

    PubMed

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-08

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. The proposed gas neural network, named GasNet, consists of six convolutional blocks, each block consisting of six layers; a pooling layer; and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) and Multiple Layer Perceptron (MLP) methods.
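
    A much smaller PyTorch sketch of the block structure described above (convolutional blocks, a pooling layer and a fully-connected layer) is given below; the layer counts, channel sizes and the synthetic sensor input are assumptions for illustration and do not reproduce GasNet.

```python
# Minimal convolutional classifier sketch for multichannel "sensor" sequences (not GasNet).
import torch
import torch.nn as nn

class TinyGasCNN(nn.Module):
    def __init__(self, n_sensors=8, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.BatchNorm1d(64), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                # pooling layer
        )
        self.classifier = nn.Linear(64, n_classes)  # fully-connected layer

    def forward(self, x):                           # x: (batch, n_sensors, time)
        return self.classifier(self.features(x).squeeze(-1))

model = TinyGasCNN()
logits = model(torch.randn(16, 8, 100))             # 16 fake sensor recordings
print(logits.shape)                                 # torch.Size([16, 4])
```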

  10. Gas Classification Using Deep Convolutional Neural Networks

    PubMed Central

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-01

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. The proposed gas neural network, named GasNet, consists of six convolutional blocks, each block consisting of six layers; a pooling layer; and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) and Multiple Layer Perceptron (MLP) methods. PMID:29316723

  11. Clinicopathologic correlations in Alibert-type mycosis fungoides.

    PubMed

    Eng, A M; Blekys, I; Worobec, S M

    1981-06-01

    Five cases of mycosis fungoides of the Alibert type were studied by taking multiple biopsy specimens at different stages of the disease. Large hyperchromatic, slightly irregular mononuclear cells are the most frequent cells. Ultrastructurally, the cells were only slightly convoluted, had prominent heterochromatin banding at the nuclear membrane, and unremarkable cytoplasmic organelles. Highly convoluted cerebriform nucleated cells were few. Large regular vesicular histiocytes were prominent in the early stages. Ultrastructurally, the cells showed evenly distributed euchromatin. Epidermotropism was equally as important as Pautrier's abscess as a hallmark of the disease. Stereologic techniques comparing the infiltrate with regard to size and convolution of cells in all stages of mycosis fungoides with infiltrates seen in a variety of benign dermatoses showed no statistically significant differences.

  12. A New Method of Synthetic Aperture Radar Image Reconstruction Using Modified Convolution Back-Projection Algorithm.

    DTIC Science & Technology

    1986-08-01

    The Convolution Back-Projection (CBP) algorithm is a widely used technique in Computer Aided Tomography (CAT). In this work, a new method of synthetic aperture radar image reconstruction from a set of projections is developed using a modified CBP algorithm (University of Illinois at Urbana-Champaign; approved for public release).

  13. New syndrome decoder for (n, 1) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for (n, 1) convolutional codes (CC) that is different from, and simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum-weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.

  14. Accurate lithography simulation model based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to simulate an entire chip in realistic time, a compact resist model, built for fast calculation, is commonly used. To obtain an accurate compact resist model, a complicated non-linear model function must be determined, but it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNN (Convolutional Neural Networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  15. Deep neural network using color and synthesized three-dimensional shape for face recognition

    NASA Astrophysics Data System (ADS)

    Rhee, Seon-Min; Yoo, ByungIn; Han, Jae-Joon; Hwang, Wonjun

    2017-03-01

    We present an approach for face recognition using synthesized three-dimensional (3-D) shape information together with two-dimensional (2-D) color in a deep convolutional neural network (DCNN). As 3-D facial shape is hardly affected by the extrinsic 2-D texture changes caused by illumination, make-up, and occlusions, it could provide more reliable complementary features in harmony with the 2-D color feature in face recognition. Unlike other approaches that use 3-D shape information with the help of an additional depth sensor, our approach generates a personalized 3-D face model by using only face landmarks in the 2-D input image. Using the personalized 3-D face model, we generate a frontalized 2-D color facial image as well as 3-D facial images (e.g., a depth image and a normal image). In our DCNN, we first feed 2-D and 3-D facial images into independent convolutional layers, where the low-level kernels are successfully learned according to their own characteristics. Then, we merge them and feed into higher-level layers under a single deep neural network. Our proposed approach is evaluated with labeled faces in the wild dataset and the results show that the error rate of the verification rate at false acceptance rate 1% is improved by up to 32.1% compared with the baseline where only a 2-D color image is used.

  16. A Hybrid Model for Spatially and Temporally Resolved Ozone Exposures in the Continental United States

    PubMed Central

    Di, Qian; Rowland, Sebastian; Koutrakis, Petros; Schwartz, Joel

    2017-01-01

    Ground-level ozone is an important atmospheric oxidant, which exhibits considerable spatial and temporal variability in its concentration level. Existing modeling approaches for ground-level ozone include chemical transport models, land-use regression, Kriging, and data fusion of chemical transport models with monitoring data. Each of these methods has both strengths and weaknesses. Combining those complementary approaches could improve model performance. Meanwhile, satellite-based total column ozone, combined with ozone vertical profile, is another potential input. We propose a hybrid model that integrates the above variables to achieve spatially and temporally resolved exposure assessments for ground-level ozone. We used a neural network for its capacity to model interactions and nonlinearity. Convolutional layers, which use convolution kernels to aggregate nearby information, were added to the neural network to account for spatial and temporal autocorrelation. We trained the model with AQS 8-hour daily maximum ozone in the continental United States from 2000 to 2012 and tested it with left out monitoring sites. Cross-validated R2 on the left out monitoring sites ranged from 0.74 to 0.80 (mean 0.76) for predictions on 1 km×1 km grid cells, which indicates good model performance. Model performance remains good even at low ozone concentrations. The prediction results facilitate epidemiological studies to assess the health effect of ozone in the long term and the short term. PMID:27332675

  17. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.

  18. Kernel analysis of partial least squares (PLS) regression models.

    PubMed

    Shinzawa, Hideyuki; Ritthiruangdej, Pitiporn; Ozaki, Yukihiro

    2011-05-01

    An analytical technique based on kernel matrix representation is demonstrated to provide further chemically meaningful insight into partial least squares (PLS) regression models. The kernel matrix condenses essential information about scores derived from PLS or principal component analysis (PCA). Thus, it becomes possible to establish the proper interpretation of the scores. A PLS model for the total nitrogen (TN) content in multiple Thai fish sauces is built with a set of near-infrared (NIR) transmittance spectra of the fish sauce samples. The kernel analysis of the scores effectively reveals that the variation of the spectral feature induced by the change in protein content is substantially associated with the total water content and the protein hydration. Kernel analysis is also carried out on a set of time-dependent infrared (IR) spectra representing transient evaporation of ethanol from a binary mixture solution of ethanol and oleic acid. A PLS model to predict the elapsed time is built with the IR spectra and the kernel matrix is derived from the scores. The detailed analysis of the kernel matrix provides penetrating insight into the interaction between the ethanol and the oleic acid.
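
    The basic object of the analysis, the kernel matrix condensed from the scores, can be sketched as follows; the random data stand in for the NIR spectra, and the linear kernel of the training-set scores is formed directly.

```python
# Sketch: build a PLS model and form the kernel (Gram) matrix of its sample scores.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 200))                 # stand-in for preprocessed NIR spectra
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=40)   # stand-in property (e.g. total nitrogen)

pls = PLSRegression(n_components=3).fit(X, y)
T = pls.transform(X)                           # PLS scores of the training samples
K = T @ T.T                                    # kernel matrix condensing the scores
print(K.shape)                                 # (40, 40): sample-to-sample similarity
```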

  19. Near-infrared hyperspectral imaging for detecting Aflatoxin B1 of maize kernels

    USDA-ARS?s Scientific Manuscript database

    The feasibility of detecting Aflatoxin B1 in maize kernels inoculated with Aspergillus flavus conidia in the field was assessed using a near-infrared hyperspectral imaging technique. After pixel-level calibration and wavelength-dependent offset correction, a masking method was adopted to reduce the noise and ...

  20. Coding/modulation trade-offs for Shuttle wideband data links

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Huth, G. K.; Trumpis, B. D.

    1974-01-01

    This paper describes various modulation and coding schemes which are potentially applicable to the Shuttle wideband data relay communications link. This link will be capable of accommodating up to 50 Mbps of scientific data and will be subject to a power constraint which forces the use of channel coding. Although convolutionally encoded coherent binary PSK is the tentative signal design choice for the wideband data relay link, FM techniques are of interest because of the associated hardware simplicity and because an FM system is already planned to be available for transmission of television via relay satellite to the ground. Binary and M-ary FSK are considered as candidate modulation techniques, and both coherent and noncoherent ground station detection schemes are examined. The potential use of convolutional coding is considered in conjunction with each of the candidate modulation techniques.

  1. Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).

    PubMed

    Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad

    2018-04-01

    A tumor could be found in any area of the brain and could be of any size, shape, and contrast. There may exist multiple tumors of different types in a human brain at the same time. Accurate tumor area segmentation is considered a primary step for the treatment of brain tumors. Deep learning is a set of promising techniques that could provide better results than non-deep-learning techniques for segmenting the tumorous part inside a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with the feeding of convolutional feature maps at the peer level. Experimental results on the BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over other approaches in this area of research. © 2018 Wiley Periodicals, Inc.

  2. Optical transmission testing based on asynchronous sampling techniques: images analysis containing chromatic dispersion using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Mrozek, T.; Perlicki, K.; Tajmajer, T.; Wasilewski, P.

    2017-08-01

    The article presents an image analysis method, based on data obtained from the asynchronous delay tap sampling (ADTS) technique, which is used for simultaneous monitoring of various impairments occurring in the physical layer of the optical network. The ADTS method enables the visualization of the optical signal in the form of characteristics (so-called phase portraits) that change their shape under the influence of impairments such as chromatic dispersion, polarization mode dispersion and ASE noise. Using this method, a simulation model was built with OptSim 4.0. After the simulation study, data were obtained in the form of images that were further analyzed using a convolutional neural network algorithm. The main goal of the study was to train a convolutional neural network to recognize the selected impairment (distortion), then to test its accuracy and estimate the impairment for a selected set of test images. The input data consisted of processed binary images represented as two-dimensional matrices indexed by pixel position. This article focuses only on the analysis of images containing chromatic dispersion.

  3. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.

  4. Improving KPCA Online Extraction by Orthonormalization in the Feature Space.

    PubMed

    Souza Filho, Joao B O; Diniz, Paulo S R

    2018-04-01

    Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both the cases, the orthogonalization of kernel components is achieved by the inclusion of some low complexity additional steps to the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of components extracted by the proposed methods, as compared with the state-of-the-art online KPCA extraction algorithms.
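
    For orientation, a batch kernel PCA baseline is sketched below with scikit-learn; this is not the online kernel Hebbian algorithm or its orthonormalized variants discussed in the brief, only the kind of nonlinear kernel components those online algorithms approximate.

```python
# Batch kernel PCA baseline (not the online GHA-based extraction of the paper).
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

X, _ = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10.0).fit(X)
Z = kpca.transform(X)      # nonlinear components that separate the two circles
print(Z.shape)             # (400, 2)
```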

  5. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.

  6. Accurate identification of motor unit discharge patterns from high-density surface EMG and validation with a novel signal-based performance metric

    NASA Astrophysics Data System (ADS)

    Holobar, A.; Minetto, M. A.; Farina, D.

    2014-02-01

    Objective. A signal-based metric for assessment of accuracy of motor unit (MU) identification from high-density surface electromyograms (EMG) is introduced. This metric, so-called pulse-to-noise-ratio (PNR), is computationally efficient, does not require any additional experimental costs and can be applied to every MU that is identified by the previously developed convolution kernel compensation technique. Approach. The analytical derivation of the newly introduced metric is provided, along with its extensive experimental validation on both synthetic and experimental surface EMG signals with signal-to-noise ratios ranging from 0 to 20 dB and muscle contraction forces from 5% to 70% of the maximum voluntary contraction. Main results. In all the experimental and simulated signals, the newly introduced metric correlated significantly with both sensitivity and false alarm rate in identification of MU discharges. Practically all the MUs with PNR > 30 dB exhibited sensitivity >90% and false alarm rates <2%. Therefore, a threshold of 30 dB in PNR can be used as a simple method for selecting only reliably decomposed units. Significance. The newly introduced metric is considered a robust and reliable indicator of accuracy of MU identification. The study also shows that high-density surface EMG can be reliably decomposed at contraction forces as high as 70% of the maximum.

  7. Kernel Methods for Mining Instance Data in Ontologies

    NASA Astrophysics Data System (ADS)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and metadata available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable to directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data show promising results and the usefulness of our approach.

  8. Density separation as a strategy to reduce the enzyme load of preharvest sprouted wheat and enhance its bread making quality.

    PubMed

    Olaerts, Heleen; De Bondt, Yamina; Courtin, Christophe M

    2018-02-15

    As preharvest sprouting of wheat impairs its use in food applications, postharvest solutions to this problem are required. Due to the high kernel-to-kernel variability in enzyme activity in a batch of sprouted wheat, the potential of eliminating severely sprouted kernels based on density differences in NaCl solutions was evaluated. Compared to higher-density kernels, lower-density kernels displayed higher α-amylase, endoxylanase, and peptidase activities, as well as signs of (incipient) protein, β-glucan and arabinoxylan breakdown. By discarding the lower-density kernels of mildly and severely sprouted wheat batches (11% and 16%, respectively), density separation increased the flour FN of the batch from 280 to 345 s and from 135 to 170 s and increased RVA viscosity. This in turn improved dough handling, bread crumb texture and crust color. These data indicate that density separation is a powerful technique to increase the quality of a batch of sprouted wheat. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Coded spread spectrum digital transmission system design study

    NASA Technical Reports Server (NTRS)

    Heller, J. A.; Odenwalder, J. P.; Viterbi, A. J.

    1974-01-01

    Results are presented of a comprehensive study of the performance of Viterbi-decoded convolutional codes in the presence of nonideal carrier tracking and bit synchronization. A constraint length 7, rate 1/3 convolutional code and parameters suitable for the space shuttle coded communications links are used. Mathematical models are developed and theoretical and simulation results are obtained to determine the tracking and acquisition performance of the system. Pseudorandom sequence spread spectrum techniques are also considered to minimize potential degradation caused by multipath.

  10. The convolutional differentiator method for numerical modelling of acoustic and elastic wavefields

    NASA Astrophysics Data System (ADS)

    Zhang, Zhong-Jie; Teng, Ji-Wen; Yang, Ding-Hui

    1996-02-01

    Based on the techniques of forward and inverse Fourier transformation, the authors discuss the design of ordinary differentiators and apply them to the simulation of acoustic and elastic wavefields in isotropic media. To effectively suppress the Gibbs effects caused by truncation, a Hanning window is introduced. Model computations show that the convolutional differentiator method is fast, has low memory requirements and high precision, making it a promising method for numerical simulation.
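
    The basic construction can be sketched as an FFT-based derivative operator whose spectrum is tapered by a Hann-type window to damp the Gibbs oscillations caused by truncation; the grid size, test function and taper shape are illustrative choices, not the operator designs used in the paper.

```python
# FFT-based differentiator with a Hann-type spectral taper (illustrative construction).
import numpy as np

n, L = 256, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(3 * x)

k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)                # wavenumbers
taper = 0.5 * (1.0 + np.cos(np.pi * k / np.abs(k).max()))   # Hann-type taper on the spectrum

du = np.real(np.fft.ifft(1j * k * taper * np.fft.fft(u)))   # tapered spectral derivative
print(np.abs(du - 3 * np.cos(3 * x)).max())                 # small, slightly biased by the taper
```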

  11. Experimental study of turbulent flame kernel propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve

    2008-07-15

    Flame kernels in spark-ignited combustion systems dominate the flame propagation and the combustion stability and performance. They are likely controlled by the spark energy, flow field and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of electrodes on the flame kernel structure and the shot-to-shot variation of spark energy. Four flames have been investigated at equivalence ratios, φj, of 0.8 and 1.0 and jet velocities, Uj, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been recorded at several time intervals after laser ignition, between 10 μs and 2 ms. The data show that the flame kernel starts with a spherical shape, changes gradually to a peanut-like and then a mushroom-like structure, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric flames and at the slower jet velocity. The growth rate of the average flame kernel radius follows two linear relations; the first, during the first 100 μs, is almost three times faster than that at the later stage between 100 and 2000 μs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions. (author)

  12. MO-FG-CAMPUS-TeP1-05: Rapid and Efficient 3D Dosimetry for End-To-End Patient-Specific QA of Rotational SBRT Deliveries Using a High-Resolution EPID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Han, B; Xing, L

    2016-06-15

    Purpose: EPID-based patient-specific quality assurance provides verification of the planning setup and delivery process that phantomless QA and log-file based virtual dosimetry methods cannot achieve. We present a method for EPID-based QA utilizing spatially-variant EPID response kernels that allows for direct calculation of the entrance fluence and 3D phantom dose. Methods: An EPID dosimetry system was utilized for 3D dose reconstruction in a cylindrical phantom for the purposes of end-to-end QA. Monte Carlo (MC) methods were used to generate pixel-specific point-spread functions (PSFs) characterizing the spatially non-uniform EPID portal response in the presence of phantom scatter. The spatially-variant PSFs were decomposed into spatially-invariant basis PSFs, with the symmetric central-axis kernel as the primary basis kernel and off-axis kernels representing orthogonal perturbations in pixel-space. This compact and accurate characterization enables the use of a modified Richardson-Lucy deconvolution algorithm to directly reconstruct entrance fluence from EPID images without iterative scatter subtraction. High-resolution phantom dose kernels were cogenerated in MC with the PSFs, enabling direct recalculation of the resulting phantom dose by rapid forward convolution once the entrance fluence was calculated. A Delta4 QA phantom was used to validate the dose reconstructed in this approach. Results: The spatially-invariant representation of the EPID response accurately reproduced the entrance fluence with >99.5% fidelity with a simultaneous reduction of >60% in computational overhead. 3D dose for 10^6 voxels was reconstructed for the entire phantom geometry. A 3D global gamma analysis demonstrated a >95% pass rate at 3%/3mm. Conclusion: Our approach demonstrates the capabilities of an EPID-based end-to-end QA methodology that is more efficient than traditional EPID dosimetry methods. Displacing the point of measurement external to the QA phantom reduces the necessary complexity of the phantom itself while offering a method that is highly scalable and inherently generalizable to rotational and trajectory based deliveries. This research was partially supported by Varian.
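
    The core deconvolution step can be illustrated with a plain Richardson-Lucy loop on a synthetic image and Gaussian PSF; this sketch omits the spatially-variant basis-kernel decomposition and the dose recalculation that the abstract describes.

```python
# Plain Richardson-Lucy deconvolution sketch on synthetic data (spatially-invariant PSF only).
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-9):
    psf_m = psf[::-1, ::-1]                          # mirrored PSF for the adjoint step
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)
        estimate *= fftconvolve(ratio, psf_m, mode="same")
    return estimate

yy, xx = np.mgrid[-15:16, -15:16]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2)); psf /= psf.sum()
truth = np.zeros((128, 128)); truth[40:60, 50:90] = 1.0   # synthetic "fluence" field
observed = fftconvolve(truth, psf, mode="same")           # blurred "portal image"
restored = richardson_lucy(observed, psf)
```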

  13. Simultaneous multiple non-crossing quantile regression estimation using kernel constraints

    PubMed Central

    Liu, Yufeng; Wu, Yichao

    2011-01-01

    Quantile regression (QR) is a very useful statistical tool for learning the relationship between the response variable and covariates. For many applications, one often needs to estimate multiple conditional quantile functions of the response variable given covariates. Although one can estimate multiple quantiles separately, it is of great interest to estimate them simultaneously. One advantage of simultaneous estimation is that multiple quantiles can share strength among them to gain better estimation accuracy than individually estimated quantile functions. Another important advantage of joint estimation is the feasibility of incorporating simultaneous non-crossing constraints of QR functions. In this paper, we propose a new kernel-based multiple QR estimation technique, namely simultaneous non-crossing quantile regression (SNQR). We use kernel representations for QR functions and apply constraints on the kernel coefficients to avoid crossing. Both unregularised and regularised SNQR techniques are considered. Asymptotic properties such as asymptotic normality of linear SNQR and oracle properties of the sparse linear SNQR are developed. Our numerical results demonstrate the competitive performance of our SNQR over the original individual QR estimation. PMID:22190842
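
    As a point of comparison for the joint, non-crossing estimation described above, the sketch below fits several quantiles separately with ordinary linear quantile regression and then removes crossings by sorting the predicted quantiles pointwise. This is a simple baseline illustration of the crossing problem, not the paper's kernel-constrained SNQR estimator; the data and quantile levels are arbitrary.

```python
# Hedged baseline sketch: fit three quantiles separately, then remove
# crossings by pointwise sorting (a simple rearrangement). Synthetic
# heteroscedastic data.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3 + 0.1 * X[:, 0])

taus = [0.1, 0.5, 0.9]
preds = np.column_stack([
    QuantileRegressor(quantile=tau, alpha=1e-4, solver="highs").fit(X, y).predict(X)
    for tau in taus
])
noncrossing = np.sort(preds, axis=1)   # enforce q_0.1 <= q_0.5 <= q_0.9 at every x
```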

  14. Relationship of source and sink in determining kernel composition of maize

    PubMed Central

    Seebauer, Juliann R.; Singletary, George W.; Krumpelman, Paulette M.; Ruffo, Matías L.; Below, Frederick E.

    2010-01-01

    The relative role of the maternal source and the filial sink in controlling the composition of maize (Zea mays L.) kernels is unclear and may be influenced by the genotype and the N supply. The objective of this study was to determine the influence of assimilate supply from the vegetative source and utilization of assimilates by the grain sink on the final composition of maize kernels. Intermated B73×Mo17 recombinant inbred lines (IBM RILs) which displayed contrasting concentrations of endosperm starch were grown in the field with deficient or sufficient N, and the source supply altered by ear truncation (45% reduction) at 15 d after pollination (DAP). The assimilate supply into the kernels was determined at 19 DAP using the agar trap technique, and the final kernel composition was measured. The influence of N supply and kernel ear position on final kernel composition was also determined for a commercial hybrid. Concentrations of kernel protein and starch could be altered by genotype or the N supply, but remained fairly constant along the length of the ear. Ear truncation also produced a range of variation in endosperm starch and protein concentrations. The C/N ratio of the assimilate supply at 19 DAP was directly related to the final kernel composition, with an inverse relationship between the concentrations of starch and protein in the mature endosperm. The accumulation of kernel starch and protein in maize is uniform along the ear, yet adaptable within genotypic limits, suggesting that kernel composition is source limited in maize. PMID:19917600

  15. Searching Remote Homology with Spectral Clustering with Symmetry in Neighborhood Cluster Kernels

    PubMed Central

    Maulik, Ujjwal; Sarkar, Anasua

    2013-01-01

    Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles and the Markov clustering algorithms are currently the most popular methods for protein family recognition. The deviation from random walks with inflation or dependency on hard threshold in similarity measure in those methods requires an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity for enhancing sensitivity in detecting homology independent of “recent” paralogs. The spectral clustering approach with new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally reducing inter-cluster walks. When combined with the corrections based on modified symmetry based proximity norm deemphasizing outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also show the superior performance scores provided by the proposed kernels. Similar performance improvement also is found over an existing large dataset. Therefore the proposed spectral clustering framework over combined local alignment kernels with modified symmetry based correction achieves superior performance for unsupervised remote homolog detection even in multi-domain and promiscuous domain proteins from Genolevures database families with better biological relevance. Source code available upon request. Contact: sarkar@labri.fr. PMID:23457439

  16. Searching remote homology with spectral clustering with symmetry in neighborhood cluster kernels.

    PubMed

    Maulik, Ujjwal; Sarkar, Anasua

    2013-01-01

    Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles and the Markov clustering algorithms are currently the most popular methods for protein family recognition. The deviation from random walks with inflation or dependency on hard threshold in similarity measure in those methods requires an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity for enhancing sensitivity in detecting homology independent of "recent" paralogs. The spectral clustering approach with new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally reducing inter-cluster walks. When combined with the corrections based on modified symmetry based proximity norm deemphasizing outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also show the superior performance scores provided by the proposed kernels. Similar performance improvement also is found over an existing large dataset. Therefore the proposed spectral clustering framework over combined local alignment kernels with modified symmetry based correction achieves superior performance for unsupervised remote homolog detection even in multi-domain and promiscuous domain proteins from Genolevures database families with better biological relevance. Source code available upon request. sarkar@labri.fr.

  17. Enhanced Line Integral Convolution with Flow Feature Detection

    NASA Technical Reports Server (NTRS)

    Lane, David; Okada, Arthur

    1996-01-01

    The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain. The method produces a flow texture image based on the input velocity field defined in the domain. Because of the nature of the algorithm, the texture image tends to be blurry. This sometimes makes it difficult to identify boundaries where flow separation and reattachments occur. We present techniques to enhance LIC texture images and use colored texture images to highlight flow separation and reattachment boundaries. Our techniques have been applied to several flow fields defined in 3D curvilinear multi-block grids and scientists have found the results to be very useful.
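
    The core of LIC is a blur of a white-noise texture along streamlines of the vector field. The sketch below is a minimal, unoptimized illustration of that idea on a synthetic circular flow; it is not the enhanced NASA implementation and omits the flow-feature detection and coloring described above.

```python
# Minimal LIC sketch: average white-noise values along streamlines traced
# forward and backward through the vector field. For illustration only.
import numpy as np

def lic(vx, vy, noise, length=20, step=0.5):
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            acc, count = 0.0, 0
            for sign in (1.0, -1.0):              # trace forward, then backward
                x, y = float(j), float(i)
                for _ in range(length):
                    ix, iy = int(round(x)), int(round(y))
                    if not (0 <= ix < w and 0 <= iy < h):
                        break
                    acc += noise[iy, ix]
                    count += 1
                    norm = np.hypot(vx[iy, ix], vy[iy, ix])
                    if norm == 0.0:
                        break
                    x += sign * step * vx[iy, ix] / norm
                    y += sign * step * vy[iy, ix] / norm
            out[i, j] = acc / max(count, 1)
    return out

# Example: a circular flow convolved with white noise gives ring-like streaks.
h = w = 128
yy, xx = np.mgrid[0:h, 0:w].astype(float)
vx, vy = -(yy - h / 2), (xx - w / 2)
texture = lic(vx, vy, np.random.rand(h, w))
```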

  18. A systematic FPGA acceleration design for applications based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Dong, Hao; Jiang, Li; Li, Tianjian; Liang, Xiaoyao

    2018-04-01

    Most FPGA accelerators for convolutional neural networks are designed to optimize the inner accelerator and ignore the optimization of the data path between the inner accelerator and the outer system. This can lead to poor performance in applications like real-time video object detection. We propose a new systematic FPGA acceleration design to solve this problem. This design takes the data path between the inner accelerator and the outer system into consideration and optimizes it using techniques such as hardware format transformation and frame compression. It also applies fixed-point arithmetic and a new pipeline technique to optimize the inner accelerator. Together, these make the final system's performance very good, reaching roughly 10 times the performance of the original system.

  19. Cascaded K-means convolutional feature learner and its application to face recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu

    2017-09-01

    Considerable effort has been devoted to devising image representations. However, handcrafted methods need strong domain knowledge and show low generalization ability, while conventional feature learning methods require enormous training data and rich parameter-tuning experience. A lightweight feature learner is presented to address these problems, with application to face recognition; it shares a similar topology with a convolutional neural network. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture nonlinear features. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on extended Yale B and FERET. In addition, our method achieves the best identification performance on the AR and labeled faces in the wild datasets among the comparative methods.
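
    A compact sketch of the pipeline this record describes (learn filters with K-means on image patches, convolve, apply tanh, then pool) is given below. It substitutes simple mean pooling over a fixed grid for the paper's multilevel spatial pyramid second-order pooling, and all sizes (patch size, filter count, pooling grid) are illustrative choices.

```python
# Hedged sketch of a K-means convolutional feature learner: learn filters by
# clustering image patches, convolve, apply tanh, then pool.
import numpy as np
from scipy.signal import convolve2d
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d

def learn_filters(images, n_filters=8, patch=7, per_image=200, seed=0):
    """images: list of 2D float arrays scaled to [0, 1]."""
    patches = np.vstack([
        extract_patches_2d(img, (patch, patch), max_patches=per_image,
                           random_state=seed).reshape(-1, patch * patch)
        for img in images
    ]).astype(float)
    patches -= patches.mean(axis=1, keepdims=True)        # remove patch DC level
    km = KMeans(n_clusters=n_filters, n_init=4, random_state=seed).fit(patches)
    return km.cluster_centers_.reshape(n_filters, patch, patch)

def extract_features(img, filters, grid=4):
    feats = []
    for f in filters:
        fmap = np.tanh(convolve2d(img, f, mode="valid"))   # nonlinear processing
        h, w = fmap.shape
        for bi in range(grid):                             # grid mean pooling
            for bj in range(grid):
                block = fmap[bi * h // grid:(bi + 1) * h // grid,
                             bj * w // grid:(bj + 1) * w // grid]
                feats.append(block.mean())
    return np.array(feats)

# Usage: filters = learn_filters(train_images); x = extract_features(img, filters)
```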

  20. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and requires less storage space, especially during testing. Finally, the experimental results show that RMEKLM is efficient and effective in terms of both complexity and classification. The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of EKM-based MKL; (3) this paper adopts Gauss Elimination, an off-the-shelf technique, to generate a basis of the original feature space, which is stable and efficient.
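
    Empirical kernel mapping, the building block this record starts from, can be sketched in a few lines: the eigendecomposition of the kernel matrix gives an explicit finite-dimensional map whose dot products reproduce the kernel. The reduction step of RMEKLM (Gauss Elimination to pick a reduced basis) and the multiple-kernel combination are not shown; the RBF kernel and its parameter are arbitrary.

```python
# Hedged sketch of empirical kernel mapping (EKM): an explicit feature map
# built from the eigendecomposition of the kernel matrix, whose dot products
# reproduce the kernel values.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def empirical_kernel_map(X_train, gamma=0.5, eps=1e-10):
    K = rbf_kernel(X_train, X_train, gamma=gamma)
    w, Q = np.linalg.eigh(K)
    keep = w > eps                              # drop numerically null directions
    P = Q[:, keep] / np.sqrt(w[keep])           # n x r projection matrix
    return lambda X: rbf_kernel(X, X_train, gamma=gamma) @ P

X = np.random.rand(50, 3)
phi = empirical_kernel_map(X)
Z = phi(X)
# Dot products in the empirical feature space reproduce the kernel matrix.
assert np.allclose(Z @ Z.T, rbf_kernel(X, X, gamma=0.5), atol=1e-6)
```

    The paper's contribution is to make this kind of mapping cheaper by selecting a reduced basis; the sketch keeps the full basis for clarity.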

  1. Automatic Semantic Segmentation of Brain Gliomas from MRI Images Using a Deep Cascaded Neural Network

    PubMed Central

    Mao, Lei; Liu, Chang; Xiong, Shuyu

    2018-01-01

    Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffuse and poorly contrasted. Consequently, the segmentation of brain tumor and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human intervention remains a challenging task. In this paper, we present a novel fully automatic segmentation method for MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) combined with transfer learning, was used to first process the MRI data. The goal of the first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. In particular, the ITCN exploited a convolutional neural network (CNN) with a deeper architecture and smaller kernels. The proposed approach was validated on the multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method obtained promising segmentation results with a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice. PMID:29755716

  2. Backscatter correction factor for megavoltage photon beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Yida; Zhu, Timothy C.

    2011-10-15

    Purpose: For routine clinical dosimetry of photon beams, it is often necessary to know the minimum thickness of backscatter phantom material to ensure that full backscatter condition exists. Methods: In case of insufficient backscatter thickness, one can determine the backscatter correction factor, BCF(s,d,t), defined as the ratio of absorbed dose measured on the central-axis of a phantom with backscatter thickness of t to that with full backscatter for square field size s and forward depth d. Measurements were performed in SAD geometry for 6 and 15 MV photon beams using a 0.125 cc thimble chamber for field sizes between 10 x 10 and 30 x 30 cm at depths between d{sub max} (1.5 cm for 6 MV and 3 cm for 15 MV) and 20 cm. Results: A convolution method was used to calculate BCF using Monte-Carlo simulated point-spread kernels generated for clinical photon beams for energies between Co-60 and 24 MV. The convolution calculation agrees with the experimental measurements to within 0.8% with the same physical trend. The value of BCF deviates more from 1 for lower energies and larger field sizes. According to our convolution calculation, the minimum BCF occurs at forward depth d{sub max} and 40 x 40 cm field size, 0.970 for 6 MV and 0.983 for 15 MV. Conclusions: The authors concluded that backscatter thickness is 6.0 cm for 6 MV and 4.0 cm for 15 MV for field size up to 10 x 10 cm when BCF = 0.998. If 4 cm backscatter thickness is used, BCF is 0.997 and 0.983 for field size of 10 x 10 and 40 x 40 cm for 6 MV, and is 0.998 and 0.990 for 10 x 10 and 40 x 40 cm for 15 MV, respectively.

  3. Chromatin accessibility prediction via a hybrid deep convolutional neural network.

    PubMed

    Liu, Qiao; Xia, Fei; Yin, Qijin; Jiang, Rui

    2018-03-01

    A majority of known genetic variants associated with human-inherited diseases lie in non-coding regions that lack adequate interpretation, making it indispensable to systematically discover functional sites at the whole genome level and precisely decipher their implications in a comprehensive manner. Although computational approaches have been complementing high-throughput biological experiments towards the annotation of the human genome, it still remains a big challenge to accurately annotate regulatory elements in the context of a specific cell type via automatic learning of the DNA sequence code from large-scale sequencing data. Indeed, the development of an accurate and interpretable model to learn the DNA sequence signature and further enable the identification of causative genetic variants has become essential in both genomic and genetic studies. We proposed Deopen, a hybrid framework mainly based on a deep convolutional neural network, to automatically learn the regulatory code of DNA sequences and predict chromatin accessibility. In a series of comparison with existing methods, we show the superior performance of our model in not only the classification of accessible regions against background sequences sampled at random, but also the regression of DNase-seq signals. Besides, we further visualize the convolutional kernels and show the match of identified sequence signatures and known motifs. We finally demonstrate the sensitivity of our model in finding causative noncoding variants in the analysis of a breast cancer dataset. We expect to see wide applications of Deopen with either public or in-house chromatin accessibility data in the annotation of the human genome and the identification of non-coding variants associated with diseases. Deopen is freely available at https://github.com/kimmo1019/Deopen. ruijiang@tsinghua.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  4. Whole brain diffeomorphic metric mapping via integration of sulcal and gyral curves, cortical surfaces, and images

    PubMed Central

    Du, Jia; Younes, Laurent; Qiu, Anqi

    2011-01-01

    This paper introduces a novel large deformation diffeomorphic metric mapping algorithm for whole brain registration where sulcal and gyral curves, cortical surfaces, and intensity images are simultaneously carried from one subject to another through a flow of diffeomorphisms. To the best of our knowledge, this is the first time that the diffeomorphic metric from one brain to another is derived in a shape space of intensity images and point sets (such as curves and surfaces) in a unified manner. We describe the Euler–Lagrange equation associated with this algorithm with respect to momentum, a linear transformation of the velocity vector field of the diffeomorphic flow. The numerical implementation for solving this variational problem, which involves large-scale kernel convolution in an irregular grid, is made feasible by introducing a class of computationally friendly kernels. We apply this algorithm to align magnetic resonance brain data. Our whole brain mapping results show that our algorithm outperforms the image-based LDDMM algorithm in terms of the mapping accuracy of gyral/sulcal curves, sulcal regions, and cortical and subcortical segmentation. Moreover, our algorithm provides better whole brain alignment than combined volumetric and surface registration (Postelnicu et al., 2009) and hierarchical attribute matching mechanism for elastic registration (HAMMER) (Shen and Davatzikos, 2002) in terms of cortical and subcortical volume segmentation. PMID:21281722

  5. Reformulation of Possio's kernel with application to unsteady wind tunnel interference

    NASA Technical Reports Server (NTRS)

    Fromme, J. A.; Golberg, M. A.

    1980-01-01

    An efficient method for computing the Possio kernel has remained elusive up to the present time. In this paper the Possio kernel is reformulated so that it can be computed accurately using existing high precision numerical quadrature techniques. Convergence to the correct values is demonstrated and optimization of the integration procedures is discussed. Since more general kernels, such as those associated with unsteady flows in ventilated wind tunnels, are analytic perturbations of the Possio free-air kernel, a more accurate evaluation of their collocation matrices results, with an exponential improvement in convergence. An application to predicting the frequency response of an airfoil-trailing edge control system in a wind tunnel, compared with that in free air, is given, showing strong interference effects.

  6. A Very Efficient Transfer Function Bounding Technique on Bit Error Rate for Viterbi Decoded, Rate 1/N Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage since all unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K <= 18 and rate 1/3 with K <= 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 0.000001. This criterion of determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used in searches for longer constraint length codes.

  7. Inter-slice Leakage Artifact Reduction Technique for Simultaneous Multi-Slice Acquisitions

    PubMed Central

    Cauley, Stephen F.; Polimeni, Jonathan R.; Bhat, Himanshu; Wang, Dingxin; Wald, Lawrence L.; Setsompop, Kawin

    2015-01-01

    Purpose Controlled aliasing techniques for simultaneously acquired EPI slices have been shown to significantly increase the temporal efficiency for both diffusion-weighted imaging (DWI) and fMRI studies. The “slice-GRAPPA” (SG) method has been widely used to reconstruct such data. We investigate robust optimization techniques for SG to ensure image reconstruction accuracy through a reduction of leakage artifacts. Methods Split slice-GRAPPA (SP-SG) is proposed as an alternative kernel optimization method. The performance of SP-SG is compared to standard SG using data collected on a spherical phantom and in-vivo on two subjects at 3T. Slice accelerated and non-accelerated data were collected for a spin-echo diffusion weighted acquisition. Signal leakage metrics and time-series SNR were used to quantify the performance of the kernel fitting approaches. Results The SP-SG optimization strategy significantly reduces leakage artifacts for both phantom and in-vivo acquisitions. In addition, a significant boost in time-series SNR for in-vivo diffusion weighted acquisitions with in-plane 2× and slice 3× accelerations was observed with the SP-SG approach. Conclusion By minimizing the influence of leakage artifacts during the training of slice-GRAPPA kernels, we have significantly improved reconstruction accuracy. Our robust kernel fitting strategy should enable better reconstruction accuracy and higher slice-acceleration across many applications. PMID:23963964

  8. The spatial sensitivity of Sp converted waves—scattered-wave kernels and their applications to receiver-function migration and inversion

    NASA Astrophysics Data System (ADS)

    Mancinelli, N. J.; Fischer, K. M.

    2018-03-01

    We characterize the spatial sensitivity of Sp converted waves to improve constraints on lateral variations in uppermost-mantle velocity gradients, such as the lithosphere-asthenosphere boundary (LAB) and the mid-lithospheric discontinuities. We use SPECFEM2D to generate 2-D scattering kernels that relate perturbations from an elastic half-space to Sp waveforms. We then show that these kernels can be well approximated using ray theory, and develop an approach to calculating kernels for layered background models. As proof of concept, we show that lateral variations in uppermost-mantle discontinuity structure are retrieved by implementing these scattering kernels in the first iteration of a conjugate-directions inversion algorithm. We evaluate the performance of this technique on synthetic seismograms computed for 2-D models with undulations on the LAB of varying amplitude, wavelength and depth. The technique reliably images the position of discontinuities with dips <35° and horizontal wavelengths >100-200 km. In cases of mild topography on a shallow LAB, the relative brightness of the LAB and Moho converters approximately agrees with the ratio of velocity contrasts across the discontinuities. Amplitude retrieval degrades at deeper depths. For dominant periods of 4 s, the minimum station spacing required to produce unaliased results is 5 km, but the application of a Gaussian filter can improve discontinuity imaging where station spacing is greater.

  9. Manufacture and quality control of interconnecting wire harnesses, Volume 1

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A standard is presented for the manufacture, installation, and quality control of eight types of interconnecting wire harnesses. The processes, process controls, and inspection and test requirements reflected are based on acknowledgment of harness design requirements, acknowledgment of harness installation requirements, identification of the various parts, materials, etc., utilized in harness manufacture, and formulation of a typical manufacturing flow diagram for identification of each manufacturing and quality control process, operation, inspection, and test. The document covers interconnecting wire harnesses defined in the design standard, including type 1, enclosed in fluorocarbon elastomer convolute tubing; type 2, enclosed in TFE convolute tubing lined with fiberglass braid; type 3, enclosed in TFE convolute tubing; and type 5, a combination of types 3 and 4. Knowledge gained through experience on the Saturn 5 program coupled with recent advances in techniques, materials, and processes was incorporated.

  10. Attachment of Free Filament Thermocouples for Temperature Measurements on Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Lei, Jih-Fen; Cuy, Michael D.; Wnuk, Stephen P.

    1998-01-01

    At the NASA Lewis Research Center, a new installation technique utilizing convoluted wire thermocouples (TC's) was developed and proven to produce very good adhesion on CMC's, even in a burner rig environment. Because of their unique convoluted design, such TC's of various types and sizes adhere to flat or curved CMC specimens with no sign of delamination, open circuits, or interactions-even after testing in a Mach 0.3 burner rig to 1200 C (2200 F) for several thermal cycles and at several hours at high temperatures. Large differences in thermal expansion between metal thermocouples and low-expansion materials, such as CMC's, normally generate large stresses in the wires. These stresses cause straight wires to detach, but convoluted wires that are bonded with strips of coating allow bending in the unbonded portion to relieve these expansion stresses.

  11. Fully-Automated High-Throughput NMR System for Screening of Haploid Kernels of Maize (Corn) by Measurement of Oil Content

    PubMed Central

    Xu, Xiaoping; Huang, Qingming; Chen, Shanshan; Yang, Peiqiang; Chen, Shaojiang; Song, Yiqiao

    2016-01-01

    One of the modern crop breeding techniques uses doubled haploid plants that contain an identical pair of chromosomes in order to accelerate the breeding process. A rapid haploid identification method is critical for large-scale selection of double haploids. The conventional methods based on the color of the endosperm and embryo seeds are slow, manual and prone to error. On the other hand, there exists a significant difference in oil content between diploid and haploid seeds generated by a high-oil inducer, which makes it possible to use oil content to identify haploids. This paper describes a fully-automated high-throughput NMR screening system for maize haploid kernel identification. The system is comprised of a sampler unit that selects a single kernel and feeds it for NMR and weight measurement, and a kernel sorter that distributes the kernel according to the measurement result. Tests of the system show a consistent accuracy of 94% with an average screening time of 4 seconds per kernel. Field test results are described and directions for future improvement are discussed. PMID:27454427

  12. Detecting peanuts inoculated with toxigenic and atoxigenic Aspergillus flavus strains with fluorescence hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Xing, Fuguo; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Zhu, Fengle; Brown, Robert L.; Bhatnagar, Deepak; Liu, Yang

    2017-05-01

    Aflatoxin contamination in peanut products has been an important and long-standing problem around the world. Produced mainly by Aspergillus flavus and Aspergillus parasiticus, aflatoxins are the most toxic and carcinogenic compounds among toxins. This study investigated the application of fluorescence visible near-infrared (VNIR) hyperspectral images to assess the spectral difference between peanut kernels inoculated with toxigenic and atoxigenic inocula of A. flavus and healthy kernels. Peanut kernels were inoculated with NRRL3357, a toxigenic strain of A. flavus, and AF36, an atoxigenic strain of A. flavus, respectively. Fluorescence hyperspectral images under ultraviolet (UV) excitation were recorded for peanut kernels with and without skin. Contaminated kernels exhibited different fluorescence features compared with healthy kernels. For the kernels without skin, the inoculated kernels had fluorescence peaks shifted to longer wavelengths with lower intensity than healthy kernels. In addition, the fluorescence intensity of peanuts without skin was higher than that of peanuts with skin (10 times). The fluorescence spectra of kernels with skin were significantly different from those of the control group (p<0.001). Furthermore, the fluorescence intensity of the toxigenic AF3357 peanuts with skin was lower than that of the atoxigenic AF36 group. Discriminant analysis showed that the inoculation group can be separated from the controls with 100% accuracy. However, the two inoculation groups (AF3357 vs AF36) can be separated with only ∼80% accuracy. This study demonstrated the potential of fluorescence hyperspectral imaging techniques for screening of peanut kernels contaminated with A. flavus, which could potentially lead to the production of rapid and non-destructive scanning-based detection technology for the peanut industry.

  13. An efficient and accurate molecular alignment and docking technique using ab initio quality scoring

    PubMed Central

    Füsti-Molnár, László; Merz, Kenneth M.

    2008-01-01

    An accurate and efficient molecular alignment technique is presented based on first principle electronic structure calculations. This new scheme maximizes quantum similarity matrices in the relative orientation of the molecules and uses Fourier transform techniques for two purposes. First, building up the numerical representation of true ab initio electronic densities and their Coulomb potentials is accelerated by the previously described Fourier transform Coulomb method. Second, the Fourier convolution technique is applied for accelerating optimizations in the translational coordinates. In order to avoid any interpolation error, the necessary analytical formulas are derived for the transformation of the ab initio wavefunctions in rotational coordinates. The results of our first implementation for a small test set are analyzed in detail and compared with published results of the literature. A new way of refinement of existing shape based alignments is also proposed by using Fourier convolutions of ab initio or other approximate electron densities. This new alignment technique is generally applicable for overlap, Coulomb, kinetic energy, etc., quantum similarity measures and can be extended to a genuine docking solution with ab initio scoring. PMID:18624561
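
    The translational part of the search described above rests on a standard identity: the overlap of two densities at every relative shift can be obtained with a single FFT-based correlation. The sketch below illustrates that step on arbitrary 3D arrays; it is not the authors' ab initio pipeline, and the rotational search and quantum similarity scoring are omitted.

```python
# Hedged sketch of the FFT translational search: the overlap of two densities
# at every relative (circular) shift via one FFT correlation. The arrays are
# arbitrary, not ab initio electron densities.
import numpy as np

def best_translation(rho_a, rho_b):
    """Integer shift (modulo grid size) of rho_b maximizing sum(rho_a * rolled rho_b)."""
    corr = np.fft.ifftn(np.fft.fftn(rho_a) * np.conj(np.fft.fftn(rho_b))).real
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    return shift, corr[shift]

rho_a = np.random.rand(32, 32, 32)
rho_b = np.roll(rho_a, (3, -5, 7), axis=(0, 1, 2))   # displaced copy
shift, score = best_translation(rho_a, rho_b)        # shift that re-aligns rho_b
```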

  14. Diffuse dispersive delay and the time convolution/attenuation of transients

    NASA Technical Reports Server (NTRS)

    Bittner, Burt J.

    1991-01-01

    Test data and analytic evaluations are presented to show that relatively poor 100 kHz shielding of 12 dB can effectively provide an electromagnetic pulse transient reduction of 100 dB. More importantly, several techniques are shown for lightning surge attenuation as an alternative to crowbar, spark gap, or power zener type clipping, which simply reflects the surge. A time delay test method is shown which allows CW testing, along with a convolution program to define transient shielding effectivity where the Fourier phase characteristics of the transient are known or can be broadly estimated.
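
    The convolution step mentioned above (combining a CW-measured transfer function with an assumed transient) can be sketched as frequency-domain multiplication followed by an inverse FFT. The double-exponential pulse and single-pole shielding response below are illustrative assumptions, not the report's measured data.

```python
# Hedged sketch: an assumed single-pole shielding transfer function applied
# to an assumed double-exponential transient by frequency-domain
# multiplication, i.e. time-domain convolution.
import numpy as np

fs = 1e9                                        # 1 GS/s sample rate
t = np.arange(0, 2e-6, 1 / fs)
pulse = np.exp(-t / 250e-9) - np.exp(-t / 2.5e-9)   # EMP-like double exponential

freqs = np.fft.rfftfreq(t.size, 1 / fs)
H = 1.0 / (1.0 + 1j * freqs / 100e3)            # illustrative low-pass shielding response

shielded = np.fft.irfft(np.fft.rfft(pulse) * H, n=t.size)
reduction_db = 20 * np.log10(pulse.max() / np.abs(shielded).max())
```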

  15. Person detection, tracking and following using stereo camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofeng; Zhang, Lilian; Wang, Duo; Hu, Xiaoping

    2018-04-01

    Person detection, tracking and following is a key enabling technology for mobile robots in many human-robot interaction applications. In this article, we present a system composed of visual human detection, video tracking and following. The detection is based on YOLO (You Only Look Once), which applies a single convolutional neural network (CNN) to the full image and can thus predict bounding boxes and class probabilities directly in one evaluation. The bounding box then provides the initial person position in the image to initialize and train the KCF (Kernelized Correlation Filter), a video tracker based on a discriminative classifier. Finally, by using a stereo 3D sparse reconstruction algorithm, not only is the position of the person in the scene determined, but the problem of scale ambiguity in the video tracker is also elegantly solved. Extensive experiments are conducted to demonstrate the effectiveness and robustness of our human detection and tracking system.

  16. Intertwining solutions for magnetic relativistic Hartree type equations

    NASA Astrophysics Data System (ADS)

    Cingolani, Silvia; Secchi, Simone

    2018-05-01

    We consider the magnetic pseudo-relativistic Schrödinger equation with mass m > 0, an external continuous scalar potential V, a continuous vector potential A, and a convolution kernel in the Hartree-type nonlinearity. We assume that A and V are symmetric with respect to a closed subgroup G of the group of orthogonal linear transformations of the underlying Euclidean space. If for any x the cardinality of the G-orbit of x is infinite, then we prove the existence of infinitely many intertwining solutions, assuming that the vector potential is either linear in x or uniformly bounded. The results are proved by means of a new local realization of the square root of the magnetic Laplacian as a local elliptic operator with Neumann boundary condition on a half-space. Moreover, we derive an existence result for a ground state intertwining solution for bounded vector potentials, if G admits a finite orbit.

  17. On the solution of integral equations with a generalized cauchy kernel

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    In this paper a certain class of singular integral equations that may arise from the mixed boundary value problems in nonhomogeneous materials is considered. The distinguishing feature of these equations is that, in addition to the Cauchy singularity, the kernels contain terms that are singular only at the end points. In the form of the singular integral equations adopted, the density function is a potential or a displacement and consequently the kernel has strong singularities of the form (t-x)^-2 and x^(n-2)(t+x)^-n (n >= 2, 0 < x,t < b). The complex function theory is used to determine the fundamental function of the problem for the general case, and a simple numerical technique is described to solve the integral equation. Two examples from the theory of elasticity are then considered to show the application of the technique.

  18. A kernel function method for computing steady and oscillatory supersonic aerodynamics with interference.

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1973-01-01

    The method presented uses a collocation technique with the nonplanar kernel function to solve supersonic lifting surface problems with and without interference. A set of pressure functions are developed based on conical flow theory solutions which account for discontinuities in the supersonic pressure distributions. These functions permit faster solution convergence than is possible with conventional supersonic pressure functions. An improper integral of a 3/2 power singularity along the Mach hyperbola of the nonplanar supersonic kernel function is described and treated. The method is compared with other theories and experiment for a variety of cases.

  19. Oscillatory supersonic kernel function method for interfering surfaces

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1974-01-01

    In the method presented in this paper, a collocation technique is used with the nonplanar supersonic kernel function to solve multiple lifting surface problems with interference in steady or oscillatory flow. The pressure functions used are based on conical flow theory solutions and provide faster solution convergence than is possible with conventional functions. In the application of the nonplanar supersonic kernel function, an improper integral of a 3/2 power singularity along the Mach hyperbola is described and treated. The method is compared with other theories and experiment for two wing-tail configurations in steady and oscillatory flow.

  20. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  1. [Research on the methods for multi-class kernel CSP-based feature extraction].

    PubMed

    Wang, Jinjia; Zhang, Lingzhi; Hu, Bei

    2012-04-01

    To relax the presumption of strictly linear patterns in common spatial patterns (CSP), we studied the kernel CSP (KCSP). A new multi-class KCSP (MKCSP) approach is proposed in this paper, which combines the kernel approach with the multi-class CSP technique. In this approach, we used kernel spatial patterns for each class against all others, and extracted signal components specific to one condition from EEG data sets of multiple conditions. We then performed classification using a logistic linear classifier. The brain-computer interface (BCI) competition III_3a data set was used in the experiment. Through the experiment, it can be shown that this approach can decompose the raw EEG signals into spatial patterns extracted from multiple classes of single-trial EEG, and can obtain good classification results.

  2. Does money matter in inflation forecasting?

    NASA Astrophysics Data System (ADS)

    Binner, J. M.; Tino, P.; Tepper, J.; Anderson, R.; Jones, B.; Kendall, G.

    2010-11-01

    This paper provides the most fully comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naïve random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies.
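
    A toy version of the kernel-based forecasting exercise above is sketched below: lagged inflation and money values are used as inputs to a kernel regression, and the out-of-sample error is compared with a naïve random walk forecast. Batch kernel ridge regression stands in for the paper's kernel recursive least squares predictor, and the series are synthetic.

```python
# Hedged sketch: kernel regression forecast of a synthetic "inflation" series
# from its own lags and the lags of a "money" series, benchmarked against a
# naive random walk.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
T, lags = 300, 4
money = 0.1 * np.cumsum(rng.normal(size=T))
inflation = 0.5 * np.roll(money, 3) + rng.normal(scale=0.2, size=T)

# Feature matrix: lags 1..4 of inflation and of money; target: current inflation.
cols = [np.roll(inflation, k) for k in range(1, lags + 1)] + \
       [np.roll(money, k) for k in range(1, lags + 1)]
X, y = np.column_stack(cols)[lags:], inflation[lags:]

split = int(0.8 * len(y))
krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X[:split], y[:split])
krr_mse = np.mean((krr.predict(X[split:]) - y[split:]) ** 2)
rw_mse = np.mean((X[split:, 0] - y[split:]) ** 2)   # random walk: forecast = last value
print(f"kernel ridge MSE: {krr_mse:.4f}   random walk MSE: {rw_mse:.4f}")
```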

  3. 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study.

    PubMed

    Dolz, Jose; Desrosiers, Christian; Ben Ayed, Ismail

    2018-04-15

    This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have been generally avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully CNNs. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate a state-of-the-art performance on the ISBR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Epithelium-Stroma Classification via Convolutional Neural Networks and Unsupervised Domain Adaptation in Histopathological Images.

    PubMed

    Huang, Yue; Zheng, Han; Liu, Chi; Ding, Xinghao; Rohde, Gustavo K

    2017-11-01

    Epithelium-stroma classification is a necessary preprocessing step in histopathological image analysis. Current deep learning based recognition methods for histology data require the collection of large volumes of labeled data in order to train a new neural network when there are changes to the image acquisition procedure. However, it is extremely expensive for pathologists to manually label sufficient volumes of data for each pathology study in a professional manner, which limits real-world applications. A very simple but effective deep learning method, which introduces the concept of unsupervised domain adaptation into a simple convolutional neural network (CNN), is proposed in this paper. Inspired by transfer learning, our paper assumes that the training data and testing data follow different distributions, and introduces an adaptation operation to more accurately estimate the kernels used by the CNN for feature extraction, in order to enhance performance by transferring knowledge from labeled data in the source domain to unlabeled data in the target domain. The model has been evaluated using three independent public epithelium-stroma datasets by cross-dataset validation. The experimental results demonstrate that for epithelium-stroma classification, the proposed framework outperforms the state-of-the-art deep neural network model, and it also achieves better performance than other existing deep domain adaptation methods. The proposed model can be considered a better option for real-world applications in histopathological image analysis, since there is no longer a requirement for large-scale labeled data in each specified domain.

  5. Mathematics of Computed Tomography

    NASA Astrophysics Data System (ADS)

    Hawkins, William Grant

    A review of the applications of the Radon transform is presented, with emphasis on emission computed tomography and transmission computed tomography. The theory of the 2D and 3D Radon transforms, and the effects of attenuation for emission computed tomography, are presented. The algebraic iterative methods, their importance and their limitations are reviewed. Analytic solutions of the 2D problem are reviewed: the convolution and frequency filtering methods based on linear shift-invariant theory, and the solution of the circular harmonic decomposition by integral transform theory. The relation between the invisible kernels, the inverse circular harmonic transform, and the consistency conditions is demonstrated. The discussion and review are extended to the 3D problem: convolution, frequency filtering, spherical harmonic transform solutions, and consistency conditions. The Cormack algorithm based on reconstruction with Zernike polynomials is reviewed. An analogous algorithm and set of reconstruction polynomials is developed for the spherical harmonic transform. The relations between the consistency conditions, boundary conditions and orthogonal basis functions for the 2D projection harmonics are delineated and extended to the 3D case. The equivalence of the inverse circular harmonic transform, the inverse Radon transform, and the inverse Cormack transform is presented. The use of the number of nodes of a projection harmonic as a filter is discussed. Numerical methods for the efficient implementation of angular harmonic algorithms based on orthogonal functions and stable recursion are presented. A lower bound for the signal-to-noise ratio of the Cormack algorithm is derived.
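
    The convolution/frequency-filtering solution of the 2D problem reviewed above is, in practice, filtered backprojection. A minimal sketch using scikit-image's Radon transform utilities on a synthetic phantom is given below; the phantom, scaling and angle sampling are arbitrary choices, not from this review.

```python
# Hedged sketch of filtered backprojection (the convolution / frequency-
# filtering reconstruction of the 2D Radon problem) on a synthetic phantom.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)        # 2D phantom
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=theta)               # forward Radon transform
recon = iradon(sinogram, theta=theta)              # ramp-filtered backprojection
rms_err = np.sqrt(np.mean((recon - image) ** 2))   # reconstruction error
```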

  6. Development of a Spect-Based Three-Dimensional Treatment Planner for Radionuclide Therapy with Iodine -131.

    NASA Astrophysics Data System (ADS)

    Giap, Huan Bosco

    Accurate calculation of absorbed dose to target tumors and normal tissues in the body is an important requirement for establishing fundamental dose-response relationships for radioimmunotherapy. Two major obstacles have been the difficulty in obtaining an accurate patient-specific 3-D activity map in-vivo and calculating the resulting absorbed dose. This study investigated a methodology for 3-D internal dosimetry, which integrates the 3-D biodistribution of the radionuclide acquired from SPECT with a dose-point kernel convolution technique to provide the 3-D distribution of absorbed dose. Accurate SPECT images were reconstructed with appropriate methods for noise filtering, attenuation correction, and Compton scatter correction. The SPECT images were converted into activity maps using a calibration phantom. The activity map was convolved with an ^{131}I dose-point kernel using a 3-D fast Fourier transform to yield a 3-D distribution of absorbed dose. The 3-D absorbed dose map was then processed to provide the absorbed dose distribution in regions of interest. This methodology can provide heterogeneous distributions of absorbed dose in volumes of any size and shape with nonuniform distributions of activity. Comparison of the activities quantitated by our SPECT methodology to true activities in an Alderson abdominal phantom (with spleen, liver, and spherical tumor) yielded errors of -16.3% to 4.4%. Volume quantitation errors ranged from -4.0 to 5.9% for volumes greater than 88 ml. The percentage differences of the average absorbed dose rates calculated by this methodology and the MIRD S-values were 9.1% for liver, 13.7% for spleen, and 0.9% for the tumor. Good agreement (percent differences were less than 8%) was found between the absorbed dose due to penetrating radiation calculated from this methodology and TLD measurement. More accurate estimates of the 3 -D distribution of absorbed dose can be used as a guide in specifying the minimum activity to be administered to patients to deliver a prescribed absorbed dose to tumor without exceeding the toxicity limits of normal tissues.
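
    The central numerical step of this methodology, convolving a 3D activity map with a dose-point kernel via FFT, can be sketched as follows. The exponential kernel, grid size and activity distribution are placeholders, not a real 131I dose-point kernel or SPECT data.

```python
# Hedged sketch: 3D FFT convolution of an activity map with a radially
# symmetric dose-point kernel to obtain an absorbed-dose-rate map.
import numpy as np
from scipy.signal import fftconvolve

voxel_mm = 4.0
activity = np.zeros((64, 64, 64))          # toy activity map (e.g. MBq per voxel)
activity[28:36, 28:36, 28:36] = 1.0        # a uniform "tumor" block

# Placeholder point kernel: dose rate per unit activity vs. distance.
half = 15
z, y, x = np.mgrid[-half:half + 1, -half:half + 1, -half:half + 1] * voxel_mm
r = np.sqrt(x ** 2 + y ** 2 + z ** 2) + voxel_mm / 2.0
kernel = np.exp(-r / 2.0) / r ** 2         # arbitrary fall-off, for illustration only
kernel /= kernel.sum()

dose_rate = fftconvolve(activity, kernel, mode="same")   # 3D absorbed-dose-rate map
```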

  7. Theoretical study of the influence of a heterogeneous activity distribution on intratumoral absorbed dose distribution.

    PubMed

    Bao, Ande; Zhao, Xia; Phillips, William T; Woolley, F Ross; Otto, Randal A; Goins, Beth; Hevezi, James M

    2005-01-01

    Radioimmunotherapy of hematopoietic cancers and micrometastases has been shown to have significant therapeutic benefit. The treatment of solid tumors with radionuclide therapy has been less successful. Previous investigations of intratumoral activity distribution and studies on intratumoral drug delivery suggest that a probable reason for the disappointing results in solid tumor treatment is nonuniform intratumoral distribution coupled with restricted intratumoral drug penetrance, thus inhibiting antineoplastic agents from reaching the tumor's center. This paper describes a nonuniform intratumoral activity distribution identified by limited radiolabeled tracer diffusion from tumor surface to tumor center. This activity was simulated using techniques that allowed the absorbed dose distributions to be estimated using different intratumoral diffusion capabilities and calculated for tumors of varying diameters. The influences of these absorbed dose distributions on solid tumor radionuclide therapy are also discussed. The absorbed dose distribution was calculated using the dose point kernel method, which provided for the application of a three-dimensional (3D) convolution between a dose rate kernel function and an activity distribution function. These functions were incorporated into 3D matrices with voxels measuring 0.10 x 0.10 x 0.10 mm3. Fast Fourier transform (FFT), multiplication in the frequency domain, and inverse FFT (iFFT) were then used to carry out this phase of the dose calculation. The absorbed dose distributions for tumors of 1, 3, 5, 10, and 15 mm in diameter were studied. Using the therapeutic radionuclides 131I, 186Re, 188Re, and 90Y, the total average dose, center dose, and surface dose for each of the different tumor diameters were reported. The absorbed dose in the nearby normal tissue was also evaluated. When the tumor diameters exceed 15 mm, a much lower tumor center dose is delivered compared with tumors between 3 and 5 mm in diameter. Based on these findings, the use of higher beta-energy radionuclides, such as 188Re and 90Y, is more effective in delivering a higher absorbed dose to the tumor center at tumor diameters around 10 mm.

  8. An automatic optimum kernel-size selection technique for edge enhancement

    USGS Publications Warehouse

    Chavez, Pat S.; Bauer, Brian P.

    1982-01-01

    Edge enhancement is a technique that can be considered, to a first order, a correction for the modulation transfer function of an imaging system. Digital imaging systems sample a continuous function at discrete intervals so that high-frequency information cannot be recorded at the same precision as lower frequency data. Because of this, fine detail or edge information in digital images is lost. Spatial filtering techniques can be used to enhance the fine detail information that does exist in the digital image, but the filter size is dependent on the type of area being processed. A technique has been developed by the authors that uses the horizontal first difference to automatically select the optimum kernel-size that should be used to enhance the edges that are contained in the image. 
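
    A hedged sketch of the idea in this record appears below: the horizontal first difference serves as a simple measure of local detail, and its magnitude is mapped to an edge-enhancement kernel size. The thresholds and sizes are illustrative; the paper's actual selection rule is not reproduced.

```python
# Hedged sketch: use the mean horizontal first difference as a "busyness"
# statistic and map it to a kernel size. Thresholds are illustrative only.
import numpy as np

def suggest_kernel_size(image):
    first_diff = np.abs(np.diff(image.astype(float), axis=1))
    busyness = first_diff.mean()          # average horizontal first difference
    # Larger kernels for smooth (low-busyness) scenes, smaller for busy ones.
    if busyness < 2.0:
        return 9
    elif busyness < 8.0:
        return 5
    return 3
```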

  9. Fixed and Data Adaptive Kernels in Cohen’s Class of Time-Frequency Distributions

    DTIC Science & Technology

    1992-09-01

    This record excerpt includes a MATLAB routine, PS = wvd(data, winlen, step, begin, theend), which returns the Wigner-Ville time-frequency distribution for the input data after the signal is translated into its associated analytic signal, along with table-of-contents entries on fixed kernel distributions (IV. Fixed Kernel Distributions; A. Wigner-Ville Distribution).
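
    For readers without the original MATLAB code, a small numpy sketch of a basic (unsmoothed) discrete Wigner-Ville distribution is given below; it follows the usual textbook construction and does not attempt to reproduce the thesis's wvd routine or its windowing parameters.

```python
# Hedged sketch of a basic discrete Wigner-Ville distribution of an analytic
# signal (no smoothing kernel).
import numpy as np
from scipy.signal import hilbert

def wvd(signal):
    """Return the WVD (rows: frequency, cols: time)."""
    x = hilbert(np.asarray(signal, dtype=float))   # analytic signal
    n = x.size
    acf = np.zeros((n, n), dtype=complex)          # instantaneous autocorrelation
    for t in range(n):
        taumax = min(t, n - 1 - t)
        tau = np.arange(-taumax, taumax + 1)
        acf[t, tau % n] = x[t + tau] * np.conj(x[t - tau])
    return np.real(np.fft.fft(acf, axis=1)).T

fs = 256
t = np.arange(fs) / fs
chirp = np.cos(2 * np.pi * (20 * t + 40 * t ** 2))  # linear chirp test signal
W = wvd(chirp)                                       # frequency x time image
```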

  10. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    NASA Astrophysics Data System (ADS)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for the tasks, in which the performance is heavily dependent on the feature selection techniques, like the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders, which progresses slowly while affects the quality of life dramatically. In this paper, we use the data acquired from multi-modal neuroimaging data to diagnose PD by investigating the brain regions, known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions, specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.

  11. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    PubMed Central

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for the tasks, in which the performance is heavily dependent on the feature selection techniques, like the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders, which progresses slowly while affects the quality of life dramatically. In this paper, we use the data acquired from multi-modal neuroimaging data to diagnose PD by investigating the brain regions, known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions, specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods. PMID:28120883

  12. Forecasting landslide activations by means of GA-SAKe. An example of application to three case studies in Calabria (Southern Italy)

    NASA Astrophysics Data System (ADS)

    Iovine, Giulio G. R.; De Rango, Alessio; Gariano, Stefano L.; Terranova, Oreste G.

    2016-04-01

    GA-SAKe - the Genetic-Algorithm based release of the hydrological model SAKe (Self Adaptive Kernel) - allows the timing of landslide activations to be forecast [1, 2], based on dates of past activations and rainfall series. The model can be applied to either a single landslide or a set of similar landslides in a homogeneous context. Calibration of the model is performed through a genetic algorithm and provides families of optimal, discretized solutions (kernels) that maximize the fitness function. The mobility functions are obtained through convolution of the optimal kernels with the rain series. The shape of the kernel, including its base time, is related to the magnitude of the landslide and the hydro-geological complexity of the slope. Once validated, the model can be applied to estimate the timing of future landslide activations in the same study area, by employing measured or forecasted rainfall. GA-SAKe is here employed to analyse the historical activations of three rock slides in Calabria (Southern Italy) threatening villages and main infrastructures. In particular: 1) the Acri-Serra di Buda case, developed within a Sackung and involving weathered crystalline and metamorphic rocks, with 6 dates of activation available; 2) the San Fili-Uncino case, developed in clay and conglomerate overlying gneiss and biotitic schist, with 7 dates of activation available [2]; 3) the San Benedetto Ullano-San Rocco case, developed in weathered metamorphic rocks, with 3 dates of activation available [1, 3, 4, 5]. The obtained results are quite promising, given the high performance of the model against slope movements characterized by numerous historical activations. The results, in terms of shape and base time of the kernels, are compared by taking into account the types and sizes of the considered case studies and the rock types involved. References [1] Terranova O.G., Iaquinta P., Gariano S.L., Greco R. & Iovine G. (2013) In: Landslide Science and Practice, Margottini, Canuti, Sassa (Eds.), Vol. 3, pp. 73-79. [2] Terranova O.G., Gariano S.L., Iaquinta P. & Iovine G.G.R. (2015). Geosci. Model Dev., 8, 1955-1978. [3] Iovine G., Iaquinta P. & Terranova O. (2009). In Anderssen, Braddock & Newham (Eds.), Proc. 18th World IMACS Congr. and MODSIM09 Int. Congr. on Modelling and Simulation, pp. 2686-2693. [4] Iovine G., Lollino P., Gariano S.L. & Terranova O.G. (2010). NHESS, 10, 2341-2354. [5] Capparelli G., Iaquinta P., Iovine G., Terranova O.G. & Versace P. (2012). Natural Hazards, 61(1), pp. 247-256.
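
    As a schematic illustration of the convolution step described above (not the GA-SAKe code itself), the sketch below convolves a discretized kernel with a daily rainfall series to obtain a mobility function and flags the days on which it exceeds a threshold; the kernel shape, base time, and threshold are hypothetical.

      import numpy as np

      def mobility_function(rain, kernel):
          """Convolve a daily rainfall series with a discretized kernel.
          Only past rainfall contributes to the mobility on a given day."""
          return np.convolve(rain, kernel, mode="full")[: len(rain)]

      # hypothetical decaying kernel with a 30-day base time
      base_time = 30
      kernel = np.exp(-np.arange(base_time) / 10.0)
      kernel /= kernel.sum()

      # synthetic daily rainfall series (mm), one year
      rain = np.random.default_rng(1).gamma(shape=0.5, scale=10.0, size=365)
      mobility = mobility_function(rain, kernel)

      threshold = np.quantile(mobility, 0.99)      # hypothetical critical level
      alert_days = np.flatnonzero(mobility >= threshold)
      print("days exceeding the critical mobility:", alert_days)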

  13. Adapting line integral convolution for fabricating artistic virtual environment

    NASA Astrophysics Data System (ADS)

    Lee, Jiunn-Shyan; Wang, Chung-Ming

    2003-04-01

    Vector fields occur not only in scientific applications but also in treasured art such as sculpture and painting, where artists depict the natural environment by stressing directional features in addition to color and shape. Line integral convolution (LIC), developed for imaging vector fields in scientific visualization, has the potential to produce directional imagery. In this paper we present several techniques that exploit LIC to generate impressionistic images forming an artistic virtual environment. We take advantage of the directional information given by a photograph and incorporate several extensions, including a non-photorealistic shading technique and statistical detail control. In particular, the non-photorealistic shading technique blends cool and warm colors into the photograph to imitate painters' conventions, while the statistical technique controls the integration length according to local image variance in order to preserve detail. Furthermore, we propose a method for generating a series of mip-maps that maintain constant stroke width under multi-resolution viewing and achieve frame coherence in an interactive walkthrough system. The experimental results show that the approach is both visually convincing and computationally efficient; consequently, the proposed technique supports a wide range of non-photorealistic rendering (NPR) applications, such as interactive virtual environments with artistic perception.
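
    A minimal line integral convolution sketch is given below to make the underlying operation concrete; it streamlines a noise texture along a vector field with fixed-length integration and is not the stroke-controlled, mip-mapped variant described in the abstract. Field, texture, and integration length are illustrative.

      import numpy as np

      def lic(vx, vy, noise, length=15):
          """Basic LIC: average the noise texture along short streamlines
          of the (vx, vy) field, traced in both directions."""
          h, w = noise.shape
          out = np.zeros_like(noise)
          for y in range(h):
              for x in range(w):
                  acc, cnt = 0.0, 0
                  for sign in (+1.0, -1.0):
                      px, py = float(x), float(y)
                      for _ in range(length):
                          i, j = int(round(py)), int(round(px))
                          if not (0 <= i < h and 0 <= j < w):
                              break
                          acc += noise[i, j]
                          cnt += 1
                          norm = np.hypot(vx[i, j], vy[i, j]) + 1e-9
                          px += sign * vx[i, j] / norm   # unit step along the field
                          py += sign * vy[i, j] / norm
                  out[y, x] = acc / max(cnt, 1)
          return out

      # circular flow imaged over a random noise texture
      ys, xs = np.mgrid[0:128, 0:128]
      vx, vy = -(ys - 64.0), (xs - 64.0)
      image = lic(vx, vy, np.random.default_rng(2).random((128, 128)))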

  14. Scatter correction for cone-beam computed tomography using self-adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Xie, Shi-Peng; Luo, Li-Min

    2012-06-01

    The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies; the present method differs in that a scatter detecting blocker (SDB) is used between the X-ray source and the tested object to model a self-adaptive scatter kernel. This study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on the SKS. Image quality can then be improved by removing the scatter distribution. The results show that the method effectively reduces scatter artifacts and increases image quality. Our approach increases the image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique is significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single scan acquisition.
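
    As a rough sketch of the scatter kernel superposition idea (not the authors' self-adaptive, SDB-based implementation), the example below estimates a scatter distribution by convolving an estimate of the primary projection with a broad scatter kernel and subtracts it, iterating a few times so that the scatter estimate is driven by the scatter-free primary. The kernel width, scatter weight, and projection data are placeholders.

      import numpy as np
      from scipy.signal import fftconvolve

      def sks_correct(projection, kernel, weight=0.3, n_iter=5):
          """Scatter kernel superposition (schematic): estimate scatter as
          weight * (primary convolved with kernel) and remove it."""
          primary = projection.copy()
          for _ in range(n_iter):
              scatter = weight * fftconvolve(primary, kernel, mode="same")
              primary = np.clip(projection - scatter, 0.0, None)
          return primary, scatter

      # broad Gaussian scatter kernel (hypothetical width), normalized to unit sum
      x = np.arange(-64, 65)
      g = np.exp(-(x ** 2) / (2 * 20.0 ** 2))
      kernel = np.outer(g, g)
      kernel /= kernel.sum()

      projection = np.random.default_rng(3).random((256, 256))   # stand-in projection
      corrected, scatter = sks_correct(projection, kernel)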

  15. Refinement of Methods for Evaluation of Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, Patricia W.; Khayat, Michael A.; Wilton, Donald R.

    2006-01-01

    In this paper, we present advances in singularity cancellation techniques applied to integrals in BEM formulations that are nearly hypersingular. Significant advances have been made recently in singularity cancellation techniques applied to 1/R-type kernels [M. Khayat, D. Wilton, IEEE Trans. Antennas and Prop., 53, pp. 3180-3190, 2005], as well as to the gradients of these kernels [P. Fink, D. Wilton, and M. Khayat, Proc. ICEAA, pp. 861-864, Torino, Italy, 2005] on curved subdomains. In these approaches, the source triangle is divided into three tangent subtriangles with a common vertex at the normal projection of the observation point onto the source element or the extended surface containing it. The geometry of a typical tangent subtriangle and its local rectangular coordinate system with origin at the projected observation point is shown in Fig. 1. Whereas singularity cancellation techniques for 1/R-type kernels are now nearing maturity, the efficient handling of near-hypersingular kernels still needs attention. For example, in the gradient reference above, techniques are presented for computing the normal component of the gradient relative to the plane containing the tangent subtriangle. These techniques, summarized in the transformations in Table 1, are applied at the sub-triangle level and correspond particularly to the case in which the normal projection of the observation point lies within the boundary of the source element. They are found to be highly efficient as z approaches zero. Here, we extend the approach to cover two instances not previously addressed. First, we consider the case in which the normal projection of the observation point lies external to the source element. For such cases, we find that simple modifications to the transformations of Table 1 permit significant savings in computational cost. Second, we present techniques that permit accurate computation of the tangential components of the gradient, i.e., tangent to the plane containing the source element.

  16. Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas.

    PubMed

    Chang, P; Grinband, J; Weinberg, B D; Bardis, M; Khy, M; Cadena, G; Su, M-Y; Cha, S; Filippi, C G; Bota, D; Baldi, P; Poisson, L M; Jain, R; Chow, D

    2018-05-10

    The World Health Organization has recently placed new emphasis on the integration of genetic information for gliomas. While tissue sampling remains the criterion standard, noninvasive imaging techniques may provide complementary insight into clinically relevant genetic mutations. Our aim was to train a convolutional neural network to independently predict underlying molecular genetic mutation status in gliomas with high accuracy and identify the most predictive imaging features for each mutation. MR imaging data and molecular information were retrospectively obtained from The Cancer Imaging Archives for 259 patients with either low- or high-grade gliomas. A convolutional neural network was trained to classify isocitrate dehydrogenase 1 (IDH1) mutation status, 1p/19q codeletion, and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status. Principal component analysis of the final convolutional neural network layer was used to extract the key imaging features critical for successful classification. Classification had high accuracy: IDH1 mutation status, 94%; 1p/19q codeletion, 92%; and MGMT promoter methylation status, 83%. Each genetic category was also associated with distinctive imaging features such as definition of tumor margins, T1 and FLAIR suppression, extent of edema, extent of necrosis, and textural features. Our results indicate that for The Cancer Imaging Archives dataset, machine-learning approaches allow classification of individual genetic mutations of both low- and high-grade gliomas. We show that relevant MR imaging features acquired from an added dimensionality-reduction technique demonstrate that neural networks are capable of learning key imaging components without prior feature selection or human-directed training. © 2018 by American Journal of Neuroradiology.

  17. Reduced-Order Models Based on Linear and Nonlinear Aerodynamic Impulse Responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1999-01-01

    This paper discusses a method for the identification and application of reduced-order models based on linear and nonlinear aerodynamic impulse responses. The Volterra theory of nonlinear systems and an appropriate kernel identification technique are described. Insight into the nature of kernels is provided by applying the method to the nonlinear Riccati equation in a non-aerodynamic application. The method is then applied to a nonlinear aerodynamic model of an RAE 2822 supercritical airfoil undergoing plunge motions using the CFL3D Navier-Stokes flow solver with the Spalart-Allmaras turbulence model. Results demonstrate the computational efficiency of the technique.

  18. Reduced Order Models Based on Linear and Nonlinear Aerodynamic Impulse Responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1999-01-01

    This paper discusses a method for the identification and application of reduced-order models based on linear and nonlinear aerodynamic impulse responses. The Volterra theory of nonlinear systems and an appropriate kernel identification technique are described. Insight into the nature of kernels is provided by applying the method to the nonlinear Riccati equation in a non-aerodynamic application. The method is then applied to a nonlinear aerodynamic model of an RAE 2822 supercritical airfoil undergoing plunge motions using the CFL3D Navier-Stokes flow solver with the Spalart-Allmaras turbulence model. Results demonstrate the computational efficiency of the technique.

  19. A fast and objective multidimensional kernel density estimation method: fastKDE

    DOE PAGES

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...

    2016-03-07

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
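
    The fastKDE package itself is not shown here; as a point of reference for the abstract, the sketch below is a minimal bivariate Gaussian KDE with SciPy, from which a conditional PDF is derived on a grid. The bandwidth follows Scott's rule rather than the objective Bernacchia-Pigolotti criterion, and the sample and grid sizes are illustrative.

      import numpy as np
      from scipy.stats import gaussian_kde

      # correlated bivariate sample
      rng = np.random.default_rng(4)
      x = rng.normal(size=10_000)
      y = 0.8 * x + 0.6 * rng.normal(size=10_000)

      # Gaussian KDE with Scott's-rule bandwidth (not the objective fastKDE bandwidth)
      kde = gaussian_kde(np.vstack([x, y]))

      # evaluate the joint PDF on a grid and derive a conditional PDF p(y | x = 1)
      xx, yy = np.meshgrid(np.linspace(-4, 4, 101), np.linspace(-4, 4, 101))
      pdf = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
      col = np.argmin(np.abs(xx[0] - 1.0))
      conditional = pdf[:, col] / np.trapz(pdf[:, col], yy[:, col])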

  20. Evaluation of a scattering correction method for high energy tomography

    NASA Astrophysics Data System (ADS)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of the photons scattered by the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution of the scattered photons results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity and thus an underestimation of absorption, which produces artifacts such as cupping, shading, and streaks in the reconstructed images. Moreover, the scattered radiation introduces a bias in quantitative tomographic reconstruction (for example, atomic number and volumic mass measurement with the dual-energy technique). The effect can be significant and difficult to handle in the MeV energy range with large objects, due to the higher scatter-to-primary ratio (SPR). Additionally, the incident high-energy photons that are scattered by the Compton effect are more forward directed and hence more likely to reach the detector. Moreover, in the MeV energy range, the contribution of photons produced by pair production and the Bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses a continuously thickness-adapted kernels method. The analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps used to correct the projections. This approach has proved to be efficient in producing better sampling of the kernels with respect to the object thickness. The technique is applicable over a wide range of imaging conditions and, since no extra hardware is required, it is particularly advantageous in cases where experimental complexity must be avoided. This approach has been previously tested successfully in the energy range of 100 keV - 6 MeV. In this paper, the kernels are simulated using MCNP in order to take into account both photon and electron processes in the scattered-radiation contribution. We present scatter correction results on a large object scanned with a 9 MeV linear accelerator.

  1. Higher-order phase transitions on financial markets

    NASA Astrophysics Data System (ADS)

    Kasprzak, A.; Kutner, R.; Perelló, J.; Masoliver, J.

    2010-08-01

    Statistical and thermodynamic properties of the anomalous multifractal structure of random interevent (or intertransaction) times were thoroughly studied by using the extended continuous-time random walk (CTRW) formalism of Montroll, Weiss, Scher, and Lax. Although this formalism is quite general (and can be applied to any interhuman communication with nontrivial priority), we consider it in the context of a financial market, where heterogeneous agent activities can occur within a wide spectrum of time scales. As the main general consequence, we found (by additionally using the Saddle-Point Approximation) the scaling or power-dependent form of the partition function, Z(q'). It diverges for any negative scaling power q' (which justifies the name anomalous), while for positive ones it shows scaling with the general exponent τ(q'). This exponent is a nonanalytic (singular) or noninteger power of q', which is one of the pillars of higher-order phase transitions. In the definition of the partition function we used the pausing-time distribution (PTD) as the central quantity; it takes the form of a convolution (or superstatistics, used, e.g., for describing turbulence as well as the financial market). Its integral kernel is given by the stretched exponential distribution (often used in disordered systems). This kernel extends both the exponential distribution assumed in the original version of the CTRW formalism (for the description of the transient photocurrent measured in amorphous glassy material) and the Gaussian one sometimes used in this context (e.g., for diffusion of hydrogen in amorphous metals or for aging effects in glasses). Our most important finding is the third- and higher-order phase transitions, which can be roughly interpreted as transitions between the phase where high-frequency trading is most visible and the phase defined by low-frequency trading. The specific order of the phase transition depends directly on the shape exponent α defining the stretched exponential integral kernel. On this basis a simple practical hint for investors is formulated.

  2. Treatment planning for internal emitter therapy: Methods, applications and clinical implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sgouros, G.

    1999-01-01

    Treatment planning involves three basic steps: (1) a procedure must be devised that will provide the most relevant information, (2) the procedure must be applied and (3) the resulting information must be translated into a definition of the optimum implementation. There are varying degrees of treatment planning that may be implemented in internal emitter therapy. As in chemotherapy, the information from a Phase 1 study may be used to treat patients based upon body surface area. If treatment planning is included on a patient-specific basis, a pretherapy, trace-labeled administration of the radiopharmaceutical is generally required. The data collected following the tracer dose may range from time-activity curves of blood and whole-body for use in blood, marrow or total body absorbed dose estimation to patient imaging for three-dimensional internal emitter dosimetry. The most ambitious approach requires a three-dimensional set of images representing radionuclide distribution (SPECT or PET) and a corresponding set of images representing anatomy (CT or MRI). The absorbed dose (or dose-rate) distribution may be obtained by convolution of a point kernel with the radioactivity distribution or by direct Monte Carlo calculation. A critical requirement for both techniques is the development of an overall structure that makes it possible, in a routine manner, to input the images, to identify the structures of interest and to display the results of the dose calculations in a clinically relevant manner. 52 refs., 4 figs., 1 tab.
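
    To make the point-kernel convolution step concrete, here is a minimal sketch that convolves a voxelized activity distribution with a radially symmetric dose point kernel using FFTs; the kernel shape, fall-off, and voxel values are placeholders, not clinical data.

      import numpy as np
      from scipy.signal import fftconvolve

      # voxelized cumulated-activity map (placeholder values)
      activity = np.zeros((64, 64, 64))
      activity[28:36, 28:36, 28:36] = 1.0

      # radially symmetric dose point kernel on the same voxel grid (placeholder shape)
      z, y, x = np.mgrid[-15:16, -15:16, -15:16].astype(float)
      r = np.sqrt(x**2 + y**2 + z**2) + 0.5      # avoid the singular voxel at r = 0
      kernel = 1.0 / r**2                        # schematic 1/r^2 fall-off, arbitrary units
      kernel /= kernel.sum()

      # absorbed-dose (or dose-rate) distribution as a 3-D convolution
      dose = fftconvolve(activity, kernel, mode="same")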

  3. A New Technique in Surgical Management of the Giant Cerebral Hydatid Cysts.

    PubMed

    Aydin, Mehmet Dumlu; Karaavci, Nuh Cagri; Akyuz, Mehmet Emin; Sahin, Mehmet Hakan; Zeynal, Mete; Kanat, Ayhan; Altinors, Mehmet Nur

    2018-05-01

    In hydatid disease, the central nervous system is affected in approximately 2% to 3% of patients, and surgical management in these patients is important. The aim was to develop a surgical technique that avoids the formation of a large cavity after hydatid cyst removal and prevents the complications associated with brain collapse and cortical convolution. In 2 patients, hydatid cysts were delivered by this new technique: a balloon filled with 150 cc of sterile air/distilled water was placed in the cavity until the balloon filled the entire cavity. Air/distilled water evacuation was continued at a rate of 20 cc/d and, after a week, the balloons were removed. All cysts were delivered without rupture. Neurologic outcomes were good. No complications related to the use of the system, such as balloon rupture, evacuation problems, or infection, were observed. The authors believe that the balloon insertion technique may be a useful method to prevent brain collapse, cortical convolution, and the complications associated with this condition. Further technical refinements of the system are needed for better results.

  4. New Syndrome Decoding Techniques for the (n, K) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    This paper presents a new syndrome decoding algorithm for the (n,k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example illustrating the new decoding algorithm is given for the binary nonsystematic (3,1) CC.
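
    The decoding algorithm itself is not reproduced here; as background for the (3,1) CC setting, the sketch below is a generic binary nonsystematic rate-1/3 convolutional encoder. The generator polynomials and constraint length are illustrative, not those of the paper's example.

      import numpy as np

      def conv_encode(bits, generators, constraint_len):
          """Encode a bit sequence with a rate-1/n binary convolutional code.
          generators : generator polynomials (octal), one per output stream."""
          taps = [np.array([(g >> i) & 1 for i in range(constraint_len)][::-1])
                  for g in generators]
          state = np.zeros(constraint_len, dtype=int)
          out = []
          for b in bits:
              state = np.roll(state, 1)      # shift register; state[0] is the newest bit
              state[0] = b
              out.extend(int(np.dot(t, state) % 2) for t in taps)
          return out

      # illustrative (3,1) code with constraint length 3 (octal generators 7, 6, 5)
      coded = conv_encode([1, 0, 1, 1, 0], generators=[0o7, 0o6, 0o5], constraint_len=3)
      print(coded)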

  5. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.

  6. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in the research areas of science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolutional neural networks (CNNs), a well-known deep learning method, are still rarely investigated. Deep learning is a type of machine learning technique that aims to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task that a human can. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNNs. The performances of these metaheuristic methods in optimizing CNNs on the MNIST and CIFAR datasets were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods increase the computation time, they also improve accuracy (by up to 7.14 percent). PMID:27375738

  7. Metaheuristic Algorithms for Convolution Neural Network.

    PubMed

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in the research areas of science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolutional neural networks (CNNs), a well-known deep learning method, are still rarely investigated. Deep learning is a type of machine learning technique that aims to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task that a human can. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNNs. The performances of these metaheuristic methods in optimizing CNNs on the MNIST and CIFAR datasets were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods increase the computation time, they also improve accuracy (by up to 7.14 percent).

  8. View-interpolation of sparsely sampled sinogram using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Lee, Hoyeon; Lee, Jongha; Cho, Suengryong

    2017-02-01

    Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. The sparse-view CT technique is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, when advanced iterative image reconstruction is used, albeit with varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is the streak artifact in the reconstructed images. Another approach to sparse-view CT imaging uses interpolation methods to fill in the missing view data and then reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the widely used deep-learning methods, to find missing projection data, and compared its performance with other interpolation techniques.

  9. Detection of white matter lesion regions in MRI using SLIC0 and convolutional neural network.

    PubMed

    Diniz, Pedro Henrique Bandeira; Valente, Thales Levi Azevedo; Diniz, João Otávio Bandeira; Silva, Aristófanes Corrêa; Gattass, Marcelo; Ventura, Nina; Muniz, Bernardo Carvalho; Gasparetto, Emerson Leandro

    2018-04-19

    White matter lesions are non-static brain lesions that have a prevalence rate of up to 98% in the elderly population. Because they may be associated with several brain diseases, it is important that they are detected as soon as possible. Magnetic Resonance Imaging (MRI) provides three-dimensional data with the possibility to detect and emphasize contrast differences in soft tissues, providing rich information about human soft tissue anatomy. However, the amount of data provided by these images is far too much for manual analysis/interpretation, representing a difficult and time-consuming task for specialists. This work presents a computational methodology capable of detecting regions of white matter lesions of the brain in MRI of the FLAIR modality. The techniques highlighted in this methodology are SLIC0 clustering for candidate segmentation and convolutional neural networks for candidate classification. The methodology proposed here consists of four steps: (1) image acquisition, (2) image preprocessing, (3) candidate segmentation and (4) candidate classification. The methodology was applied to 91 magnetic resonance images provided by DASA, and achieved an accuracy of 98.73%, specificity of 98.77% and sensitivity of 78.79%, with 0.005 false positives and no false-positive reduction technique, in the detection of white matter lesion regions. This demonstrates the feasibility of analyzing brain MRI using SLIC0 and convolutional neural network techniques to successfully detect white matter lesion regions. Copyright © 2018. Published by Elsevier B.V.

  10. Skin dose mapping for non-uniform x-ray fields using a backscatter point spread function

    NASA Astrophysics Data System (ADS)

    Vijayan, Sarath; Xiong, Zhenyu; Shankar, Alok; Rudin, Stephen; Bednarek, Daniel R.

    2017-03-01

    Beam shaping devices like ROI attenuators and compensation filters modulate the intensity distribution of the x-ray beam incident on the patient. This results in a spatial variation of skin dose due to the variation of primary radiation and also a variation in backscattered radiation from the patient. To determine the backscatter component, backscatter point spread functions (PSFs) are generated using EGS Monte-Carlo software. For this study, PSFs were determined by simulating a 1 mm beam incident on the lateral surface of an anthropomorphic head phantom and a 20 cm thick PMMA block phantom. The backscatter PSFs for the head phantom and PMMA phantom are curve-fit with a Lorentzian function after being normalized to the primary dose intensity (PSFn). PSFn is convolved with the primary dose distribution to generate the scatter dose distribution, which is added to the primary to obtain the total dose distribution. The backscatter convolution technique is incorporated in the dose tracking system (DTS), which tracks skin dose during fluoroscopic procedures and provides a color map of the dose distribution on a 3D patient graphic model. A convolution technique is developed to determine the backscatter dose at the nonuniformly spaced graphic-model surface vertices. A Gafchromic film validation was performed for shaped x-ray beams generated with an ROI attenuator and with two compensation filters inserted into the field. The total dose distribution calculated by the backscatter convolution technique closely agreed with that measured with the film.
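
    As a schematic version of the convolution step described above (not the DTS implementation), the sketch below normalizes a Lorentzian backscatter PSF to the primary intensity and convolves it with a shaped primary dose map to obtain a total (primary plus backscatter) skin-dose map; the field shape, Lorentzian width, and scatter fraction are hypothetical.

      import numpy as np
      from scipy.signal import fftconvolve

      # shaped primary dose map: full field with an attenuated region (hypothetical)
      primary = np.ones((200, 200))
      primary[:, :80] *= 0.2                       # stand-in for an ROI attenuator region

      # Lorentzian backscatter PSF normalized to the primary dose intensity (PSFn)
      y, x = np.mgrid[-50:51, -50:51].astype(float)
      gamma_mm = 8.0                               # hypothetical Lorentzian half-width
      psf = 1.0 / (1.0 + (x**2 + y**2) / gamma_mm**2)
      scatter_fraction = 0.35                      # hypothetical backscatter-to-primary ratio
      psf_n = scatter_fraction * psf / psf.sum()

      backscatter = fftconvolve(primary, psf_n, mode="same")
      total_dose = primary + backscatter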

  11. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons

    NASA Astrophysics Data System (ADS)

    Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.

    2016-02-01

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm-1) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  12. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons.

    PubMed

    Sanchez-Parcerisa, D; Cortés-Giraldo, M A; Dolney, D; Kondrla, M; Fager, M; Carabe, A

    2016-02-21

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm(-1)) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  13. Semantic segmentation of mFISH images using convolutional networks.

    PubMed

    Pardo, Esteban; Morgado, José Mário T; Malpica, Norberto

    2018-04-30

    Multicolor in situ hybridization (mFISH) is a karyotyping technique used to detect major chromosomal alterations using fluorescent probes and imaging techniques. Manual interpretation of mFISH images is a time-consuming step that can be automated using machine learning; in previous works, pixel- or patch-wise classification was employed, overlooking spatial information which can help identify chromosomes. In this work, we propose a fully convolutional semantic segmentation network for the interpretation of mFISH images, which uses both spatial and spectral information to classify each pixel in an end-to-end fashion. The semantic segmentation network developed was tested on samples extracted from a public dataset using cross validation. Despite having no labeling information from the image it was tested on, our algorithm yielded an average correct classification ratio (CCR) of 87.41%. Previously, this level of accuracy was only achieved with state-of-the-art algorithms when classifying pixels from the same image on which the classifier had been trained. These results provide evidence that fully convolutional semantic segmentation networks may be employed in the computer-aided diagnosis of genetic diseases with improved performance over current image analysis methods. © 2018 International Society for Advancement of Cytometry.

  14. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    PubMed

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.

  15. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with reproducing kernel structures adapted to the metrics of such a solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  16. Spectral Kernel Approach to Study Radiative Response of Climate Variables and Interannual Variability of Reflected Solar Spectrum

    NASA Technical Reports Server (NTRS)

    Jin, Zhonghai; Wielicki, Bruce A.; Loukachine, Constantin; Charlock, Thomas P.; Young, David; Noeel, Stefan

    2011-01-01

    The radiative kernel approach provides a simple way to separate the radiative response to different climate parameters and to decompose the feedback into radiative and climate response components. Using CERES/MODIS/Geostationary data, we calculated and analyzed the solar spectral reflectance kernels for various climate parameters on zonal, regional, and global spatial scales. The kernel linearity is tested. Errors in the kernel due to nonlinearity can vary strongly depending on climate parameter, wavelength, surface, and solar elevation; they are large in some absorption bands for some parameters but are negligible in most conditions. The spectral kernels are used to calculate the radiative responses to different climate parameter changes in different latitudes. The results show that the radiative response in high latitudes is sensitive to the coverage of snow and sea ice. The radiative response in low latitudes is contributed mainly by cloud property changes, especially cloud fraction and optical depth. The large cloud height effect is confined to absorption bands, while the cloud particle size effect is found mainly in the near infrared. The kernel approach, which is based on calculations using CERES retrievals, is then tested by direct comparison with spectral measurements from the Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY) (a different instrument on a different spacecraft). The monthly mean interannual variability of spectral reflectance based on the kernel technique is consistent with satellite observations over the ocean, but not over land, where both model and data have large uncertainty. RMS errors in kernel-derived monthly global mean reflectance over the ocean compared to observations are about 0.001, and the sampling error is likely a major component.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29W to 42W.

  18. EOS: A project to investigate the design and construction of real-time distributed embedded operating systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Essick, R. B.; Grass, J.; Johnston, G.; Kenny, K.; Russo, V.

    1986-01-01

    The EOS project is investigating the design and construction of a family of real-time distributed embedded operating systems for reliable, distributed aerospace applications. Using the real-time programming techniques developed in co-operation with NASA in earlier research, the project staff is building a kernel for a multiple processor networked system. The first six months of the grant included a study of scheduling in an object-oriented system, the design philosophy of the kernel, and the architectural overview of the operating system. In this report, the operating system and kernel concepts are described. An environment for the experiments has been built and several of the key concepts of the system have been prototyped. The kernel and operating system is intended to support future experimental studies in multiprocessing, load-balancing, routing, software fault-tolerance, distributed data base design, and real-time processing.

  19. Performance Assessment of Kernel Density Clustering for Gene Expression Profile Data

    PubMed Central

    Zeng, Beiyan; Chen, Yiping P.; Smith, Oscar H.

    2003-01-01

    Kernel density smoothing techniques have been used in classification or supervised learning of gene expression profile (GEP) data, but their applications to clustering or unsupervised learning of those data have not been explored and assessed. Here we report a kernel density clustering method for analysing GEP data and compare its performance with the three most widely-used clustering methods: hierarchical clustering, K-means clustering, and multivariate mixture model-based clustering. Using several methods to measure agreement, between-cluster isolation, and within-cluster coherence, such as the Adjusted Rand Index, the Pseudo F test, the r2 test, and the profile plot, we have assessed the effectiveness of kernel density clustering for recovering clusters, and its robustness against noise, on clustering both simulated and real GEP data. Our results show that the kernel density clustering method has excellent performance in recovering clusters from simulated data and in grouping large real expression profile data sets into compact and well-isolated clusters, and that it is the most robust clustering method for analysing noisy expression profile data compared to the other three methods assessed. PMID:18629292
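
    The paper's kernel density clustering algorithm is not reproduced here; as a loose illustration of density-based clustering of expression profiles, the sketch below uses mean-shift, which assigns points to the modes of a kernel density estimate. Mean-shift is a substitute technique for illustration only, and the simulated profiles, bandwidth, and group structure are synthetic.

      import numpy as np
      from sklearn.cluster import MeanShift
      from sklearn.metrics import adjusted_rand_score

      # simulated expression profiles: three groups of genes with distinct patterns
      rng = np.random.default_rng(5)
      centers = rng.normal(size=(3, 12))                 # 12 hypothetical conditions
      labels_true = np.repeat([0, 1, 2], 50)
      profiles = centers[labels_true] + 0.3 * rng.normal(size=(150, 12))

      # mean-shift: each profile is shifted toward a mode of the kernel density estimate
      labels_pred = MeanShift(bandwidth=1.5).fit_predict(profiles)
      print("adjusted Rand index:", adjusted_rand_score(labels_true, labels_pred))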

  20. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    PubMed Central

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by incorporating non-adaptive, data-independent random projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are presented on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969

  1. Improving the visualization of 3D ultrasound data with 3D filtering

    NASA Astrophysics Data System (ADS)

    Shamdasani, Vijay; Bae, Unmin; Managuli, Ravi; Kim, Yongmin

    2005-04-01

    3D ultrasound imaging is quickly gaining widespread clinical acceptance as a visualization tool that allows clinicians to obtain unique views not available with traditional 2D ultrasound imaging and an accurate understanding of patient anatomy. The ability to acquire, manipulate and interact with the 3D data in real time is an important feature of 3D ultrasound imaging. Volume rendering is often used to transform the 3D volume into 2D images for visualization. Unlike computed tomography (CT) and magnetic resonance imaging (MRI), volume rendering of 3D ultrasound data creates noisy images in which surfaces cannot be readily discerned due to speckles and low signal-to-noise ratio. The degrading effect of speckles is especially severe when gradient shading is performed to add depth cues to the image. Several researchers have reported that smoothing the pre-rendered volume with a 3D convolution kernel, such as 5x5x5, can significantly improve the image quality, but at the cost of decreased resolution. In this paper, we have analyzed the reasons for the improvement in image quality with 3D filtering and determined that the improvement is due to two effects. The filtering reduces speckles in the volume data, which leads to (1) more accurate gradient computation and better shading and (2) decreased noise during compositing. We have found that applying a moderate-size smoothing kernel (e.g., 7x7x7) to the volume data before gradient computation combined with some smoothing of the volume data (e.g., with a 3x3x3 lowpass filter) before compositing yielded images with good depth perception and no appreciable loss in resolution. Providing the clinician with the flexibility to control both of these effects (i.e., shading and compositing) independently could improve the visualization of the 3D ultrasound data. Introducing this flexibility into the ultrasound machine requires 3D filtering to be performed twice on the volume data, once before gradient computation and again before compositing. 3D filtering of an ultrasound volume containing millions of voxels requires a large amount of computation, and doing it twice decreases the number of frames that can be visualized per second. To address this, we have developed several techniques to make computation efficient. For example, we have used the moving average method to filter a 128x128x128 volume with a 3x3x3 boxcar kernel in 17 ms on a single MAP processor running at 400 MHz. The same methods reduced the computing time on a Pentium 4 running at 3 GHz from 110 ms to 62 ms. We believe that our proposed method can improve 3D ultrasound visualization without sacrificing resolution and incurring an excessive computing time.

  2. A stochastic convolution/superposition method with isocenter sampling to evaluate intrafraction motion effects in IMRT.

    PubMed

    Naqvi, Shahid A; D'Souza, Warren D

    2005-04-01

    Current methods to calculate dose distributions with organ motion can be broadly classified as "dose convolution" and "fluence convolution" methods. In the former, a static dose distribution is convolved with the probability distribution function (PDF) that characterizes the motion. However, artifacts are produced near the surface and around inhomogeneities because the method assumes shift invariance. Fluence convolution avoids these artifacts by convolving the PDF with the incident fluence instead of the patient dose. In this paper we present an alternative method that improves the accuracy, generality as well as the speed of dose calculation with organ motion. The algorithm starts by sampling an isocenter point from a parametrically defined space curve corresponding to the patient-specific motion trajectory. Then a photon is sampled in the linac head and propagated through the three-dimensional (3-D) collimator structure corresponding to a particular MLC segment chosen randomly from the planned IMRT leaf sequence. The photon is then made to interact at a point in the CT-based simulation phantom. Randomly sampled monoenergetic kernel rays issued from this point are then made to deposit energy in the voxels. Our method explicitly accounts for MLC-specific effects (spectral hardening, tongue-and-groove, head scatter) as well as changes in SSD with isocentric displacement, assuming that the body moves rigidly with the isocenter. Since the positions are randomly sampled from a continuum, there is no motion discretization, and the computation takes no more time than a static calculation. To validate our method, we obtained ten separate film measurements of an IMRT plan delivered on a phantom moving sinusoidally, with each fraction starting with a random phase. For 2 cm motion amplitude, we found that a ten-fraction average of the film measurements gave an agreement with the calculated infinite fraction average to within 2 mm in the isodose curves. The results also corroborate the existing notion that the interfraction dose variability due to the interplay between the MLC motion and breathing motion averages out over typical multifraction treatments. Simulation with motion waveforms more representative of real breathing indicate that the motion can produce penumbral spreading asymmetric about the static dose distributions. Such calculations can help a clinician decide to use, for example, a larger margin in the superior direction than in the inferior direction. In the paper we demonstrate that a 15 min run on a single CPU can readily illustrate the effect of a patient-specific breathing waveform, and can guide the physician in making informed decisions about margin expansion and dose escalation.

  3. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring.

    PubMed

    Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi

    2017-01-18

    Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the total generalized variation (TGV) regularizer of second order. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.

  4. Flood susceptibility mapping using a novel ensemble weights-of-evidence and support vector machine models in GIS

    NASA Astrophysics Data System (ADS)

    Tehrany, Mahyat Shafapour; Pradhan, Biswajeet; Jebur, Mustafa Neamah

    2014-05-01

    Flood is one of the most devastating natural disasters that occur frequently in Terengganu, Malaysia. Recently, ensemble based techniques are getting extremely popular in flood modeling. In this paper, weights-of-evidence (WoE) model was utilized first, to assess the impact of classes of each conditioning factor on flooding through bivariate statistical analysis (BSA). Then, these factors were reclassified using the acquired weights and entered into the support vector machine (SVM) model to evaluate the correlation between flood occurrence and each conditioning factor. Through this integration, the weak point of WoE can be solved and the performance of the SVM will be enhanced. The spatial database included flood inventory, slope, stream power index (SPI), topographic wetness index (TWI), altitude, curvature, distance from the river, geology, rainfall, land use/cover (LULC), and soil type. Four kernel types of SVM (linear kernel (LN), polynomial kernel (PL), radial basis function kernel (RBF), and sigmoid kernel (SIG)) were used to investigate the performance of each kernel type. The efficiency of the new ensemble WoE and SVM method was tested using area under curve (AUC) which measured the prediction and success rates. The validation results proved the strength and efficiency of the ensemble method over the individual methods. The best results were obtained from RBF kernel when compared with the other kernel types. Success rate and prediction rate for ensemble WoE and RBF-SVM method were 96.48% and 95.67% respectively. The proposed ensemble flood susceptibility mapping method could assist researchers and local governments in flood mitigation strategies.
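
    A minimal sketch of the SVM step with an RBF kernel is shown below using scikit-learn on synthetic data; the weights-of-evidence reclassification of the conditioning factors is not reproduced, and the feature columns are placeholders for layers such as slope, SPI, TWI, and altitude.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      # synthetic conditioning factors (placeholders for WoE-reclassified layers)
      rng = np.random.default_rng(7)
      X = rng.normal(size=(500, 10))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      svm = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True).fit(X_tr, y_tr)
      print("AUC:", roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1]))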

  5. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially for large datasets; thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method that solves a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter, and the objective programming problem can then be converted to a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the search procedure with different starting points to locate the best local minimum. The proposed method can also be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time efficiency and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
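
    A small NumPy/SciPy sketch of the kernel target alignment criterion mentioned above, together with a naive grid search over the Gaussian width; the paper instead solves the alignment problem as a specially structured global optimization, so the grid search here is only an illustrative baseline on hypothetical data.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def gaussian_kernel(X, sigma):
        """Gram matrix K_ij = exp(-||x_i - x_j||^2 / (2*sigma^2))."""
        D2 = cdist(X, X, metric="sqeuclidean")
        return np.exp(-D2 / (2.0 * sigma ** 2))

    def kernel_target_alignment(K, y):
        """Alignment between K and the ideal target kernel y*y^T."""
        Y = np.outer(y, y)
        return np.sum(K * Y) / (np.linalg.norm(K, "fro") * np.linalg.norm(Y, "fro"))

    # Crude one-dimensional search over sigma on synthetic labeled data.
    X = np.random.default_rng(1).normal(size=(100, 5))
    y = np.sign(X[:, 0] + 0.1)
    sigmas = np.logspace(-1, 1, 20)
    best = max(sigmas, key=lambda s: kernel_target_alignment(gaussian_kernel(X, s), y))
    print("best sigma by alignment:", best)
    ```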

  6. Blur kernel estimation with algebraic tomography technique and intensity profiles of object boundaries

    NASA Astrophysics Data System (ADS)

    Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry

    2018-04-01

    Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using a tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task: we use the algebraic reconstruction technique (ART algorithm) as the starting algorithm and introduce regularization, with the conjugate gradient method used for the numerical implementation of the proposed approach. The algorithm is tested using a dataset of 9 kernels extracted from real photographs by Adobe, for which the point spread function is known. We also investigate the influence of noise on the quality of the reconstruction and how the number of projections affects the reconstruction error.
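
    A hedged sketch of an ART (Kaczmarz-type) update with a mild damping term standing in for the regularization described above; the projection matrix A, relaxation factor, and damping weight are illustrative assumptions, and the actual method additionally uses a conjugate gradient implementation.

    ```python
    import numpy as np

    def art_reconstruct(A, b, n_iter=50, relax=0.5, lam=1e-3):
        """Kaczmarz-style ART with simple damping (illustrative only).
        A : (n_rays, n_pixels) projection matrix, b : measured projections,
        x : flattened PSF estimate."""
        x = np.zeros(A.shape[1])
        row_norms = np.sum(A ** 2, axis=1) + 1e-12
        for _ in range(n_iter):
            for i in range(A.shape[0]):
                r = b[i] - A[i] @ x
                x += relax * r * A[i] / row_norms[i]
            x -= lam * x                 # mild regularization pull toward zero
            x = np.clip(x, 0, None)      # a PSF is non-negative
        return x / max(x.sum(), 1e-12)   # a PSF integrates to one
    ```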

  7. Strategies for Effectively Visualizing a 3D Flow Using Volume Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1997-01-01

    This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding 'halos' that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow.

  8. A deep convolutional neural network approach to single-particle recognition in cryo-electron microscopy.

    PubMed

    Zhu, Yanan; Ouyang, Qi; Mao, Youdong

    2017-07-21

    Single-particle cryo-electron microscopy (cryo-EM) has become a mainstream tool for the structural determination of biological macromolecular complexes. However, high-resolution cryo-EM reconstruction often requires hundreds of thousands of single-particle images. Particle extraction from experimental micrographs thus can be laborious and presents a major practical bottleneck in cryo-EM structural determination. Existing computational methods for particle picking often use low-resolution templates for particle matching, making them susceptible to reference-dependent bias. It is critical to develop a highly efficient template-free method for the automatic recognition of particle images from cryo-EM micrographs. We developed a deep learning-based algorithmic framework, DeepEM, for single-particle recognition from noisy cryo-EM micrographs, enabling automated particle picking, selection and verification in an integrated fashion. The kernel of DeepEM is built upon a convolutional neural network (CNN) composed of eight layers, which can be recursively trained to be highly "knowledgeable". Our approach exhibits an improved performance and accuracy when tested on the standard KLH dataset. Application of DeepEM to several challenging experimental cryo-EM datasets demonstrated its ability to avoid the selection of unwanted particles and non-particles even when true particles contain fewer features. The DeepEM methodology, derived from a deep CNN, allows automated particle extraction from raw cryo-EM micrographs in the absence of a template. It demonstrates an improved performance, objectivity and accuracy. Application of this novel method is expected to free the labor involved in single-particle verification, significantly improving the efficiency of cryo-EM data processing.
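
    A toy PyTorch sketch of a small CNN patch classifier of the kind described above; the layer sizes, patch size (64 x 64), and two-class output are assumptions for illustration and do not reproduce the published eight-layer DeepEM architecture.

    ```python
    import torch
    import torch.nn as nn

    class ParticleCNN(nn.Module):
        """Small CNN for particle/non-particle patch classification (sketch only)."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
                nn.Linear(128, n_classes),
            )

        def forward(self, x):          # x: (batch, 1, 64, 64) micrograph patches
            return self.classifier(self.features(x))

    model = ParticleCNN()
    logits = model(torch.randn(4, 1, 64, 64))
    print(logits.shape)                # torch.Size([4, 2])
    ```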

  9. Optimized statistical parametric mapping procedure for NIRS data contaminated by motion artifacts: Neurometric analysis of body schema extension.

    PubMed

    Suzuki, Satoshi

    2017-09-01

    This study investigated the spatial distribution of brain activity on body schema (BS) modification induced by natural body motion using two versions of a hand-tracing task. In Task 1, participants traced Japanese Hiragana characters using the right forefinger, requiring no BS expansion. In Task 2, participants performed the tracing task with a long stick, requiring BS expansion. Spatial distribution was analyzed using general linear model (GLM)-based statistical parametric mapping of near-infrared spectroscopy data contaminated with motion artifacts caused by the hand-tracing task. Three methods were utilized in series to counter the artifacts, and optimal conditions and modifications were investigated: a model-free method (Step 1), a convolution matrix method (Step 2), and a boxcar-function-based Gaussian convolution method (Step 3). The results revealed four methodological findings: (1) Deoxyhemoglobin was suitable for the GLM because both Akaike information criterion and the variance against the averaged hemodynamic response function were smaller than for other signals, (2) a high-pass filter with a cutoff frequency of .014 Hz was effective, (3) the hemodynamic response function computed from a Gaussian kernel function and its first- and second-derivative terms should be included in the GLM model, and (4) correction of non-autocorrelation and use of effective degrees of freedom were critical. Investigating z-maps computed according to these guidelines revealed that contiguous areas of BA7-BA40-BA21 in the right hemisphere became significantly activated ([Formula: see text], [Formula: see text], and [Formula: see text], respectively) during BS modification while performing the hand-tracing task.
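
    The third methodological step above (a task boxcar convolved with a Gaussian kernel and its first and second derivatives as GLM regressors) can be sketched as follows; the sampling interval, kernel width, and noise level are illustrative assumptions rather than the study's actual parameters.

    ```python
    import numpy as np

    def gaussian_hrf_regressors(boxcar, dt=0.1, sigma=3.0):
        """Build GLM regressors by convolving a task boxcar with a Gaussian
        kernel and its first and second derivatives (illustrative values)."""
        t = np.arange(-4 * sigma, 4 * sigma + dt, dt)
        g = np.exp(-t ** 2 / (2 * sigma ** 2))
        g /= g.sum()
        g1 = np.gradient(g, dt)             # first-derivative term
        g2 = np.gradient(g1, dt)            # second-derivative term
        cols = [np.convolve(boxcar, k, mode="same") for k in (g, g1, g2)]
        return np.column_stack(cols)

    # Least-squares GLM fit: signal ~ X @ beta
    boxcar = np.zeros(600); boxcar[100:200] = 1; boxcar[350:450] = 1
    X = np.column_stack([gaussian_hrf_regressors(boxcar), np.ones(len(boxcar))])
    signal = X @ np.array([1.0, 0.2, 0.05, 0.1]) + 0.1 * np.random.randn(len(boxcar))
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    print(beta)
    ```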

  10. Ultrasound scatter in heterogeneous 3D microstructures: Parameters affecting multiple scattering

    NASA Astrophysics Data System (ADS)

    Engle, B. J.; Roberts, R. A.; Grandin, R. J.

    2018-04-01

    This paper reports on a computational study of ultrasound propagation in heterogeneous metal microstructures. Random spatial fluctuations in elastic properties over a range of length scales relative to the ultrasound wavelength can give rise to scatter-induced attenuation, backscatter noise, and phase-front aberration. It is of interest to quantify the dependence of these phenomena on the microstructure parameters, both to quantify the deleterious consequences for flaw detectability and to support material characterization. Valuable tools for estimating microstructure parameters (e.g., grain size) through analysis of ultrasound backscatter have been developed based on approximate weak-scattering models. While useful, these tools display inherent inaccuracy when multiple scattering phenomena contribute significantly to the measurement. The goal of this work is to supplement weak-scattering model predictions with corrections derived through application of an exact computational scattering model to explicitly prescribed microstructures. The scattering problem is formulated as a volume integral equation (VIE) with a convolutional Green-function-derived kernel. The VIE is solved iteratively employing FFT-based convolution. Realizations of random microstructures are specified on the micron scale using statistical property descriptions (e.g., grain size and orientation distributions), which are then spatially filtered to provide rigorously equivalent scattering media on a length scale relevant to ultrasound propagation. Scattering responses from ensembles of media representations are averaged to obtain the mean and variance of quantities such as attenuation and backscatter noise levels as a function of microstructure descriptors. The computational approach will be summarized, and examples of application will be presented.
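
    A minimal sketch of applying a convolutional Green-function kernel via FFT inside a fixed-point (Neumann-series) VIE iteration; the Gaussian stand-in kernel and weak random contrast are assumptions, and production codes would use a Krylov solver rather than successive substitution.

    ```python
    import numpy as np

    def apply_green(field, green_kernel_fft):
        """Apply the convolutional Green-function operator via FFT."""
        return np.real(np.fft.ifftn(np.fft.fftn(field) * green_kernel_fft))

    def vie_fixed_point(u_inc, chi, green_kernel_fft, n_iter=30):
        """Successive-substitution solve of u = u_inc + G*(chi*u); this
        Neumann-series sketch only converges for weak contrast chi."""
        u = u_inc.copy()
        for _ in range(n_iter):
            u = u_inc + apply_green(chi * u, green_kernel_fft)
        return u

    # Hypothetical 3-D grid: weak random contrast and a Gaussian pseudo-Green kernel.
    n = 32
    k = np.fft.fftfreq(n)
    KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
    green_fft = np.exp(-(KX**2 + KY**2 + KZ**2) * 50.0)   # stand-in kernel spectrum
    chi = 0.05 * np.random.default_rng(2).normal(size=(n, n, n))
    u = vie_fixed_point(np.ones((n, n, n)), chi, green_fft)
    print(u.shape)
    ```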

  11. Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods

    NASA Astrophysics Data System (ADS)

    Liu, Qinya; Tromp, Jeroen

    2008-07-01

    We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ then may be expressed as δχ = ∫V Km δlnm d³x + ∫Σ Kd δlnd d²x + ∫ΣFS K∇d · ∇Σδlnd d²x, where δlnm = δm/m denotes relative model perturbations in the volume V, δlnd denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇Σδlnd denotes surface gradients in relative topographic variations on fluid-solid boundaries ΣFS. The 3-D Fréchet kernel Km determines the sensitivity to model perturbations δlnm, and the 2-D kernels Kd and K∇d determine the sensitivity to topographic variations δlnd. We also demonstrate how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.

  12. Covert Android Rootkit Detection: Evaluating Linux Kernel Level Rootkits on the Android Operating System

    DTIC Science & Technology

    2012-06-14

    the attacker. Thus, this race condition causes a privilege escalation. 2.2.5 Summary This section reviewed software exploitation of a Linux kernel...has led to increased targeting by malware writers. Android attacks have naturally sparked interest in researching protections for Android. This...release, Android 4.0 Ice Cream Sandwich. These rootkits focused on covert techniques to hide the presence of data used by an attacker to infect a

  13. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    DOE PAGES

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; ...

    2014-12-08

    Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R.D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed efficient transmission of large currents through the MITLs on Z. Taken together, the two studies demonstrate the overall efficient delivery of very large electrical powers through the MITLs on Z.

  14. Image Display and Manipulation System (IDAMS) program documentation, Appendixes A-D. [including routines, convolution filtering, image expansion, and fast Fourier transformation

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.; White, R. A.; Szczur, M. R.

    1972-01-01

    The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.

  15. Emergence of Convolutional Neural Network in Future Medicine: Why and How. A Review on Brain Tumor Segmentation

    NASA Astrophysics Data System (ADS)

    Alizadeh Savareh, Behrouz; Emami, Hassan; Hajiabadi, Mohamadreza; Ghafoori, Mahyar; Majid Azimi, Seyed

    2018-03-01

    Manual analysis of brain tumor magnetic resonance images is usually accompanied by several problems, and a number of techniques have been proposed for brain tumor segmentation. This study searches popular databases for related studies and surveys the theoretical and practical aspects of convolutional neural networks in brain tumor segmentation. Based on our findings, details of the related studies, including the datasets used, evaluation parameters, preferred architectures, and complementary steps, are analyzed. Deep learning, as a revolutionary idea in image processing, has achieved brilliant results in brain tumor segmentation as well. This trend is likely to continue until the next revolutionary idea emerges.

  16. New syndrome decoding techniques for the (n, k) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector (circumflex)E(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acciarri, R.; Adams, C.; An, R.

    Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acciarri, R.; Adams, C.; An, R.

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  19. Vibration analysis of FG cylindrical shells with power-law index using discrete singular convolution technique

    NASA Astrophysics Data System (ADS)

    Mercan, Kadir; Demir, Çiǧdem; Civalek, Ömer

    2016-01-01

    In the present manuscript, the free vibration response of circular cylindrical shells with functionally graded material (FGM) is investigated. The method of discrete singular convolution (DSC) is used for the numerical solution of the related governing equation of motion of the FGM cylindrical shell. The constitutive relations are based on Love's first approximation shell theory. The material properties are graded in the thickness direction according to a volume-fraction power-law index. Frequency values are calculated for different types of boundary conditions and different material and geometric parameters. In general, close agreement between the obtained results and those of other researchers has been found.

  20. Salient regions detection using convolutional neural networks and color volume

    NASA Astrophysics Data System (ADS)

    Liu, Guang-Hai; Hou, Yingkun

    2018-03-01

    Convolutional neural networks are an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on the LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues, and depth cues and color volume are then integrated into saliency detection following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on the MSRA1000 and ECSSD datasets.

  1. Change detection in multitemporal synthetic aperture radar images using dual-channel convolutional neural network

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Li, Ying; Cao, Ying; Shen, Qiang

    2017-10-01

    This paper proposes a model of dual-channel convolutional neural network (CNN) that is designed for change detection in SAR images, in an effort to acquire higher detection accuracy and lower misclassification rate. This network model contains two parallel CNN channels, which can extract deep features from two multitemporal SAR images. For comparison and validation, the proposed method is tested along with other change detection algorithms on both simulated SAR images and real-world SAR images captured by different sensors. The experimental results demonstrate that the presented method outperforms the state-of-the-art techniques by a considerable margin.

  2. Computational approach to Thornley's problem by bivariate operational calculus

    NASA Astrophysics Data System (ADS)

    Bazhlekova, E.; Dimovski, I.

    2012-10-01

    Thornley's problem is an initial-boundary value problem with a nonlocal boundary condition for the linear one-dimensional reaction-diffusion equation, used as a mathematical model of spiral phyllotaxis in botany. Applying a bivariate operational calculus, we find an explicit representation of the solution containing two convolution products of special solutions and the arbitrary initial and boundary functions. We use a non-classical convolution with respect to the space variable, extending in this way the classical Duhamel principle. The special solutions involved are represented in the form of fast convergent series. Numerical examples are considered to show the application of the present technique and to analyze the character of the solution.

  3. Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah

    2017-02-01

    Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is crucial to the performance of target detection/recognition techniques. A Fukunaga-Koontz Transform (FKT) based supervised band reduction technique can be used to meet this requirement. FKT achieves feature selection by transforming the data into a new space in which the feature classes have complementary eigenvectors. Analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target-oriented band reduction, since each basis function best represents the target class while carrying the least information about the background class. By selecting the few eigenvectors that are most relevant to the target class, the dimension of the hyperspectral data can be reduced, which presents significant advantages for near real-time target detection applications. The nonlinear properties of the data can be extracted by a kernel approach, which provides better target features. Thus, we propose constructing a kernel FKT (KFKT) for target-oriented band reduction. The performance of the proposed KFKT based target-oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral datasets, and the results are reported accordingly.

  4. UFCN: a fully convolutional neural network for road extraction in RGB imagery acquired by remote sensing from an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Kestur, Ramesh; Farooq, Shariq; Abdal, Rameen; Mehraj, Emad; Narasipura, Omkar; Mudigere, Meenavathi

    2018-01-01

    Road extraction in imagery acquired by low altitude remote sensing (LARS) carried out using an unmanned aerial vehicle (UAV) is presented. LARS is carried out using a fixed wing UAV with a high spatial resolution vision spectrum (RGB) camera as the payload. Deep learning techniques, particularly fully convolutional networks (FCN), are adopted to extract roads by dense semantic segmentation. The proposed model, UFCN (U-shaped FCN), is an FCN architecture that comprises a stack of convolutions followed by a corresponding stack of mirrored deconvolutions, with skip connections in between to preserve local information. The limited dataset (76 images and their ground truths) is subjected to real-time data augmentation during the training phase to increase its effective size. Classification performance is evaluated using precision, recall, accuracy, F1 score, and Brier score. The performance is compared with a support vector machine (SVM) classifier, a one-dimensional convolutional neural network (1D-CNN) model, and a standard two-dimensional CNN (2D-CNN). The UFCN model outperforms the SVM, 1D-CNN, and 2D-CNN models across all the performance parameters. Further, the prediction time of the proposed UFCN model is comparable with the SVM, 1D-CNN, and 2D-CNN models.

  5. Deep convolutional neural network based antenna selection in multiple-input multiple-output system

    NASA Astrophysics Data System (ADS)

    Cai, Jiaxin; Li, Yan; Hu, Ying

    2018-03-01

    Antenna selection for wireless communication systems has attracted increasing attention due to the challenge of balancing communication performance and computational complexity in large-scale Multiple-Input Multiple-Output antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce the deep learning technique into the field of Multiple-Input Multiple-Output antenna selection in wireless communications. First, the label of the attenuation-coefficient channel matrix is generated by minimizing the key performance indicator of the training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues of the attenuation coefficients is learned on the training antenna systems. Finally, we use the trained deep convolutional neural network to classify the channel matrix labels of the test antennas and select the optimal antenna subset. Simulation results demonstrate that our method can achieve better performance than the state-of-the-art data-driven baselines for wireless antenna selection.

  6. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep learning has been used massively for image classification in recent years, and its use for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into images in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image, which leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of the CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques: on the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22% total error, 4.10% type I error, and 15.07% type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02% total error, 2.15% type I error, and 6.14% type II error.

  7. Investigation of background acoustical effect on online surveys: A case study of a farmers' market customer survey

    NASA Astrophysics Data System (ADS)

    Tang, Xingdi

    Since the mid-1990s, the internet has become a new platform for surveys. Previous studies have discussed the visual design features of internet surveys; however, the application of acoustics as a design characteristic of online surveys has rarely been investigated. The present study aimed to fill that research gap. The purpose of the study was to assess the impact of background sound on respondents' engagement and satisfaction with online surveys. Two forms of background sound were evaluated: audio recorded in studios and audio edited with a convolution reverb technique. The author recruited 80 undergraduate students for the experiment. These students were assigned to one of three groups, and each group was asked to evaluate their engagement and satisfaction with a specific online survey. The content of the online survey was the same for all groups, but the three groups were exposed to the online survey with no background sound, with background sound recorded in studios, and with background sound edited with the convolution reverb technique, respectively. The results showed no significant difference in engagement and satisfaction among the three groups. The author suggests that background sound does not contribute to online surveys in all contexts. Industry practitioners should carefully evaluate the survey context to decide whether background sound should be added; in particular, ear-piercing noise or acoustics that may be linked to respondents' unpleasant experiences should be avoided. Moreover, although the results did not support an advantage of the convolution reverb technique in improving respondents' engagement and satisfaction, the author suggests that its potential in online survey applications cannot be totally dismissed, since it may be useful in some contexts that need further exploration in future research.

  8. An investigation of a mathematical model for atmospheric absorption spectra

    NASA Technical Reports Server (NTRS)

    Niple, E. R.

    1979-01-01

    A computer program that calculates absorption spectra for slant paths through the atmosphere is described. The program uses an efficient convolution technique (Romberg integration) to simulate instrument resolution effects. A brief information analysis is performed on a set of calculated spectra to illustrate how such techniques may be used to explore the quality of the information in a spectrum.
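
    A hedged sketch of an instrument-resolution convolution evaluated with Romberg integration (scipy.integrate.romb, which requires 2**k + 1 equally spaced samples); the Gaussian instrument line shape, window half-width, and synthetic absorption line are assumptions, not the program's actual slant-path model.

    ```python
    import numpy as np
    from scipy.integrate import romb

    def instrument_convolve(nu, spectrum, fwhm):
        """Smooth a high-resolution transmittance spectrum with a Gaussian
        instrument line shape, evaluating each convolution integral by
        Romberg integration over a 257-point window (257 = 2**8 + 1)."""
        sigma = fwhm / 2.3548
        dnu = nu[1] - nu[0]
        half = 128
        padded = np.pad(spectrum, half, mode="edge")
        offsets = (np.arange(2 * half + 1) - half) * dnu
        ils = np.exp(-offsets**2 / (2 * sigma**2))
        ils /= romb(ils, dx=dnu)                 # normalize the line shape
        out = np.empty_like(spectrum)
        for i in range(len(nu)):
            out[i] = romb(padded[i:i + 2 * half + 1] * ils, dx=dnu)
        return out

    nu = np.linspace(2000.0, 2005.0, 4097)        # wavenumber grid (cm^-1), assumed
    spectrum = 1 - 0.8 * np.exp(-(nu - 2002.5)**2 / (2 * 0.01**2))
    smoothed = instrument_convolve(nu, spectrum, fwhm=0.2)
    ```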

  9. SPECIAL ISSUE ON OPTICAL PROCESSING OF INFORMATION: Optoelectronic processors with scanning CCD photodetectors

    NASA Astrophysics Data System (ADS)

    Esepkina, N. A.; Lavrov, A. P.; Anan'ev, M. N.; Blagodarnyi, V. S.; Ivanov, S. I.; Mansyrev, M. I.; Molodyakov, S. A.

    1995-10-01

    Two new types of optoelectronic radio-signal processors were investigated. Charge-coupled device (CCD) photodetectors are used in these processors under continuous scanning conditions, i.e. in a time delay and storage mode. One of these processors is based on a CCD photodetector array with a reference-signal amplitude transparency and the other is an adaptive acousto-optical signal processor with linear frequency modulation. The processor with the transparency performs multichannel discrete—analogue convolution of an input signal with a corresponding kernel of the transformation determined by the transparency. If a light source is an array of light-emitting diodes of special (stripe) geometry, the optical stages of the processor can be made from optical fibre components and the whole processor then becomes a rigid 'sandwich' (a compact hybrid optoelectronic microcircuit). A report is given also of a study of a prototype processor with optical fibre components for the reception of signals from a system with antenna aperture synthesis, which forms a radio image of the Earth.

  10. A comparative mathematical analysis of RL and RC electrical circuits via Atangana-Baleanu and Caputo-Fabrizio fractional derivatives

    NASA Astrophysics Data System (ADS)

    Abro, Kashif Ali; Memon, Anwar Ahmed; Uqaili, Muhammad Aslam

    2018-03-01

    This research article presents a comparative study of RL and RC electrical circuits using the newly developed Atangana-Baleanu and Caputo-Fabrizio fractional derivatives. The governing ordinary differential equations of the RL and RC electrical circuits have been fractionalized in terms of fractional operators in the ranges 0 ≤ ξ ≤ 1 and 0 ≤ η ≤ 1. The analytic solutions of the fractional differential equations for the RL and RC electrical circuits have been obtained by using the Laplace transform and its inversion. General solutions have been investigated for periodic and exponential sources by implementing the Atangana-Baleanu and Caputo-Fabrizio fractional operators separately. The resulting solutions are expressed in terms of simple elementary functions with convolution products. On the basis of the new fractional derivatives with and without singular kernel, the voltage and current show interesting behavior, with several similarities and differences between the periodic and exponential sources.

  11. Superfast algorithms of multidimensional discrete k-wave transforms and Volterra filtering based on superfast radon transform

    NASA Astrophysics Data System (ADS)

    Labunets, Valeri G.; Labunets-Rundblad, Ekaterina V.; Astola, Jaakko T.

    2001-12-01

    Fast algorithms for a wide class of non-separable n-dimensional (nD) discrete unitary K-transforms (DKT) are introduced. They need fewer 1D DKTs than the classical radix-2 FFT-type approach. The method utilizes a decomposition of the nD K-transform into the product of a new nD discrete Radon transform and a set of parallel/independent 1D K-transforms. If the nD K-transform has a separable kernel (e.g., the case of the discrete Fourier transform), our approach decreases the multiplicative complexity by a factor of n compared to the classical row/column separable approach. It is well known that an n-th order Volterra filter of a one-dimensional signal can be evaluated by an appropriate nD linear convolution. This work describes a new superfast algorithm for Volterra filtering. The new approach is based on the superfast discrete Radon and Nussbaumer polynomial transforms.

  12. Real-time digital signal recovery for a multi-pole low-pass transfer function system.

    PubMed

    Lee, Jhinhwan

    2017-08-01

    In order to solve the problems of waveform distortion and signal delay by many physical and electrical systems with multi-pole linear low-pass transfer characteristics, a simple digital-signal-processing (DSP)-based method of real-time recovery of the original source waveform from the distorted output waveform is proposed. A mathematical analysis on the convolution kernel representation of the single-pole low-pass transfer function shows that the original source waveform can be accurately recovered in real time using a particular moving average algorithm applied on the input stream of the distorted waveform, which can also significantly reduce the overall delay time constant. This method is generalized for multi-pole low-pass systems and has noise characteristics of the inverse of the low-pass filter characteristics. This method can be applied to most sensors and amplifiers operating close to their frequency response limits to improve the overall performance of data acquisition systems and digital feedback control systems.
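
    A minimal sketch of the recovery idea for the single-pole case: if the low-pass stage is modeled as y[n] = a*y[n-1] + (1-a)*x[n], the source can be recovered sample by sample from a two-point combination of the output stream. The discretization and pole value are assumptions for illustration, and the paper generalizes the idea to multi-pole systems.

    ```python
    import numpy as np

    def lowpass_single_pole(x, a):
        """Simulated single-pole low-pass: y[n] = a*y[n-1] + (1-a)*x[n]."""
        y = np.zeros_like(x)
        for n in range(1, len(x)):
            y[n] = a * y[n - 1] + (1 - a) * x[n]
        return y

    def recover_source(y, a):
        """Real-time recovery of the source from the distorted output by
        inverting the recursion with a two-point moving combination."""
        x_hat = np.empty_like(y)
        x_hat[0] = y[0]
        x_hat[1:] = (y[1:] - a * y[:-1]) / (1 - a)
        return x_hat

    a = np.exp(-1.0 / 20.0)                  # pole set by dt/tau (assumed)
    t = np.arange(500)
    x = (t > 100).astype(float)              # step input
    y = lowpass_single_pole(x, a)
    print(np.max(np.abs(recover_source(y, a) - x)))   # ~0 up to rounding
    ```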

  13. Foreign object detection via texture recognition and a neural classifier

    NASA Astrophysics Data System (ADS)

    Patel, Devesh; Hannah, I.; Davies, E. R.

    1993-10-01

    It is rare to find pieces of stone, wood, metal, or glass in food packets, but when they occur, these foreign objects (FOs) cause distress to the consumer and concern to the manufacturer. Using x-ray imaging to detect FOs within food bags, hard contaminants such as stone or metal appear darker, whereas soft contaminants such as wood or rubber appear slightly lighter than the food substrate. In this paper we concentrate on the detection of soft contaminants such as small pieces of wood in bags of frozen corn kernels. Convolution masks are used to generate textural features, which are then classified into corresponding homogeneous regions of the image using an artificial neural network (ANN) classifier. The separate ANN outputs are combined using a majority operator, and region discrepancies are removed by a median filter. Comparisons with classical classifiers showed the ANN approach to have the best overall combination of characteristics for our particular problem. The detected boundaries are in good agreement with the visually perceived segmentations.

  14. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to enhance the edges and details of geophysical inversion models without introducing any additional information. First, the theoretical correctness of the proposed geophysical inversion model-enhancement technique is discussed. An inversion MRM (model resolution matrix) convolution approximating the PSF (point spread function) is designed to demonstrate the correctness of the deconvolution model-enhancement method. Then, a total-variation regularized blind deconvolution algorithm for geophysical inversion model enhancement is proposed. In previous research, Oldenburg et al. demonstrate the connection between the PSF and the geophysical inverse solution, and Alumbaugh et al. propose that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We consider the PSF as a low-pass filter to enhance the inversion model, based on the theory of the PSF convolution approximation. Both a 1D linear and a 2D magnetotelluric inversion example are used to analyze the validity of the theory and the algorithm. To verify the proposed PSF convolution approximation theory, the 1D linear inversion problem is considered; it shows that the convolution approximation error is only 0.15%. A 2D synthetic model enhancement experiment is also presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhanced result is closer to the actual model than the original inversion model according to the numerical statistical analysis. Moreover, the artifacts in the inversion model are suppressed, and the overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1, which illustrates that more structural details of the actual model are recovered through the proposed enhancement algorithm. Using the proposed enhancement method can help us gain a clearer insight into the results of inversions and help make better informed decisions.
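
    As a hedged illustration of PSF-based enhancement (not the authors' total-variation regularized blind deconvolution), a plain Richardson-Lucy iteration can sharpen a model that has been smoothed by an assumed Gaussian PSF:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, n_iter=30):
        """Plain Richardson-Lucy deconvolution, used here only to illustrate
        PSF-based enhancement of a smooth inversion model."""
        est = np.full_like(blurred, blurred.mean())
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            reblur = fftconvolve(est, psf, mode="same")
            ratio = blurred / np.maximum(reblur, 1e-12)
            est *= fftconvolve(ratio, psf_mirror, mode="same")
        return est

    # Hypothetical block model smoothed by a Gaussian PSF approximation.
    model = np.zeros((64, 64)); model[20:40, 25:45] = 1.0
    yy, xx = np.mgrid[-7:8, -7:8]
    psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
    smooth = fftconvolve(model, psf, mode="same")
    enhanced = richardson_lucy(smooth, psf)
    ```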

  15. Dynamic Experiment Design Regularization Approach to Adaptive Imaging with Array Radar/SAR Sensor Systems

    PubMed Central

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such a solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859

  16. Helium: lifting high-performance stencil kernels from stripped x86 binaries to halide DSL code

    DOE PAGES

    Mendis, Charith; Bosboom, Jeffrey; Wu, Kevin; ...

    2015-06-03

    Highly optimized programs are prone to bit rot, where performance quickly becomes suboptimal in the face of new hardware and compiler techniques. In this paper we show how to automatically lift performance-critical stencil kernels from a stripped x86 binary and generate the corresponding code in the high-level domain-specific language Halide. Using Halide's state-of-the-art optimizations targeting current hardware, we show that new optimized versions of these kernels can replace the originals to rejuvenate the application for newer hardware. The original optimized code for kernels in stripped binaries is nearly impossible to analyze statically. Instead, we rely on dynamic traces to regenerate the kernels. We perform buffer structure reconstruction to identify input, intermediate and output buffer shapes. Here, we abstract from a forest of concrete dependency trees which contain absolute memory addresses to symbolic trees suitable for high-level code generation. This is done by canonicalizing trees, clustering them based on structure, inferring higher-dimensional buffer accesses and finally by solving a set of linear equations based on buffer accesses to lift them up to simple, high-level expressions. Helium can handle highly optimized, complex stencil kernels with input-dependent conditionals. We lift seven kernels from Adobe Photoshop giving a 75% performance improvement, four kernels from Irfan View, leading to 4.97x performance, and one stencil from the mini GMG multigrid benchmark netting a 4.25x improvement in performance. We manually rejuvenated Photoshop by replacing eleven of Photoshop's filters with our lifted implementations, giving 1.12x speedup without affecting the user experience.

  17. Online selective kernel-based temporal difference learning.

    PubMed

    Chen, Xingguo; Gao, Yang; Wang, Ruili

    2013-12-01

    In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (i.e., a kernel distance-based online sparsification method) is proposed based on selective ensemble learning, which is computationally less complex compared with other sparsification methods. With the proposed sparsification method, the sparsified dictionary of samples is constructed online by checking if a sample needs to be added to the sparsified dictionary. In addition, based on local validity, a selective kernel-based value function is proposed to select the best samples from the sample dictionary for the selective kernel-based value function approximator. The parameters of the selective kernel-based value function are iteratively updated by using the temporal difference (TD) learning algorithm combined with the gradient descent technique. The complexity of the online sparsification procedure in the OSKTD algorithm is O(n). In addition, two typical experiments (Maze and Mountain Car) are used to compare with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using the kernel-based value function), and the results demonstrate the effectiveness of our proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both traditional and up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time compared with other sparsification methods, reaches a better local optimum than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD can reach a competitive ultimate optimum compared with the up-to-date algorithms.
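
    A bare-bones sketch of kernel-based TD(0) with a distance-based online sparsification rule, in the spirit of the algorithm described above; the Gaussian kernel width, threshold mu, and learning rates are assumptions, and the selective-ensemble machinery of OSKTD is omitted.

    ```python
    import numpy as np

    def rbf(s, c, sigma=0.5):
        return np.exp(-np.sum((s - c) ** 2) / (2 * sigma ** 2))

    class KernelTD:
        """Kernel TD(0) with distance-based online sparsification (sketch only)."""
        def __init__(self, mu=0.9, alpha=0.1, gamma=0.95):
            self.dict, self.w = [], []                 # sample dictionary and weights
            self.mu, self.alpha, self.gamma = mu, alpha, gamma

        def value(self, s):
            return sum(w * rbf(s, c) for w, c in zip(self.w, self.dict))

        def update(self, s, r, s_next):
            # add s to the dictionary only if it is far from all stored samples
            if not self.dict or max(rbf(s, c) for c in self.dict) < self.mu:
                self.dict.append(np.array(s, dtype=float)); self.w.append(0.0)
            delta = r + self.gamma * self.value(s_next) - self.value(s)   # TD error
            feats = [rbf(s, c) for c in self.dict]     # gradient of V(s) w.r.t. w
            for i, f in enumerate(feats):
                self.w[i] += self.alpha * delta * f
    ```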

  18. Three-Class Mammogram Classification Based on Descriptive CNN Features

    PubMed Central

    Zhang, Qianni; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we have presented two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into their four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant features (DSIFT) are extracted for all subbands. An input data matrix containing the subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT have achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques. PMID:28191461

  19. Three-Class Mammogram Classification Based on Descriptive CNN Features.

    PubMed

    Jadoon, M Mohsin; Zhang, Qianni; Haq, Ihsan Ul; Butt, Sharjeel; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we have presented two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into their four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant features (DSIFT) are extracted for all subbands. An input data matrix containing the subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT have achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques.

  20. A particle swarm optimized kernel-based clustering method for crop mapping from multi-temporal polarimetric L-band SAR observations

    NASA Astrophysics Data System (ADS)

    Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza

    2017-06-01

    Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agricultural crop mapping from multi-temporal PolSAR data. First, several polarimetric features are extracted from preprocessed data. These features are the linear polarization intensities and several statistical and physically based decompositions, such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. The kernel function, unlike conventional partitioning clustering algorithms, simplifies the non-spherical and non-linear patterns of the data structure so that they can be clustered easily. In addition, in order to enhance the results, the Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (e.g., a 12% improvement in general). In addition, when the optimization technique is used, a greater improvement is observed in crop classification, e.g., 5% overall. Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and the phenological growth stages.
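
    The core clustering step can be sketched as a hard kernelized C-means (kernel k-means), where distances are computed in feature space from the Gram matrix alone; the RBF bandwidth and random features are assumptions, and the paper's fuzzy variant and PSO tuning of kernel parameters are omitted.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def rbf_gram(X, gamma):
        return np.exp(-gamma * cdist(X, X, "sqeuclidean"))

    def kernel_cmeans(K, n_clusters, n_iter=30, seed=0):
        """Hard kernelized C-means: feature-space distances come from K only."""
        n = K.shape[0]
        labels = np.random.default_rng(seed).integers(n_clusters, size=n)
        for _ in range(n_iter):
            dist = np.empty((n, n_clusters))
            for c in range(n_clusters):
                idx = np.where(labels == c)[0]
                if idx.size == 0:
                    dist[:, c] = np.inf; continue
                # ||phi(x) - m_c||^2 = K_xx - 2*mean(K_x,idx) + mean(K_idx,idx)
                dist[:, c] = (np.diag(K) - 2 * K[:, idx].mean(axis=1)
                              + K[np.ix_(idx, idx)].mean())
            labels = dist.argmin(axis=1)
        return labels

    # X would hold per-pixel polarimetric features (intensities, decompositions).
    X = np.random.default_rng(3).normal(size=(300, 6))
    labels = kernel_cmeans(rbf_gram(X, gamma=0.2), n_clusters=4)
    ```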

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
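
    For orientation, a standard bivariate KDE with a rule-of-thumb bandwidth (scipy.stats.gaussian_kde) is sketched below on a correlated Gaussian sample; this is the kind of conventional baseline the fastKDE method is compared against, not the fastKDE estimator itself, and the sample size and grid are arbitrary assumptions.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Bivariate sample with correlation, mimicking the covariance-encoding test.
    rng = np.random.default_rng(4)
    xy = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=10_000).T

    kde = gaussian_kde(xy)                        # Gaussian kernel, Scott bandwidth
    grid = np.mgrid[-3:3:80j, -3:3:80j]
    pdf = kde(grid.reshape(2, -1)).reshape(80, 80)
    print(pdf.sum() * (6 / 79) ** 2)              # ~1 over the evaluated window
    ```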

  2. Predicting drug-target interactions by dual-network integrated logistic matrix factorization

    NASA Astrophysics Data System (ADS)

    Hao, Ming; Bryant, Stephen H.; Wang, Yanli

    2017-01-01

    In this work, we propose a dual-network integrated logistic matrix factorization (DNILMF) algorithm to predict potential drug-target interactions (DTI). The prediction procedure consists of four steps: (1) inferring new drug/target profiles and constructing profile kernel matrix; (2) diffusing drug profile kernel matrix with drug structure kernel matrix; (3) diffusing target profile kernel matrix with target sequence kernel matrix; and (4) building DNILMF model and smoothing new drug/target predictions based on their neighbors. We compare our algorithm with the state-of-the-art method based on the benchmark dataset. Results indicate that the DNILMF algorithm outperforms the previously reported approaches in terms of AUPR (area under precision-recall curve) and AUC (area under curve of receiver operating characteristic) based on the 5 trials of 10-fold cross-validation. We conclude that the performance improvement depends on not only the proposed objective function, but also the used nonlinear diffusion technique which is important but under studied in the DTI prediction field. In addition, we also compile a new DTI dataset for increasing the diversity of currently available benchmark datasets. The top prediction results for the new dataset are confirmed by experimental studies or supported by other computational research.

  3. Super-resolution fusion of complementary panoramic images based on cross-selection kernel regression interpolation.

    PubMed

    Chen, Lidong; Basu, Anup; Zhang, Maojun; Wang, Wei; Liu, Yu

    2014-03-20

    A complementary catadioptric imaging technique was proposed to solve the problem of low and nonuniform resolution in omnidirectional imaging. To enhance this research, our paper focuses on how to generate a high-resolution panoramic image from the captured omnidirectional image. To avoid the interference between the inner and outer images while fusing the two complementary views, a cross-selection kernel regression method is proposed. First, in view of the complementarity of sampling resolution in the tangential and radial directions between the inner and the outer images, respectively, the horizontal gradients in the expected panoramic image are estimated based on the scattered neighboring pixels mapped from the outer, while the vertical gradients are estimated using the inner image. Then, the size and shape of the regression kernel are adaptively steered based on the local gradients. Furthermore, the neighboring pixels in the next interpolation step of kernel regression are also selected based on the comparison between the horizontal and vertical gradients. In simulation and real-image experiments, the proposed method outperforms existing kernel regression methods and our previous wavelet-based fusion method in terms of both visual quality and objective evaluation.

  4. Accurately estimating PSF with straight lines detected by Hough transform

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong

    2018-04-01

    This paper presents an approach to estimating point spread function (PSF) from low resolution (LR) images. Existing techniques usually rely on accurate detection of ending points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize profiles of edges from a LR image, which hence leads to a poor PSF estimation of the lens taking the LR image. For precisely estimating the PSF, this paper proposes firstly estimating a 1-D PSF kernel with straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least squares techniques and random sample consensus. The Canny operator is applied to the LR image for obtaining edges and then the Hough transform is utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel with straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images for estimating PSF. Experimental results show that the proposed method outperforms the state-of-the-art and does not rely on accurate edge detection.
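
    The edge- and line-extraction stage described above can be sketched with OpenCV's Canny detector and standard Hough transform. The synthetic test image, blur, and thresholds below are placeholder assumptions, and the PSF fitting itself is not shown.

      # Canny edges followed by Hough line extraction on a synthetic blurred line.
      import cv2
      import numpy as np

      img = np.zeros((200, 200), dtype=np.uint8)
      cv2.line(img, (20, 30), (180, 150), 255, 2)       # synthetic straight edge
      img = cv2.GaussianBlur(img, (7, 7), 2.0)          # mimic lens blur of a LR image

      edges = cv2.Canny(img, 50, 150)                   # binary edge map
      lines = cv2.HoughLines(edges, 1, np.pi / 180, 80) # rho = 1 px, theta = 1 degree

      if lines is not None:
          for rho, theta in lines[:, 0]:
              # Each detected line is a candidate along which a 1-D PSF profile
              # can be estimated perpendicular to the line direction.
              print(f"line: rho={rho:.1f}, theta={np.degrees(theta):.1f} deg")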

  5. Integration of Network Topological and Connectivity Properties for Neuroimaging Classification

    PubMed Central

    Jie, Biao; Gao, Wei; Wang, Qian; Wee, Chong-Yaw

    2014-01-01

    Rapid advances in neuroimaging techniques have provided an efficient and noninvasive way for exploring the structural and functional connectivity of the human brain. Quantitative measurements of abnormal brain connectivity in patients with neurodegenerative diseases, such as mild cognitive impairment (MCI) and Alzheimer’s disease (AD), have also been widely reported, especially at a group level. Recently, machine learning techniques have been applied to the study of AD and MCI, i.e., to identify the individuals with AD/MCI from the healthy controls (HCs). However, most existing methods focus on using only a single property of a connectivity network, although multiple network properties, such as local connectivity and global topological properties, can potentially be used. In this paper, by employing a multikernel-based approach, we propose a novel connectivity-based framework to integrate multiple properties of a connectivity network for improving the classification performance. Specifically, two different types of kernels (i.e., vector-based kernel and graph kernel) are used to quantify two different yet complementary properties of the network, i.e., local connectivity and global topological properties. Then, a multikernel learning (MKL) technique is adopted to fuse these heterogeneous kernels for neuroimaging classification. We test the performance of our proposed method on two different data sets. First, we test it on the functional connectivity networks of 12 MCI and 25 HC subjects. The results show that our method achieves significant performance improvement over those using only one type of network property. Specifically, our method achieves a classification accuracy of 91.9%, which is 10.8% better than those by single network-property-based methods. Then, we test our method for gender classification on a large set of functional connectivity networks with 133 infants scanned at birth, 1 year, and 2 years, also demonstrating very promising results. PMID:24108708
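
    A minimal sketch of the kernel-fusion idea is shown below: a vector-based kernel and a graph kernel are combined and fed to an SVM with a precomputed kernel. The kernels here are stand-ins (an RBF kernel on connectivity features and a random placeholder graph kernel), and the fusion weight is fixed rather than learned as in MKL.

      # Fixed-weight fusion of two precomputed kernels for classification.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.metrics.pairwise import rbf_kernel

      rng = np.random.default_rng(0)
      n = 37                                     # e.g. 12 MCI + 25 HC subjects
      X_local = rng.normal(size=(n, 90))         # local connectivity features
      K_vector = rbf_kernel(X_local)             # vector-based kernel
      G = rng.normal(size=(n, n)); K_graph = G @ G.T / n   # placeholder graph kernel
      y = rng.integers(0, 2, size=n)             # toy labels

      beta = 0.5                                 # fusion weight (assumed, not learned)
      K_fused = beta * K_vector + (1 - beta) * K_graph

      clf = SVC(kernel="precomputed")
      clf.fit(K_fused, y)                        # train on the fused kernel matrix
      print(clf.score(K_fused, y))               # resubstitution accuracy (toy check)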

  6. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization

    PubMed Central

    Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996
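
    To illustrate the quantities that these recursive algorithms update online, the sketch below runs a batch (non-recursive, non-sparsified) kernel LSTD with L2 regularization on a small chain problem. The dictionary, kernel width, discount factor, and the toy random walk are all assumptions; the recursive update, sliding window, and L1 machinery of the paper are not reproduced.

      # Batch kernel LSTD with L2 regularization on a 50-state chain (illustration only).
      import numpy as np

      def k(s, dictionary, width=2.0):
          """Gaussian kernel features of state s against a fixed dictionary."""
          return np.exp(-((s - dictionary) ** 2) / (2.0 * width ** 2))

      n_states, gamma, lam = 50, 0.95, 1e-3
      dictionary = np.linspace(0, n_states - 1, 10)        # 10 fixed dictionary states

      rng = np.random.default_rng(0)
      A = np.zeros((10, 10)); b = np.zeros(10)
      s = 0
      for _ in range(5000):                                # random walk on the chain
          s_next = np.clip(s + rng.choice([-1, 1]), 0, n_states - 1)
          r = 1.0 if s_next == n_states - 1 else 0.0       # reward at the right end
          phi, phi_next = k(s, dictionary), k(s_next, dictionary)
          A += np.outer(phi, phi - gamma * phi_next)       # LSTD statistics
          b += phi * r
          s = s_next

      theta = np.linalg.solve(A + lam * np.eye(10), b)     # L2-regularized solution
      values = np.array([k(s, dictionary) @ theta for s in range(n_states)])
      print(values[[0, 25, 49]])                           # approximate state values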

  7. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization.

    PubMed

    Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms.

  8. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.

  9. Postlumpectomy Focal Brachytherapy for Simultaneous Treatment of Surgical Cavity and Draining Lymph Nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hrycushko, Brian A.; Li Shihong; Shi Chengyu

    2011-03-01

    Purpose: The primary objective was to investigate a novel focal brachytherapy technique using lipid nanoparticle (liposome)-carried β-emitting radionuclides (rhenium-186 [186Re]/rhenium-188 [188Re]) to simultaneously treat the postlumpectomy surgical cavity and draining lymph nodes. Methods and Materials: Cumulative activity distributions in the lumpectomy cavity and lymph nodes were extrapolated from small animal imaging and human lymphoscintigraphy data. Absorbed dose calculations were performed for lumpectomy cavities with spherical and ellipsoidal shapes and lymph nodes within human subjects by use of the dose point kernel convolution method. Results: Dose calculations showed that therapeutic dose levels within the lumpectomy cavity wall can cover 2- and 5-mm depths for 186Re and 188Re liposomes, respectively. The absorbed doses at 1 cm sharply decreased to only 1.3% to 3.7% of the doses at 2 mm for 186Re liposomes and 5 mm for 188Re liposomes. Concurrently, the draining sentinel lymph nodes would receive a high focal therapeutic absorbed dose, whereas the average dose to 1 cm of surrounding tissue received less than 1% of that within the nodes. Conclusions: Focal brachytherapy by use of 186Re/188Re liposomes was theoretically shown to be capable of simultaneously treating the lumpectomy cavity wall and draining sentinel lymph nodes with high absorbed doses while significantly lowering dose to surrounding healthy tissue. In turn, this allows for dose escalation to regions of higher probability of containing residual tumor cells after lumpectomy while reducing normal tissue complications.
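
    The dose point kernel convolution step can be sketched as a 3-D convolution of a cumulative-activity map with a radially symmetric dose kernel. The exponential kernel, voxel size, and activity shell below are placeholder assumptions, not the published 186Re/188Re kernels.

      # Absorbed dose via FFT convolution of activity with a dose point kernel.
      import numpy as np
      from scipy.signal import fftconvolve

      voxel_mm = 1.0
      grid = np.arange(-15, 16) * voxel_mm                 # 31^3-voxel kernel support
      X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
      r = np.sqrt(X**2 + Y**2 + Z**2) + 1e-6

      kernel = np.exp(-r / 1.5) / r**2                     # placeholder beta dose kernel
      kernel /= kernel.sum()

      activity = ((r > 8.0) & (r < 10.0)).astype(float)    # thin shell ~ cavity wall

      dose = fftconvolve(activity, kernel, mode="same")    # absorbed-dose map
      print(dose.max(), dose[15, 15, 15])                  # wall dose vs. cavity center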

  10. Measurement of Device Parameters Using Image Recovery Techniques in Large-Scale IC Devices

    NASA Technical Reports Server (NTRS)

    Scheick, Leif; Edmonds, Larry

    2004-01-01

    Devices that respond to radiation on a cell level will produce histograms showing the relative frequency of cell damage as a function of damage. The measured distribution is the convolution of distributions from radiation responses, measurement noise, and manufacturing parameters. A method of extracting device characteristics and parameters from measured distributions via mathematical and image subtraction techniques is described.

  11. Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Halicek, Martin; Lu, Guolan; Little, James V.; Wang, Xu; Patel, Mihir; Griffith, Christopher C.; El-Deiry, Mark W.; Chen, Amy Y.; Fei, Baowei

    2017-06-01

    Surgical cancer resection requires an accurate and timely diagnosis of the cancer margins in order to achieve successful patient remission. Hyperspectral imaging (HSI) has emerged as a useful, noncontact technique for acquiring spectral and optical properties of tissue. A convolutional neural network (CNN) classifier is developed to classify excised, squamous-cell carcinoma, thyroid cancer, and normal head and neck tissue samples using HSI. The CNN classification was validated by the manual annotation of a pathologist specialized in head and neck cancer. The preliminary results of 50 patients indicate the potential of HSI and deep learning for automatic tissue-labeling of surgical specimens of head and neck patients.

  12. ConvNetQuake: Convolutional Neural Network for Earthquake Detection and Location

    NASA Astrophysics Data System (ADS)

    Denolle, M.; Perol, T.; Gharbi, M.

    2017-12-01

    Over the last decades, the volume of seismic data has increased exponentially, creating a need for efficient algorithms to reliably detect and locate earthquakes. Today's most elaborate methods scan through the plethora of continuous seismic records, searching for repeating seismic signals. In this work, we leverage the recent advances in artificial intelligence and present ConvNetQuake, a highly scalable convolutional neural network for probabilistic earthquake detection and location from single stations. We apply our technique to study two years of induced seismicity in Oklahoma (USA). We detect 20 times more earthquakes than previously cataloged by the Oklahoma Geological Survey. Our detection algorithm is at least one order of magnitude faster than other established methods.

  13. Static facial expression recognition with convolution neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei

    2018-03-01

    Facial expression recognition is a currently active research topic in the fields of computer vision, pattern recognition and artificial intelligence. In this paper, we have developed a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset formed by the train, validation and test sets and fine-tune on the extended Cohn-Kanade database. In order to reduce the overfitting of the models, we utilized different techniques including dropout and batch normalization in addition to data augmentation. According to the experimental results, our CNN model has excellent classification performance and robustness for facial expression recognition.
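
    A minimal sketch of a CNN of the kind described is given below: 48x48 grayscale inputs (FER2013-style), seven emotion classes, with batch normalization, dropout, and a simple augmentation layer. The layer sizes are assumptions, not the authors' exact architecture.

      # Small Keras CNN with dropout, batch normalization, and flip augmentation.
      from tensorflow.keras import layers, models

      model = models.Sequential([
          layers.Input(shape=(48, 48, 1)),              # FER2013-style grayscale faces
          layers.RandomFlip("horizontal"),              # lightweight data augmentation
          layers.Conv2D(32, 3, padding="same", activation="relu"),
          layers.BatchNormalization(),
          layers.MaxPooling2D(),
          layers.Conv2D(64, 3, padding="same", activation="relu"),
          layers.BatchNormalization(),
          layers.MaxPooling2D(),
          layers.Flatten(),
          layers.Dense(128, activation="relu"),
          layers.Dropout(0.5),                          # reduce overfitting
          layers.Dense(7, activation="softmax"),        # seven emotion categories
      ])
      model.compile(optimizer="adam",
                    loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])
      model.summary()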

  14. Salient Feature Identification and Analysis using Kernel-Based Classification Techniques for Synthetic Aperture Radar Automatic Target Recognition

    DTIC Science & Technology

    2014-03-27

    and machine learning for a range of research including such topics as medical imaging [10] and handwriting recognition [11]. The type of feature...1989. [11] C. Bahlmann, B. Haasdonk, and H. Burkhardt, “Online handwriting recognition with support vector machines-a kernel approach,” in Eighth...International Workshop on Frontiers in Handwriting Recognition, pp. 49–54, IEEE, 2002. [12] C. Cortes and V. Vapnik, “Support-vector networks,” Machine

  15. Fast Interrupt Priority Management in Operating System Kernels

    DTIC Science & Technology

    1993-05-01

    We present results for the Mach 3.0 microkernel operating system, although the technique is applicable to other kernel architectures, both micro and...protection in the Mach 3.0 microkernel for several different processor architectures. For example, on the Omron Luna88k, we observed a 50% reduction in...general interrupt mask raise/lower pair within the Mach 3.0 microkernel on a variety of architectures.

  16. Kinetic Rate Kernels via Hierarchical Liouville-Space Projection Operator Approach.

    PubMed

    Zhang, Hou-Dao; Yan, YiJing

    2016-05-19

    Kinetic rate kernels in general multisite systems are formulated on the basis of a nonperturbative quantum dissipation theory, the hierarchical equations of motion (HEOM) formalism, together with the Nakajima-Zwanzig projection operator technique. The present approach exploits the HEOM-space linear algebra. The quantum non-Markovian site-to-site transfer rate can be faithfully evaluated via projected HEOM dynamics. The developed method is exact, as evident by the comparison to the direct HEOM evaluation results on the population evolution.

  17. Linked-cluster formulation of electron-hole interaction kernel in real-space representation without using unoccupied states.

    PubMed

    Bayne, Michael G; Scher, Jeremy A; Ellis, Benjamin H; Chakraborty, Arindam

    2018-05-21

    Electron-hole or quasiparticle representation plays a central role in describing electronic excitations in many-electron systems. For charge-neutral excitation, the electron-hole interaction kernel is the quantity of interest for calculating important excitation properties such as optical gap, optical spectra, electron-hole recombination and electron-hole binding energies. The electron-hole interaction kernel can be formally derived from the density-density correlation function using both Green's function and TDDFT formalism. The accurate determination of the electron-hole interaction kernel remains a significant challenge for precise calculations of optical properties in the GW+BSE formalism. From the TDDFT perspective, the electron-hole interaction kernel has been viewed as a path to systematic development of frequency-dependent exchange-correlation functionals. Traditional approaches, such as MBPT formalism, use unoccupied states (which are defined with respect to Fermi vacuum) to construct the electron-hole interaction kernel. However, the inclusion of unoccupied states has long been recognized as the leading computational bottleneck that limits the application of this approach for larger finite systems. In this work, an alternative derivation that avoids using unoccupied states to construct the electron-hole interaction kernel is presented. The central idea of this approach is to use explicitly correlated geminal functions for treating electron-electron correlation for both ground and excited state wave functions. Using this ansatz, it is derived using both diagrammatic and algebraic techniques that the electron-hole interaction kernel can be expressed only in terms of linked closed-loop diagrams. It is proved that the cancellation of unlinked diagrams is a consequence of linked-cluster theorem in real-space representation. The electron-hole interaction kernel derived in this work was used to calculate excitation energies in many-electron systems and results were found to be in good agreement with the EOM-CCSD and GW+BSE methods. The numerical results highlight the effectiveness of the developed method for overcoming the computational barrier of accurately determining the electron-hole interaction kernel to applications of large finite systems such as quantum dots and nanorods.

  18. Multiple Point Statistics algorithm based on direct sampling and multi-resolution images

    NASA Astrophysics Data System (ADS)

    Julien, S.; Renard, P.; Chugunova, T.

    2017-12-01

    Multiple Point Statistics (MPS) has become popular for more than a decade in Earth Sciences, because these methods allow the generation of random fields reproducing the highly complex spatial features given in a conceptual model, the training image, whereas classical geostatistics techniques based on bi-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists in borrowing patterns from the training image to populate a simulation grid. The latter is filled sequentially by visiting each of its nodes in a random order; the patterns, whose number of nodes is fixed, become narrower during the simulation process as the simulation grid becomes more densely informed. Hence, large-scale structures are captured at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics that are distinguishable at different scales in the training image, and thereby lose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel to the training image (convolution) results in a lower-resolution image, and by iterating this process, a pyramid of images depicting fewer details at each level is built, as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest resolution level, and then each level up to the finest resolution, conditioned on the level one rank coarser. This scheme helps reproduce the spatial structures at any scale of the training image and thus generate more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures. Indeed, these kinds of images often display typical structures at different scales and are well suited for MPS simulation techniques.
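
    The multi-resolution decomposition can be sketched as repeated Gaussian convolution and downsampling of the training image, producing a pyramid that is simulated from coarsest to finest level. The sigma, number of levels, downsampling factor, and placeholder training image below are assumptions.

      # Gaussian pyramid of a training image (coarsest level first).
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def build_pyramid(training_image, n_levels=3, sigma=1.0):
          pyramid = [training_image]
          for _ in range(n_levels):
              smoothed = gaussian_filter(pyramid[-1], sigma=sigma)   # Gaussian convolution
              pyramid.append(smoothed[::2, ::2])                     # lower-resolution level
          return pyramid[::-1]                                       # simulate coarse levels first

      rng = np.random.default_rng(0)
      ti = rng.random((256, 256))                 # placeholder training image
      for level, img in enumerate(build_pyramid(ti)):
          print(f"level {level}: {img.shape}")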

  19. Entanglement-assisted quantum convolutional coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  20. Incorporating deep learning with convolutional neural networks and position specific scoring matrices for identifying electron transport proteins.

    PubMed

    Le, Nguyen-Quoc-Khanh; Ho, Quang-Thai; Ou, Yu-Yen

    2017-09-05

    In recent years, deep learning has become a modern machine learning technique used in a variety of fields with state-of-the-art performance. Therefore, utilizing deep learning to enhance performance is also an important solution for the current bioinformatics field. In this study, we use deep learning via convolutional neural networks and position specific scoring matrices to identify electron transport proteins, which carry out an important molecular function in transmembrane proteins. Our deep learning method yields a precise model for identifying electron transport proteins, achieving a sensitivity of 80.3%, specificity of 94.4%, and accuracy of 92.3%, with an MCC of 0.71 on the independent dataset. The proposed technique can serve as a powerful tool for identifying electron transport proteins and can help biologists understand the function of the electron transport proteins. Moreover, this study provides a basis for further research that can enrich the field of applying deep learning in bioinformatics. © 2017 Wiley Periodicals, Inc.

  1. High-speed architecture for the decoding of trellis-coded modulation

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been an interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by reducing the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
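
    The add-compare-select recursion that such decoder architectures accelerate can be illustrated with a small Viterbi decoder for a rate-1/2, constraint-length-3 binary convolutional code (generators 7 and 5). This is a sketch of the plain binary case only, not the TCM decoder discussed in the report; the message and injected error are arbitrary.

      # Encode with a (7,5) convolutional code, flip one bit, and Viterbi-decode it.
      G = (0b111, 0b101)                              # generator polynomials (7, 5)

      def encode(bits):
          state, out = 0, []
          for u in bits:
              reg = (u << 2) | state                  # register holds [u, s1, s0]
              out += [bin(reg & g).count("1") % 2 for g in G]
              state = (reg >> 1) & 0b11               # new state = (u, s1)
          return out

      def viterbi_decode(received, n_bits):
          INF = float("inf")
          metrics = [0.0, INF, INF, INF]              # path metric per trellis state
          paths = [[], [], [], []]
          for t in range(n_bits):
              r = received[2 * t: 2 * t + 2]
              new_metrics, new_paths = [INF] * 4, [None] * 4
              for state in range(4):
                  if metrics[state] == INF:
                      continue
                  for u in (0, 1):                    # hypothesize each input bit
                      reg = (u << 2) | state
                      expected = [bin(reg & g).count("1") % 2 for g in G]
                      branch = sum(a != b for a, b in zip(r, expected))   # Hamming metric
                      nxt = (reg >> 1) & 0b11
                      m = metrics[state] + branch
                      if m < new_metrics[nxt]:        # add-compare-select
                          new_metrics[nxt] = m
                          new_paths[nxt] = paths[state] + [u]
              metrics, paths = new_metrics, new_paths
          return paths[min(range(4), key=lambda s: metrics[s])]

      message = [1, 0, 1, 1, 0, 0, 1, 0]
      codeword = encode(message)
      codeword[3] ^= 1                                # inject a single channel error
      print(viterbi_decode(codeword, len(message)) == message)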

  2. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
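
    The 16-bit CRC error-detection step can be sketched as below, assuming the CRC-16-CCITT polynomial x^16 + x^12 + x^5 + 1 (0x1021) with an all-ones preset, which is the variant commonly associated with CCSDS frames; the frame contents are a placeholder.

      # Bitwise CRC-16 computation and a simple check at the "receiver" side.
      def crc16_ccitt(data: bytes, poly: int = 0x1021, crc: int = 0xFFFF) -> int:
          for byte in data:
              crc ^= byte << 8
              for _ in range(8):                      # process one bit at a time
                  if crc & 0x8000:
                      crc = ((crc << 1) ^ poly) & 0xFFFF
                  else:
                      crc = (crc << 1) & 0xFFFF
          return crc

      frame = b"telemetry frame payload"
      checksum = crc16_ccitt(frame)
      print(hex(checksum))
      # The receiver recomputes the CRC over the payload and compares it with the
      # transmitted checksum; any mismatch flags the frame as corrupted.
      print(crc16_ccitt(frame) == checksum)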

  3. Illustrating Surface Shape in Volume Data via Principal Direction-Driven 3D Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria

    1997-01-01

    The three-dimensional shape and relative depth of a smoothly curving layered transparent surface may be communicated particularly effectively when the surface is artistically enhanced with sparsely distributed opaque detail. This paper describes how the set of principal directions and principal curvatures specified by local geometric operators can be understood to define a natural 'flow' over the surface of an object, and can be used to guide the placement of the lines of a stroke texture that seeks to represent 3D shape information in a perceptually intuitive way. The driving application for this work is the visualization of layered isovalue surfaces in volume data, where the particular identity of an individual surface is not generally known a priori and observers will typically wish to view a variety of different level surfaces from the same distribution, superimposed over underlying opaque structures. By advecting an evenly distributed set of tiny opaque particles, and the empty space between them, via 3D line integral convolution through the vector field defined by the principal directions and principal curvatures of the level surfaces passing through each gridpoint of a 3D volume, it is possible to generate a single scan-converted solid stroke texture that may intuitively represent the essential shape information of any level surface in the volume. To generate longer strokes over more highly curved areas, where the directional information is both most stable and most relevant, and to simultaneously downplay the visual impact of directional information in the flatter regions, one may dynamically redefine the length of the filter kernel according to the magnitude of the maximum principal curvature of the level surface at the point around which it is applied.

  4. Is Kinesio Taping to Generate Skin Convolutions Effective for Increasing Local Blood Circulation?

    PubMed Central

    Yang, Jae-Man; Lee, Jung-Hoon

    2018-01-01

    Background It is unclear whether traditional application of Kinesio taping, which produces wrinkles in the skin, is effective for improving blood circulation. This study investigated local skin temperature changes after the application of an elastic therapeutic tape using convolution and non-convolution taping methods (CTM/NCTM). Material/Methods Twenty-eight pain-free men underwent CTM and NCTM randomly applied to the right and left sides of the lower back. Using infrared thermography, skin temperature was measured before, immediately after application, 5 min later, 15 min later, and after the removal of the tape. Results Both CTM and NCTM showed a slight, but significant, decrease in skin temperature for up to 5 min. The skin temperature at 15 min and after the removal of the tape was not significantly different from the initial temperature for CTM and NCTM. There were also no significant differences in the skin temperatures between CTM and NCTM. Conclusions Our findings do not support a therapeutic effect of wrinkling the skin with elastic tape application as a technique to increase local blood flow. PMID:29332101

  5. Is Kinesio Taping to Generate Skin Convolutions Effective for Increasing Local Blood Circulation?

    PubMed

    Yang, Jae-Man; Lee, Jung-Hoon

    2018-01-14

    BACKGROUND It is unclear whether traditional application of Kinesio taping, which produces wrinkles in the skin, is effective for improving blood circulation. This study investigated local skin temperature changes after the application of an elastic therapeutic tape using convolution and non-convolution taping methods (CTM/NCTM). MATERIAL AND METHODS Twenty-eight pain-free men underwent CTM and NCTM randomly applied to the right and left sides of the lower back. Using infrared thermography, skin temperature was measured before, immediately after application, 5 min later, 15 min later, and after the removal of the tape. RESULTS Both CTM and NCTM showed a slight, but significant, decrease in skin temperature for up to 5 min. The skin temperature at 15 min and after the removal of the tape was not significantly different from the initial temperature for CTM and NCTM. There were also no significant differences in the skin temperatures between CTM and NCTM. CONCLUSIONS Our findings do not support a therapeutic effect of wrinkling the skin with elastic tape application as a technique to increase local blood flow.

  6. Automatic detection of aflatoxin contaminated corn kernels using dual-band imagery

    NASA Astrophysics Data System (ADS)

    Ononye, Ambrose E.; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert L.; Cleveland, Thomas E.

    2009-05-01

    Aflatoxin is a mycotoxin predominantly produced by Aspergillus flavus and Aspergillus parasiticus fungi that grow naturally in corn, peanuts and a wide variety of other grain products. Corn, like other grains, is used as food for humans and feed for animals. It is known that aflatoxin is carcinogenic; therefore, ingestion of corn contaminated with the toxin can lead to very serious health problems, such as liver damage, if the level of the contamination is high. The US Food and Drug Administration (FDA) has strict guidelines for permissible levels in grain products for both humans and animals. The conventional approach used to determine these contamination levels is destructive and invasive, requiring corn kernels to be ground and then chemically analyzed. Unfortunately, each of the analytical methods can take several hours, depending on the quantity, to yield a result. The development of high spectral and spatial resolution imaging sensors has created an opportunity for hyperspectral image analysis to be employed for aflatoxin detection. However, this brings about a high dimensionality problem as a setback. In this paper, we propose a technique that automatically detects aflatoxin contaminated corn kernels by using dual-band imagery. The method exploits the fluorescence emission spectra from corn kernels captured under 365 nm ultra-violet light excitation. Our approach could lead to a non-destructive and non-invasive way of quantifying the levels of aflatoxin contamination. The preliminary results shown here demonstrate the potential of our technique for aflatoxin detection.

  7. Optimisation of shape kernel and threshold in image-processing motion analysers.

    PubMed

    Pedrocchi, A; Baroni, G; Sada, S; Marcon, E; Pedotti, A; Ferrigno, G

    2001-09-01

    The aim of the work is to optimise the image processing of a motion analyser. This is to improve accuracy, which is crucial for neurophysiological and rehabilitation applications. A new motion analyser, ELITE-S2, for installation on the International Space Station is described, with the focus on image processing. Important improvements are expected in the hardware of ELITE-S2 compared with ELITE and previous versions (ELITE-S and Kinelite). The core algorithm for marker recognition was based on the current ELITE version, using the cross-correlation technique. This technique was based on the matching of the expected marker shape, the so-called kernel, with image features. Optimisation of the kernel parameters was achieved using a genetic algorithm, taking into account noise rejection and accuracy. Optimisation was achieved by performing tests on six highly precise grids (with marker diameters ranging from 1.5 to 4 mm), representing all allowed marker image sizes, and on a noise image. The results of comparing the optimised kernels and the current ELITE version showed a great improvement in marker recognition accuracy, while noise rejection characteristics were preserved. An average increase in marker co-ordinate accuracy of +22% was achieved, corresponding to a mean accuracy of 0.11 pixel in comparison with 0.14 pixel, measured over all grids. An improvement of +37%, corresponding to an improvement from 0.22 pixel to 0.14 pixel, was observed over the grid with the biggest markers.

  8. Auto-Context Convolutional Neural Network (Auto-Net) for Brain Extraction in Magnetic Resonance Imaging.

    PubMed

    Mohseni Salehi, Seyed Sadegh; Erdogmus, Deniz; Gholipour, Ali

    2017-11-01

    Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and the robustness of brain extraction, therefore, are crucial for the accuracy of the entire brain analysis process. The state-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark data sets, namely, LPBA40 and OASIS, in which we obtained the Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), where the other methods performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.

  9. Dynamic frame resizing with convolutional neural network for efficient video compression

    NASA Astrophysics Data System (ADS)

    Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon

    2017-09-01

    In the past, video codecs such as VC-1 and H.263 used a technique of encoding reduced-resolution video and restoring the original resolution at the decoder to improve coding efficiency. The techniques of VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, these techniques have not been widely used due to limited performance improvements that operate well only under specific conditions. In this paper, a video frame resizing (reduce/restore) technique based on machine learning is proposed to improve coding efficiency. The proposed method generates low-resolution video with a convolutional neural network (CNN) at the encoder and reconstructs the original resolution with a CNN at the decoder. The proposed method shows improved subjective performance on the high-resolution videos that dominate current consumption. In order to assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among many subjective measurement tools, was used as the subjective metric. Moreover, to assess general performance, diverse bitrates were tested. Experimental results showed that the BD-rate based on VMAF was improved by about 51% compared to conventional HEVC. In particular, VMAF values were significantly improved at low bitrates. Also, when the method was subjectively tested, it had better subjective visual quality at similar bit rates.

  10. A deconvolution technique to correct deep images of galaxies from instrumental scattered light

    NASA Astrophysics Data System (ADS)

    Karabal, E.; Duc, P.-A.; Kuntschner, H.; Chanial, P.; Cuillandre, J.-C.; Gwyn, S.

    2017-05-01

    Deep imaging of the diffuse light that is emitted by stellar fine structures and outer halos around galaxies is often now used to probe their past mass assembly. Because the extended halos survive longer than the relatively fragile tidal features, they trace more ancient mergers. We use images that reach surface brightness limits as low as 28.5-29 mag arcsec^-2 (g-band) to obtain light and color profiles up to 5-10 effective radii of a sample of nearby early-type galaxies. These were acquired with MegaCam as part of the CFHT MATLAS large programme. These profiles may be compared to those produced using simulations of galaxy formation and evolution, once corrected for instrumental effects. Indeed they can be heavily contaminated by the scattered light caused by internal reflections within the instrument. In particular, the nucleus of galaxies generates artificial flux in the outer halo, which has to be precisely subtracted. We present a deconvolution technique to remove the artificial halos that makes use of very large kernels. The technique, which is based on PyOperators, is more time efficient than the model-convolution methods that are also used for that purpose. This is especially the case for galaxies with complex structures that are hard to model. Having a good knowledge of the point spread function (PSF), including its outer wings, is critical for the method. A database of MegaCam PSF models corresponding to different seeing conditions and bands was generated directly from the deep images. We show that the difference in the PSFs in different bands causes artificial changes in the color profiles, in particular a reddening of the outskirts of galaxies having a bright nucleus. The method is validated with a set of simulated images and applied to three representative test cases: NGC 3599, NGC 3489, and NGC 4274, two of which exhibit a prominent ghost halo that the method successfully removes. The library of PSFs (FITS files) is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/601/A86

  11. Prostate cancer detection using machine learning techniques by employing combination of features extracting strategies.

    PubMed

    Hussain, Lal; Ahmed, Adeel; Saeed, Sharjil; Rathore, Saima; Awan, Imtiaz Ahmed; Shah, Saeed Arif; Majid, Abdul; Idris, Adnan; Awan, Anees Ahmed

    2018-02-06

    Prostate cancer is the second leading cause of cancer deaths among men. Early detection can effectively reduce the rate of mortality caused by prostate cancer. Due to the high resolution and multiresolution nature of prostate MRIs, proper diagnostic systems and tools are required. In the past, researchers developed computer-aided diagnosis (CAD) systems that help the radiologist to detect abnormalities. In this research paper, we have employed novel machine learning techniques such as a Bayesian approach, support vector machine (SVM) kernels (polynomial, radial basis function (RBF) and Gaussian), and a decision tree for detecting prostate cancer. Moreover, different feature extracting strategies are proposed to improve the detection performance. The feature extracting strategies are based on texture, morphological, scale invariant feature transform (SIFT), and elliptic Fourier descriptor (EFD) features. The performance was evaluated based on single features as well as combinations of features using machine learning classification techniques. Cross validation (jack-knife k-fold) was performed, and performance was evaluated in terms of the receiver operating characteristic (ROC) curve, specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV), and false positive rate (FPR). Based on the single feature extracting strategies, the SVM Gaussian kernel gives the highest accuracy of 98.34% with an AUC of 0.999, while, using combinations of feature extracting strategies, the SVM Gaussian kernel with texture + morphological and EFDs + morphological features gives the highest accuracy of 99.71% and an AUC of 1.00.

  12. Using Deep Learning for Tropical Cyclone Intensity Estimation

    NASA Astrophysics Data System (ADS)

    Miller, J.; Maskey, M.; Berendes, T.

    2017-12-01

    Satellite-based techniques are the primary approach to estimating tropical cyclone (TC) intensity. Tropical cyclone warning centers worldwide still apply variants of the Dvorak technique for such estimations that include visual inspection of the satellite images. The National Hurricane Center (NHC) estimates about 10-20% uncertainty in its post analyses when only satellite-based estimates are available. The success of the Dvorak technique proves that spatial patterns in infrared (IR) imagery strongly relate to TC intensity. With the ever-increasing quality and quantity of satellite observations of TCs, deep learning techniques designed to excel at pattern recognition have become more relevant in this area of study. In our current study, we aim to provide a fully objective approach to TC intensity estimation by utilizing deep learning in the form of a convolutional neural network trained to predict TC intensity (maximum sustained wind speed) using IR satellite imagery. Large amounts of training data are needed to train a convolutional neural network, so we use GOES IR images from historical tropical storms from the Atlantic and Pacific basins spanning years 2000 to 2015. Images are labeled using a special subset of the HURDAT2 dataset restricted to time periods with airborne reconnaissance data available in order to improve the quality of the HURDAT2 data. Results and the advantages of this technique are to be discussed.

  13. Bose–Einstein condensation temperature of finite systems

    NASA Astrophysics Data System (ADS)

    Xie, Mi

    2018-05-01

    In studies of the Bose–Einstein condensation of ideal gases in finite systems, the divergence problem usually arises in the equation of state. In this paper, we present a technique based on the heat kernel expansion and zeta function regularization to solve the divergence problem, and obtain the analytical expression of the Bose–Einstein condensation temperature for general finite systems. The result is represented by the heat kernel coefficients, where the asymptotic energy spectrum of the system is used. Besides the general case, for systems with exact spectra, e.g. ideal gases in an infinite slab or in a three-sphere, the sums of the spectra can be obtained exactly and the calculation of corrections to the critical temperatures is more direct. For a system confined in a bounded potential, the form of the heat kernel is different from the usual heat kernel expansion. We show that as long as the asymptotic form of the global heat kernel can be found, our method works. For Bose gases confined in three- and two-dimensional isotropic harmonic potentials, we obtain the higher-order corrections to the usual results of the critical temperatures. Our method can also be applied to the problem of generalized condensation, and we give the correction of the boundary on the second critical temperature in a highly anisotropic slab.

  14. Classification of Microarray Data Using Kernel Fuzzy Inference System

    PubMed Central

    Kumar Rath, Santanu

    2014-01-01

    The DNA microarray classification technique has gained more popularity in both research and practice. In real data analysis, such as microarray data, the dataset contains a huge number of insignificant and irrelevant features that tend to lose useful information. Classes with high relevance and feature sets with high significance are generally referred for the selected features, which determine the samples classification into their respective classes. In this paper, kernel fuzzy inference system (K-FIS) algorithm is applied to classify the microarray data (leukemia) using t-test as a feature selection method. Kernel functions are used to map original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ through a mathematical process called the kernel trick. This paper also presents a comparative study for classification using K-FIS along with support vector machine (SVM) for different set of features (genes). Performance parameters available in the literature such as precision, recall, specificity, F-measure, ROC curve, and accuracy are considered to analyze the efficiency of the classification model. From the proposed approach, it is apparent that K-FIS model obtains similar results when compared with SVM model. This is an indication that the proposed approach relies on kernel function. PMID:27433543
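
    A small illustration of the kernel trick mentioned above is given below: the RBF kernel evaluates inner products in the implicit feature space defined by ϕ without ever forming ϕ explicitly. The gene-expression-like data and the gamma value are assumptions for illustration.

      # RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2) = <phi(x_i), phi(x_j)>.
      import numpy as np

      def rbf_kernel_matrix(X, gamma=0.1):
          sq_norms = (X ** 2).sum(axis=1)
          sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
          return np.exp(-gamma * np.clip(sq_dists, 0, None))

      rng = np.random.default_rng(0)
      X = rng.normal(size=(10, 50))          # 10 samples, 50 selected genes (toy data)
      K = rbf_kernel_matrix(X)
      print(K.shape, np.allclose(K, K.T), np.allclose(np.diag(K), 1.0))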

  15. Kernel-Based Discriminant Techniques for Educational Placement

    ERIC Educational Resources Information Center

    Lin, Miao-hsiang; Huang, Su-yun; Chang, Yuan-chin

    2004-01-01

    This article considers the problem of educational placement. Several discriminant techniques are applied to a data set from a survey project of science ability. A profile vector for each student consists of five science-educational indicators. The students are intended to be placed into three reference groups: advanced, regular, and remedial.…

  16. Convolution-based estimation of organ dose in tube current modulated CT

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (h_Organ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)_organ,convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)_organ,convolution with the organ dose coefficients (h_Organ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using the Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and the dose simulated using the TCM Monte Carlo program was quantified. We further compared the convolution-based organ dose estimation method with two other strategies with different approaches of quantifying the irradiation field. The proposed convolution-based estimation method showed good accuracy with respect to the organ dose simulated using the TCM Monte Carlo simulation. The average percentage error (normalized by CTDIvol) was generally within 10% across all organs and modulation profiles, except for organs located in the pelvic and shoulder regions. This study developed an improved method that accurately quantifies the irradiation field under TCM scans. The results suggested that organ dose could be estimated in real-time both prospectively (with the localizer information only) and retrospectively (with acquired CT data).
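
    A much simplified sketch of the convolution-based estimate is shown below: a z-dependent CTDIvol profile derived from the TCM tube current is convolved with a dose-spread kernel, weighted by the organ's z-distribution to give (CTDIvol)_organ,convolution, and scaled by the organ dose coefficient h_Organ. All profiles, the kernel width, and the h_Organ value are placeholder assumptions.

      # Convolution of a TCM dose profile with a spread kernel, then organ weighting.
      import numpy as np

      z = np.arange(0, 60)                                   # slice positions (cm)
      ctdi_profile = 10 + 4 * np.sin(z / 8.0)                # placeholder TCM CTDIvol(z)

      kernel_z = np.arange(-10, 11)
      spread = np.exp(-np.abs(kernel_z) / 3.0)               # placeholder scatter kernel
      spread /= spread.sum()

      regional_ctdi = np.convolve(ctdi_profile, spread, mode="same")

      organ_dist = np.zeros_like(z, dtype=float)             # organ spanning slices 20-29
      organ_dist[20:30] = 1.0
      organ_dist /= organ_dist.sum()

      ctdi_organ_conv = float(organ_dist @ regional_ctdi)    # (CTDIvol)_organ,convolution
      h_organ = 1.4                                          # assumed dose coefficient
      print("organ dose estimate:", h_organ * ctdi_organ_conv, "mGy")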

  17. Using deep learning in image hyper spectral segmentation, classification, and detection

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Su, Zhenyu

    2018-02-01

    Recent years have shown that deep learning neural networks are a valuable tool in the field of computer vision. Deep learning methods can be used in remote sensing applications such as land cover classification, vehicle detection in satellite images, and hyperspectral image classification. This paper addresses the use of deep learning artificial neural networks in satellite image segmentation. Image segmentation plays an important role in image processing. Remote sensing images often show large hue differences, which results in poor display of the images in a VR environment. Image segmentation is a preprocessing technique applied to the original images that splits an image into many parts of different hue so that the color can be unified. Several computational models based on supervised, unsupervised, parametric, and probabilistic region-based image segmentation techniques have been proposed. Recently, a machine learning technique known as deep learning with convolutional neural networks has been widely used for the development of efficient and automatic image segmentation models. In this paper, we focus on the study of deep convolutional neural networks and their variants for automatic image segmentation rather than on traditional image segmentation strategies.

  18. Solutions of evolution equations associated to infinite-dimensional Laplacian

    NASA Astrophysics Data System (ADS)

    Ouerdiane, Habib

    2016-05-01

    We study an evolution equation associated with the integer power of the Gross Laplacian ΔGp and a potential function V on an infinite-dimensional space. The initial condition is a generalized function. The main technique we use is the representation of the Gross Laplacian as a convolution operator. This representation enables us to apply the convolution calculus on a suitable distribution space to obtain the explicit solution of the perturbed evolution equation. Our results generalize those previously obtained by Hochberg [K. J. Hochberg, Ann. Probab. 6 (1978) 433.] in the one-dimensional case with V=0, as well as by Barhoumi-Kuo-Ouerdiane for the case p=1 (See Ref. [A. Barhoumi, H. H. Kuo and H. Ouerdiane, Soochow J. Math. 32 (2006) 113.]).

  19. TH-C-BRD-04: Beam Modeling and Validation with Triple and Double Gaussian Dose Kernel for Spot Scanning Proton Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Takayanagi, T; Fujii, Y

    2014-06-15

    Purpose: To present the validity of our beam modeling with double and triple Gaussian dose kernels for spot scanning proton beams in Nagoya Proton Therapy Center. This study investigates the conformance between the measurements and calculation results in absolute dose with two types of beam kernel. Methods: A dose kernel is one of the important input data required for the treatment planning software. The dose kernel is the 3D dose distribution of an infinitesimal pencil beam of protons in water and consists of integral depth doses and lateral distributions. We have adopted double and triple Gaussian models as the lateral distribution in order to take account of the large angle scattering due to nuclear reactions by fitting simulated in-water lateral dose profiles for a needle proton beam at various depths. The fitted parameters were interpolated as a function of depth in water and were stored as a separate look-up table for each beam energy. The process of beam modeling is based on the method of MDACC [X.R.Zhu 2013]. Results: From the comparison between the absolute doses calculated by the double Gaussian model and those measured at the center of the SOBP, the difference increases up to 3.5% in the high-energy region because the large angle scattering due to nuclear reactions is not sufficiently considered at intermediate depths in the double Gaussian model. In the case of the triple Gaussian dose kernels, the measured absolute dose at the center of the SOBP agrees with the calculation within ±1% regardless of the SOBP width and maximum range. Conclusion: We have demonstrated the beam modeling results of dose distributions employing double and triple Gaussian dose kernels. The treatment planning system with the triple Gaussian dose kernel has been successfully verified and applied to patient treatment with a spot scanning technique in Nagoya Proton Therapy Center.
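
    The lateral-profile fitting step can be sketched as fitting a sum of three Gaussians to a measured (here synthetic) profile. The synthetic profile, initial guesses, and amplitudes below are assumptions for illustration only, not the clinical beam data.

      # Triple-Gaussian fit of a lateral dose profile with scipy's curve_fit.
      import numpy as np
      from scipy.optimize import curve_fit

      def triple_gaussian(x, a1, s1, a2, s2, a3, s3):
          return (a1 * np.exp(-x**2 / (2 * s1**2))
                  + a2 * np.exp(-x**2 / (2 * s2**2))
                  + a3 * np.exp(-x**2 / (2 * s3**2)))

      x = np.linspace(-50, 50, 401)                       # lateral position (mm)
      # Synthetic "measured" profile: narrow core, mid halo, broad nuclear tail
      profile = triple_gaussian(x, 1.0, 4.0, 0.05, 12.0, 0.005, 30.0)
      profile += np.random.default_rng(0).normal(0, 1e-4, x.size)

      p0 = [1.0, 5.0, 0.1, 10.0, 0.01, 25.0]              # initial parameter guesses
      params, _ = curve_fit(triple_gaussian, x, profile, p0=p0)
      print("fitted sigmas (mm):", params[1], params[3], params[5])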

  20. Multidimensional deconvolution of optical microscope and ultrasound imaging using adaptive least-mean-square (LMS) inverse filtering

    NASA Astrophysics Data System (ADS)

    Sapia, Mark Angelo

    2000-11-01

    Three-dimensional microscope images typically suffer from reduced resolution due to the effects of convolution, optical aberrations and out-of-focus blurring. Two-dimensional ultrasound images are also degraded by convolutional blurring and various sources of noise. Speckle noise is a major problem in ultrasound images. In microscopy and ultrasound, various methods of digital filtering have been used to improve image quality. Several methods of deconvolution filtering have been used to improve resolution by reversing the convolutional effects, many of which are based on regularization techniques and non-linear constraints. The technique discussed here is a unique linear filter for deconvolving 3D fluorescence microscopy or 2D ultrasound images. The process is to solve for the filter completely in the spatial domain using an adaptive algorithm to converge to an optimum solution for de-blurring and resolution improvement. There are two key advantages of using an adaptive solution: (1) it efficiently solves for the filter coefficients by taking into account all sources of noise and degraded resolution at the same time, and (2) it achieves near-perfect convergence to the ideal linear deconvolution filter. This linear adaptive technique has other advantages, such as avoiding artifacts of frequency-domain transformations and concurrent adaptation to suppress noise. Ultimately, this approach results in better signal-to-noise characteristics with virtually no edge-ringing. Many researchers have not adopted linear techniques because of poor convergence, noise instability and negative valued data in the results. The methods presented here overcome many of these well-documented disadvantages and provide results that clearly out-perform other linear methods and may also out-perform regularization and constrained algorithms. In particular, the adaptive solution is most responsible for overcoming the poor performance associated with linear techniques. This linear adaptive approach to deconvolution is demonstrated with results of restoring blurred phantoms for both microscopy and ultrasound and restoring 3D microscope images of biological cells and 2D ultrasound images of human subjects (courtesy of General Electric and Diasonics, Inc.).
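
    A minimal 1-D sketch of adaptive LMS inverse filtering is shown below: an FIR filter is adapted so that filtering the blurred signal reproduces a known reference, converging toward a deconvolution filter. The blur kernel, filter length, and step size are assumptions; the dissertation applies the same idea in 2-D and 3-D.

      # 1-D LMS adaptive inverse (deconvolution) filter on a synthetic blurred signal.
      import numpy as np

      rng = np.random.default_rng(0)
      n, taps, mu = 4000, 31, 0.01

      d = rng.normal(size=n)                          # reference ("sharp") signal
      psf = np.array([0.25, 0.5, 0.25])               # simple blur kernel
      x = np.convolve(d, psf, mode="same")            # blurred observation
      x += 0.01 * rng.normal(size=n)                  # measurement noise

      w = np.zeros(taps)                              # adaptive inverse filter
      delay = taps // 2
      for i in range(taps - 1, n):
          window = x[i - taps + 1: i + 1][::-1]       # most recent samples first
          y = w @ window                              # filter output
          e = d[i - delay] - y                        # error vs. delayed reference
          w += mu * e * window                        # LMS coefficient update

      restored = np.convolve(x, w, mode="same")
      print("blurred MSE:  ", np.mean((x - d) ** 2))
      print("restored MSE: ", np.mean((restored - d) ** 2))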

  1. An SVM-based solution for fault detection in wind turbines.

    PubMed

    Santos, Pedro; Villa, Luisa F; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-03-09

    Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines for the diagnosis of mechanical faults in their mechanical transmission chain is insufficient. A successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, together with electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that the linear-kernel SVM outperforms the other kernels and the ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of the linear SVM is also experimentally analyzed, leading to the conclusion that this data acquisition technique generates linearly separable datasets.

  2. Transient infrared spectroscopy for detection of toxigenic fungi in corn: potential for on-line evaluation.

    PubMed

    Gordon, S H; Jones, R W; McClelland, J F; Wicklow, D T; Greene, R V

    1999-12-01

    An urgent need for rapid sensors to detect contamination of food grains by toxigenic fungi such as Aspergillus flavus prompted research and development of Fourier transform infrared photoacoustic spectroscopy (FTIR-PAS) as a highly sensitive probe for fungi growing on the surfaces of individual corn kernels. However, the photoacoustic technique has limited potential for screening bulk corn because currently available photoacoustic detectors can accommodate only a single intact kernel at a time. Transient infrared spectroscopy (TIRS), on the other hand, is a promising new technique that can acquire analytically useful infrared spectra from a moving mass of solid materials. Therefore, the potential of TIRS for on-line, noncontact detection of A. flavus contamination in a moving bed of corn kernels was explored. Early test results based on visual inspection of TIRS spectral differences predict an 85% or 95% success rate in distinguishing healthy corn from grain infected with A. flavus. Four unique infrared spectral features which identified infected corn in FTIR-PAS were also found to be diagnostic in TIRS. Although the technology is still in its infancy, the preliminary results indicate that TIRS is a potentially effective screening method for bulk quantities of corn grain.

  3. Santalbic acid from quandong kernels and oil fed to rats affects kidney and liver P450.

    PubMed

    Jones, G P; Watson, T G; Sinclair, A J; Birkett, A; Dunt, N; Nair, S S; Tonkin, S Y

    1999-09-01

    Kernels of the plant Santalum acuminatum (quandong) are eaten as Australian 'bush foods'. They are rich in oil and contain relatively large amounts of the acetylenic fatty acid, santalbic acid (trans-11-octadecen-9-ynoic acid), whose chemical structure is unlike that of normal dietary fatty acids. When rats were fed high fat diets in which oil from quandong kernels supplied 50% of dietary energy, the proportion of santalbic acid absorbed was more than 90%. Feeding quandong oil elevated not only total hepatic cytochrome P450 but also the cytochrome P450 4A subgroup of enzymes as shown by a specific immunoblotting technique. A purified methyl santalbate preparation isolated from quandong oil was fed to rats at 9% of dietary energy for 4 days and this also elevated cytochrome P450 4A in both kidney and liver microsomes in comparison with methyl esters from canola oil. Santalbic acid appears to be metabolized differently from the usual dietary fatty acids and the consumption of oil from quandong kernels may cause perturbations in normal fatty acid biochemistry.

  4. On the solution of integral equations with a generalized Cauchy kernel

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1987-01-01

    A numerical technique is developed analytically to solve a class of singular integral equations occurring in mixed boundary-value problems for nonhomogeneous elastic media with discontinuities. The approach of Kaya and Erdogan (1987) is extended to treat equations with generalized Cauchy kernels, reformulating the boundary-value problems in terms of potentials as the unknown functions. The numerical implementation of the solution is discussed, and results for an epoxy-Al plate with a crack terminating at the interface and loading normal to the crack are presented in tables.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heroux, Michael Allen; Marker, Bryan

    This report summarizes the progress made as part of a one year lab-directed research and development (LDRD) project to fund the research efforts of Bryan Marker at the University of Texas at Austin. The goal of the project was to develop new techniques for automatically tuning the performance of dense linear algebra kernels. These kernels often represent the majority of computational time in an application. The primary outcome from this work is a demonstration of the value of model driven engineering as an approach to accurately predict and study performance trade-offs for dense linear algebra computations.

  6. Azadirachtin derivatives from seed kernels of Azadirachta excelsa.

    PubMed

    Kanokmedhakul, Somdej; Kanokmedhakul, Kwanjai; Prajuabsuk, Thirada; Panichajakul, Sanha; Panyamee, Piyanan; Prabpai, Samran; Kongsaeree, Palangpon

    2005-07-01

    Three new azadirachtin derivatives, named azadirachtins O-Q (1-3), along with the known azadirachtin B (4), azadirachtin L (5), azadirachtin M (6), 11alpha-azadirachtin H (7), 11beta-azadirachtin H (8), and azadirachtol (9), were isolated from seed kernels of Azadirachta excelsa. Their structures were established by spectroscopic techniques, and the structure of 3 was confirmed by X-ray analysis. Compounds 1-7 and 9 exhibited toxicity to the diamondback moth (Plutella xylostella), with an LD50 of 0.75-1.92 microg/g body weight, in 92 h.

  7. Tweaked residual convolutional network for face alignment

    NASA Astrophysics Data System (ADS)

    Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu

    2017-08-01

    We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module predicts the landmarks quickly but accurately enough as a preliminary estimate, by taking a low-resolution version of the detected face holistically as the input. The following Residual Convolutional Network (RCN) module progressively refines the landmarks by taking as input the local patch extracted around each predicted landmark; in particular, this allows the Convolutional Neural Network (CNN) to extract local shape-indexed features to fine-tune the landmark positions. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.

  8. Proteomic profiling and pathway analysis of the response of rat renal proximal convoluted tubules to metabolic acidosis

    PubMed Central

    Schauer, Kevin L.; Freund, Dana M.; Prenni, Jessica E.

    2013-01-01

    Metabolic acidosis is a relatively common pathological condition that is defined as a decrease in blood pH and bicarbonate concentration. The renal proximal convoluted tubule responds to this condition by increasing the extraction of plasma glutamine and activating ammoniagenesis and gluconeogenesis. The combined processes increase the excretion of acid and produce bicarbonate ions that are added to the blood to partially restore acid-base homeostasis. Only a few cytosolic proteins, such as phosphoenolpyruvate carboxykinase, have been determined to play a role in the renal response to metabolic acidosis. Therefore, further analysis was performed to better characterize the response of the cytosolic proteome. Proximal convoluted tubule cells were isolated from rat kidney cortex at various times after onset of acidosis and fractionated to separate the soluble cytosolic proteins from the remainder of the cellular components. The cytosolic proteins were analyzed using two-dimensional liquid chromatography and tandem mass spectrometry (MS/MS). Spectral counting along with average MS/MS total ion current were used to quantify temporal changes in relative protein abundance. In all, 461 proteins were confidently identified, of which 24 exhibited statistically significant changes in abundance. To validate these techniques, several of the observed abundance changes were confirmed by Western blotting. Data from the cytosolic fractions were then combined with previous proteomic data, and pathway analyses were performed to identify the primary pathways that are activated or inhibited in the proximal convoluted tubule during the onset of metabolic acidosis. PMID:23804448

  9. Source imaging of potential fields through a matrix space-domain algorithm

    NASA Astrophysics Data System (ADS)

    Baniamerian, Jamaledin; Oskooi, Behrooz; Fedi, Maurizio

    2017-01-01

    Imaging of potential fields yields a fast 3D representation of the source distribution of potential fields. Imaging methods are all based on multiscale methods allowing the source parameters of potential fields to be estimated from a simultaneous analysis of the field at various scales or, in other words, at many altitudes. Accuracy in performing upward continuation and differentiation of the field therefore has a key role for this class of methods. We here describe an accurate method for performing upward continuation and vertical differentiation in the space domain. We perform a direct discretization of the integral equations for upward continuation and the Hilbert transform; from these equations we then define matrix operators performing the transformation, which are symmetric (upward continuation) or anti-symmetric (differentiation), respectively. Thanks to these properties, just the first row of the matrices needs to be computed, which decreases the computational cost dramatically. Our approach allows a simple procedure, with the advantage of not involving large data extension or tapering, as would instead be required in the case of Fourier-domain computation. It also allows level-to-drape upward continuation and a stable differentiation at high frequencies; finally, the upward continuation and differentiation kernels may be merged into a single kernel. The accuracy of our approach is shown to be important for multi-scale algorithms, such as the continuous wavelet transform or the DEXP (depth from extreme points) method, because border errors, which tend to propagate largely at the largest scales, are radically reduced. The application of our algorithm to synthetic and real-case gravity and magnetic data sets confirms the accuracy of our space-domain strategy over FFT algorithms and standard convolution procedures.
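
    A minimal one-dimensional sketch of the space-domain idea: the upward-continuation operator is a symmetric (Toeplitz) matrix, so only its first row needs to be evaluated. The half-space Poisson kernel, grid spacing and test anomaly below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: space-domain upward continuation of a 1-D profile via a symmetric
# Toeplitz operator built from a single row (illustrative assumptions throughout).
import numpy as np
from scipy.linalg import toeplitz

dx, n, h = 100.0, 512, 500.0                      # station spacing, samples, continuation height (m)
x = (np.arange(n) - n // 2) * dx
field = np.exp(-(x / 2000.0) ** 2)                # hypothetical anomaly at the lower level

lags = np.arange(n) * dx                          # distances appearing in the first row
row = (1.0 / np.pi) * h / (lags**2 + h**2) * dx   # discretised half-space (Poisson) kernel
A = toeplitz(row)                                 # symmetry fills in the full operator
field_up = A @ field                              # profile continued upward by h
```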

  10. Multiscale approach to contour fitting for MR images

    NASA Astrophysics Data System (ADS)

    Rueckert, Daniel; Burger, Peter

    1996-04-01

    We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale large scale features of the objects are preserved while small scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a global optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
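
    The scale-space construction described above amounts to repeated Gaussian convolution; a minimal sketch, with an arbitrary placeholder image and assumed scale levels, is:

```python
# Sketch of a linear Gaussian scale-space; the image and scale levels are placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(256, 256)            # stand-in for an MR slice
scales = [1.0, 2.0, 4.0, 8.0]               # sigma of the Gaussian at each level
scale_space = [gaussian_filter(image, sigma=s) for s in scales]
# Contour fitting would start on scale_space[-1] (coarsest level) and be tracked
# down to scale_space[0] (finest), refining the deformable model at each level.
```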

  11. Combining deep residual neural network features with supervised machine learning algorithms to classify diverse food image datasets.

    PubMed

    McAllister, Patrick; Zheng, Huiru; Bond, Raymond; Moorhead, Anne

    2018-04-01

    Obesity is increasing worldwide and can cause many chronic conditions such as type-2 diabetes, heart disease, sleep apnea, and some cancers. Monitoring dietary intake through food logging is a key method to maintain a healthy lifestyle to prevent and manage obesity. Computer vision methods have been applied to food logging to automate image classification for monitoring dietary intake. In this work we applied pretrained ResNet-152 and GoogleNet convolutional neural networks (CNNs), initially trained on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset with the MatConvNet package, to extract features from food image datasets: Food-5K, Food-11, RawFooT-DB, and Food-101. Deep features were extracted from the CNNs and used to train machine learning classifiers including an artificial neural network (ANN), support vector machine (SVM), Random Forest, and Naive Bayes. Results show that ResNet-152 deep features with an RBF-kernel SVM can detect food items with 99.4% accuracy on the Food-5K validation dataset, and with 98.8% accuracy on the Food-5K evaluation dataset using ANN, SVM-RBF, and Random Forest classifiers. Trained with ResNet-152 features, the ANN achieves 91.34% and 99.28% accuracy when applied to the Food-11 and RawFooT-DB food image datasets respectively, and the RBF-kernel SVM achieves 64.98% on the Food-101 image dataset. From this research it is clear that deep CNN features can be used efficiently for diverse food item image classification. The work presented in this research shows that pretrained ResNet-152 features provide sufficient generalisation power when applied to a range of food image classification tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
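
    The classification stage described above reduces to training a standard classifier on pre-extracted CNN features; a minimal sketch with scikit-learn, using random placeholder features and labels in place of the ResNet-152 activations, is:

```python
# Sketch: deep features + RBF-kernel SVM; the feature arrays are stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical 2048-D ResNet-style feature vectors with binary food/non-food labels
X_train = np.random.rand(500, 2048)
y_train = np.random.randint(0, 2, 500)
X_test = np.random.rand(100, 2048)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
```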

  12. Comparative Evaluation of Pavement Crack Detection Using Kernel-Based Techniques in Asphalt Road Surfaces

    NASA Astrophysics Data System (ADS)

    Miraliakbari, A.; Sok, S.; Ouma, Y. O.; Hahn, M.

    2016-06-01

    With the increasing demand for the digital survey and acquisition of road pavement conditions, there is also a parallel growing need for the development of automated techniques for the analysis and evaluation of the actual road conditions. This is due in part to the resulting large volumes of road pavement data captured through digital surveys, and also to the requirements for rapid data processing and evaluation. In this study, the Canon 5D Mark II RGB camera with a resolution of 21 megapixels is used for road pavement condition mapping. Even though many imaging and mapping sensors are available, the development of automated pavement distress detection, recognition and extraction systems for pavement condition is still a challenge. In order to detect and extract pavement cracks, a comparative evaluation of kernel-based segmentation methods comprising line filtering (LF), local binary pattern (LBP) and high-pass filtering (HPF) is carried out. While the LF and LBP methods are based on the principle of rotation invariance for pattern matching, the HPF applies the same principle for filtering, but with a rotationally invariant matrix. With respect to processing speed, HPF is fastest because it is based on a single kernel, compared to LF and LBP which are based on several kernels. Experiments with 20 sample images containing linear, block and alligator cracks are carried out. On average, completeness of distress extraction values of 81.2%, 76.2% and 81.1% were found for LF, HPF and LBP, respectively.

  13. Classification and quantification analysis of peach kernel from different origins with near-infrared diffuse reflection spectroscopy

    PubMed Central

    Liu, Wei; Wang, Zhen-Zhong; Qing, Jian-Ping; Li, Hong-Juan; Xiao, Wei

    2014-01-01

    Background: Peach kernels, which contain various kinds of fatty acids, play an important role in the regulation of a variety of physiological and biological functions. Objective: To establish an innovative and rapid diffuse reflectance near-infrared spectroscopy (DR-NIR) analysis method, along with chemometric techniques, for the qualitative and quantitative determination of peach kernels. Materials and Methods: Peach kernel samples from nine different origins were analyzed, with high-performance liquid chromatography (HPLC) as the reference method. DR-NIR spectra were measured in the range 1100-2300 nm. Principal component analysis (PCA) and the partial least squares regression (PLSR) algorithm were applied to obtain prediction models. The Savitzky-Golay first derivative was adopted for the spectral pre-processing, and PCA was applied to classify the varieties of the samples. For the quantitative calibration, models for linoleic and oleinic acids were established with the PLSR algorithm, and the optimal number of principal components (PCs) was selected with leave-one-out (LOO) cross-validation. The established models were evaluated with the root mean square error of deviation (RMSED) and the corresponding correlation coefficients (R2). Results: The PCA of the DR-NIR spectra yields a clear classification of the two varieties of peach kernel. PLSR had the better predictive ability. The correlation coefficients of the two calibration models were above 0.99, and the RMSED values for linoleic and oleinic acids were 1.266% and 1.412%, respectively. Conclusion: DR-NIR combined with the PCA and PLSR algorithms can be used efficiently to identify and quantify peach kernels and also helps to solve the variety problem. PMID:25422544
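
    A compact sketch of this chemometric workflow (derivative pre-processing, PCA for classification, PLSR with leave-one-out cross-validation) is shown below; the spectra, reference values and pre-processing parameters are synthetic placeholders, not the published data.

```python
# Sketch: Savitzky-Golay derivative + PCA + PLSR with LOO cross-validation (synthetic data).
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

spectra = np.random.rand(45, 600)                 # 45 samples x 600 wavelength points
linoleic = np.random.rand(45) * 30                # hypothetical HPLC reference values (%)

X = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)

scores = PCA(n_components=2).fit_transform(X)     # scores used to separate varieties

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, X, linoleic, cv=LeaveOneOut())
rmse = np.sqrt(np.mean((pred.ravel() - linoleic) ** 2))
```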

  14. Generation and optimization of superpixels as image processing kernels for Jones matrix optical coherence tomography

    PubMed Central

    Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa; Yasuno, Yoshiaki

    2017-01-01

    Jones matrix-based polarization sensitive optical coherence tomography (JM-OCT) simultaneously measures optical intensity, birefringence, degree of polarization uniformity, and OCT angiography. The statistics of the optical features in a local region, such as the local mean of the OCT intensity, are frequently used for image processing and the quantitative analysis of JM-OCT. Conventionally, local statistics have been computed with fixed-size rectangular kernels. However, this results in a trade-off between image sharpness and statistical accuracy. We introduce a superpixel method to JM-OCT for generating flexible kernels for local statistics. A superpixel is a cluster of image pixels that is formed according to the pixels' spatial and signal-value proximities. An algorithm for superpixel generation specialized for JM-OCT and its optimization methods are presented in this paper. The spatial proximity is in two-dimensional cross-sectional space and the signal values are the four optical features. Hence, the superpixel method is a six-dimensional clustering technique for JM-OCT pixels. The performance of the JM-OCT superpixels and the optimization methods is evaluated in detail using JM-OCT datasets of posterior eyes. The superpixels were found to preserve tissue structures well, such as layer structures, sclera, vessels, and retinal pigment epithelium, and hence are more suitable as local statistics kernels than conventional uniform rectangular kernels. PMID:29082073

  15. Quantitative trait loci mapping for Gibberella ear rot resistance and associated agronomic traits using genotyping-by-sequencing in maize.

    PubMed

    Kebede, Aida Z; Woldemariam, Tsegaye; Reid, Lana M; Harris, Linda J

    2016-01-01

    Unique and co-localized chromosomal regions affecting Gibberella ear rot disease resistance and correlated agronomic traits were identified in maize. Dissecting the mechanisms underlying resistance to Gibberella ear rot (GER) disease in maize provides insight towards more informed breeding. To this goal, we evaluated 410 recombinant inbred lines (RIL) for GER resistance over three testing years using silk channel and kernel inoculation techniques. RILs were also evaluated for agronomic traits like days to silking, husk cover, and kernel drydown rate. The RILs showed significant genotypic differences for all traits with above average to high heritability estimates. Significant (P < 0.01) but weak genotypic correlations were observed between disease severity and agronomic traits, indicating the involvement of agronomic traits in disease resistance. Common QTLs were detected for GER resistance and kernel drydown rate, suggesting the existence of pleiotropic genes that could be exploited to improve both traits at the same time. The QTLs identified for silk and kernel resistance shared some common regions on chromosomes 1, 2, and 8 and also had some regions specific to each tissue on chromosomes 9 and 10. Thus, effective GER resistance breeding could be achieved by considering screening methods that allow exploitation of tissue-specific disease resistance mechanisms and include kernel drydown rate either in an index or as indirect selection criterion.

  16. Fast algorithm for chirp transforms with zooming-in ability and its applications.

    PubMed

    Deng, X; Bihari, B; Gan, J; Zhao, F; Chen, R T

    2000-04-01

    A general fast numerical algorithm for chirp transforms is developed by using two fast Fourier transforms and employing an analytical kernel. This new algorithm unifies the calculations of arbitrary real-order fractional Fourier transforms and Fresnel diffraction. Its computational complexity is better than a fast convolution method using Fourier transforms. Furthermore, one can freely choose the sampling resolutions in both x and u space and zoom in on any portion of the data of interest. Computational results are compared with analytical ones. The errors are essentially limited by the accuracy of the fast Fourier transforms and are higher than the order of 10^-12 for most cases. As an example of its application to scalar diffraction, this algorithm can be used to calculate near-field patterns directly behind the aperture, 0 ≤ z < d^2/λ. It compensates another algorithm for Fresnel diffraction that is limited to z > d^2/(λN) [J. Opt. Soc. Am. A 15, 2111 (1998)]. Experimental results from waveguide-output microcoupler diffraction are in good agreement with the calculations.
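
    For orientation, a conventional fixed-grid Fresnel calculation with two FFTs and an analytical transfer-function kernel looks as follows; this is the kind of baseline the zoomable chirp-transform algorithm generalizes, and the aperture, wavelength and grid are illustrative assumptions.

```python
# Sketch: transfer-function Fresnel propagation with two FFTs (illustrative parameters).
import numpy as np

wl, z, n, dx = 0.5e-6, 1e-3, 1024, 1e-6            # wavelength, distance, grid size, sampling (m)
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
field = (np.abs(X) < 50e-6) & (np.abs(Y) < 50e-6)  # hypothetical square aperture

fx = np.fft.fftfreq(n, dx)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(1j * 2 * np.pi * z / wl) * np.exp(-1j * np.pi * wl * z * (FX**2 + FY**2))

near_field = np.fft.ifft2(np.fft.fft2(field.astype(complex)) * H)
intensity = np.abs(near_field) ** 2
```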

  17. Modeling velocity space-time correlations in wind farms

    NASA Astrophysics Data System (ADS)

    Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael

    2016-11-01

    Turbulent fluctuations of wind velocities cause power-output fluctuations in wind farms. The statistics of velocity fluctuations can be described by velocity space-time correlations in the atmospheric boundary layer. In this context, it is important to derive simple physics-based models. The so-called Tennekes-Kraichnan random sweeping hypothesis states that small-scale velocity fluctuations are passively advected by large-scale velocity perturbations in a random fashion. In the present work, this hypothesis is used with an additional mean wind velocity to derive a model for the spatial and temporal decorrelation of velocities in wind farms. It turns out that in the framework of this model, space-time correlations are a convolution of the spatial correlation function with a temporal decorrelation kernel. In this presentation, first results on the comparison to large eddy simulations will be presented and the potential of the approach to characterize power output fluctuations of wind farms will be discussed. Acknowledgements: 'Fellowships for Young Energy Scientists' (YES!) of FOM, the US National Science Foundation Grant IIA 1243482, and support by the Max Planck Society.
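
    The stated model structure (spatial correlation convolved with a temporal decorrelation kernel, advected by the mean wind) can be sketched as follows; the exponential spatial correlation, Gaussian kernel width and all parameter values are assumptions for illustration only.

```python
# Sketch: space-time correlation as spatial correlation convolved with a Gaussian
# temporal decorrelation kernel and shifted by mean advection (illustrative model).
import numpy as np

L, U, v_rms = 100.0, 8.0, 1.0                 # assumed integral length (m), mean wind, sweeping rms (m/s)
r = np.linspace(-1000.0, 1000.0, 4001)
dr = r[1] - r[0]
R_spatial = np.exp(-np.abs(r) / L)            # assumed spatial correlation function

def space_time_corr(tau):
    """R(r, tau): convolve R_spatial with a Gaussian kernel of width v_rms*tau,
    then shift the separation by the mean-wind advection U*tau."""
    sigma = max(v_rms * tau, 1e-6)
    kernel = np.exp(-0.5 * (r / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    conv = np.convolve(R_spatial, kernel, mode="same") * dr
    return np.interp(r - U * tau, r, conv)    # correlation now peaks near r = U*tau

R_tau = space_time_corr(10.0)
```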

  18. SELF-GRAVITATIONAL FORCE CALCULATION OF SECOND-ORDER ACCURACY FOR INFINITESIMALLY THIN GASEOUS DISKS IN POLAR COORDINATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsiang-Hsu; Taam, Ronald E.; Yen, David C. C., E-mail: yen@math.fju.edu.tw

    Investigating the evolution of disk galaxies and the dynamics of proto-stellar disks can involve the use of both a hydrodynamical and a Poisson solver. These systems are usually approximated as infinitesimally thin disks using two-dimensional Cartesian or polar coordinates. In Cartesian coordinates, the calculations of the hydrodynamics and self-gravitational forces are relatively straightforward for attaining second-order accuracy. However, in polar coordinates, a second-order calculation of self-gravitational forces is required for matching the second-order accuracy of hydrodynamical schemes. We present a direct algorithm for calculating self-gravitational forces with second-order accuracy without artificial boundary conditions. The Poisson integral in polar coordinates is expressed in a convolution form and the corresponding numerical complexity is nearly linear using a fast Fourier transform. Examples with analytic solutions are used to verify that the truncation error of this algorithm is of second order. The kernel integral around the singularity is applied to modify the particle method. The use of a softening length is avoided and the accuracy of the particle method is significantly improved.

  19. Reconstruction of transient vibration and sound radiation of an impacted plate using time domain plane wave superposition method

    NASA Astrophysics Data System (ADS)

    Geng, Lin; Zhang, Xiao-Zheng; Bi, Chuan-Xing

    2015-05-01

    Time domain plane wave superposition method is extended to reconstruct the transient pressure field radiated by an impacted plate and the normal acceleration of the plate. In the extended method, the pressure measured on the hologram plane is expressed as a superposition of time convolutions between the time-wavenumber normal acceleration spectrum on a virtual source plane and the time domain propagation kernel relating the pressure on the hologram plane to the normal acceleration spectrum on the virtual source plane. By performing an inverse operation, the normal acceleration spectrum on the virtual source plane can be obtained by an iterative solving process, and then taken as the input to reconstruct the whole pressure field and the normal acceleration of the plate. An experiment of a clamped rectangular steel plate impacted by a steel ball is presented. The experimental results demonstrate that the extended method is effective in visualizing the transient vibration and sound radiation of an impacted plate in both time and space domains, thus providing the important information for overall understanding the vibration and sound radiation of the plate.

  20. A saliency-based approach to detection of infrared target

    NASA Astrophysics Data System (ADS)

    Chen, Yanfei; Sang, Nong; Dan, Zhiping

    2013-10-01

    Automatic target detection in infrared images is an active research field in national defense technology. We propose a new saliency-based infrared target detection model in this paper, which is based on the fact that human focus of attention is directed towards the relevant targets that carry the most promising information. For a given image, the convolution of the image log amplitude spectrum with a low-pass Gaussian kernel of an appropriate scale is equivalent to an image saliency detector in the frequency domain. At the same time, orientation and shape features are extracted and combined into a saliency map in the spatial domain. Our proposed model decides salient targets based on a final saliency map, which is generated by integration of the saliency maps in the frequency and spatial domains. Finally, the size of each salient target is obtained by maximizing the entropy of the final saliency map. Experimental results show that the proposed model can highlight both small and large salient regions in infrared images, as well as inhibit repeated distractors in cluttered images. In addition, its detection efficiency is significantly improved.
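
    The frequency-domain step described above can be sketched as smoothing the log amplitude spectrum with a Gaussian and reconstructing with the original phase; the Gaussian scales and the placeholder image below are assumptions, and the published model additionally fuses spatial orientation and shape features.

```python
# Sketch: frequency-domain saliency via Gaussian smoothing of the log amplitude spectrum.
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_saliency(image, sigma_freq=3.0, sigma_post=2.5):
    F = np.fft.fft2(image)
    log_amp = np.log(np.abs(F) + 1e-9)
    phase = np.angle(F)
    smooth_amp = gaussian_filter(log_amp, sigma_freq)       # low-pass Gaussian on the log spectrum
    sal = np.abs(np.fft.ifft2(np.exp(smooth_amp + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma_post)                  # smooth the resulting saliency map

saliency_map = spectral_saliency(np.random.rand(128, 128))   # placeholder infrared frame
```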

  1. Automatic flatness detection system for micro part

    NASA Astrophysics Data System (ADS)

    Luo, Yi; Wang, Xiaodong; Shan, Zhendong; Li, Kehong

    2016-01-01

    An automatic flatness detection system for micro rings is developed. It is made up of a machine vision module, a ring supporting module, and a control system. An industrial CCD camera with a resolution of 1628×1236 pixels, a telecentric lens with a magnification of two, and light sources are used to collect the vision information. A rotary stage with a polished silicon wafer is used to support the ring. The silicon wafer provides a mirror image and doubles the gap caused by unevenness of the ring. The control system comprises an industrial computer and software written in LabVIEW. The Get Kernel and Convolute functions are selected to reduce noise and distortion, the Laplacian operator is used to sharpen the image, and the IMAQ Threshold function is used to separate the target object from the background. Based on this software, the repeating precision of the system is 2.19 μm, less than one pixel. The designed detection system can easily identify ring warpage larger than 5 μm, and if the warpage is less than 25 μm, the ring can be used in assembly and satisfies the final position and perpendicularity error requirements of the component.

  2. A spatial model for a stream networks of Citarik River with the environmental variables: potential of hydrogen (PH) and temperature

    NASA Astrophysics Data System (ADS)

    Bachrudin, A.; Mohamed, N. B.; Supian, S.; Sukono; Hidayat, Y.

    2018-03-01

    Application of existing geostatistical theory to stream networks provides a number of interesting and challenging problems. Most statistical tools in traditional geostatistics, such as autocovariance functions, are based on Euclidean distance, but this is not permissible for stream data, which involve stream distance. To overcome this, an autocovariance model based on flow distance is developed using a convolution kernel approach (moving average construction). Spatial models for stream networks are widely used to monitor the environment on river networks. In a case study of a river in the province of West Java, the objective of this paper is to analyze the predictive capability for two environmental variables, potential of hydrogen (pH) and temperature, using ordinary kriging. The empirical results show: (1) the best-fit autocovariance function for temperature and potential of hydrogen (pH) of the Citarik River is linear, which also yields the smallest root mean squared prediction error (RMSPE); (2) the spatial correlation values between locations upstream and downstream of the Citarik River decrease with distance.

  3. A linear-RBF multikernel SVM to classify big text corpora.

    PubMed

    Romero, R; Iglesias, E L; Borrajo, L

    2015-01-01

    Support vector machine (SVM) is a powerful technique for classification. However, SVM is not suitable for classification of large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on the SVM and other kernel methods emphasize the need to consider multiple kernels or parameterizations of kernels because they provide greater flexibility. This paper shows a multikernel SVM to manage highly dimensional data, providing an automatic parameterization with low computational cost and improving results against SVMs parameterized under a brute-force search. The model consists in spreading the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while the training is significantly faster than several other SVM classifiers.

  4. Convolutional neural network for earthquake detection and location

    PubMed Central

    Perol, Thibaut; Gharbi, Michaël; Denolle, Marine

    2018-01-01

    The recent evolution of induced seismicity in Central United States calls for exhaustive catalogs to improve seismic hazard assessment. Over the last decades, the volume of seismic data has increased exponentially, creating a need for efficient algorithms to reliably detect and locate earthquakes. Today’s most elaborate methods scan through the plethora of continuous seismic records, searching for repeating seismic signals. We leverage the recent advances in artificial intelligence and present ConvNetQuake, a highly scalable convolutional neural network for earthquake detection and location from a single waveform. We apply our technique to study the induced seismicity in Oklahoma, USA. We detect more than 17 times more earthquakes than previously cataloged by the Oklahoma Geological Survey. Our algorithm is orders of magnitude faster than established methods. PMID:29487899

  5. Korean letter handwritten recognition using deep convolutional neural network on android platform

    NASA Astrophysics Data System (ADS)

    Purnamawati, S.; Rachmawati, D.; Lumanauw, G.; Rahmat, R. F.; Taqyuddin, R.

    2018-03-01

    Currently, the popularity of Korean culture attracts many people to learn everything about Korea, particularly its language. To acquire the Korean language, every learner needs to be able to understand Korean non-Latin characters. A digital approach needs to be carried out in order to make the Korean learning process easier. This study is done using a Deep Convolutional Neural Network (DCNN). The DCNN performs the recognition process on the image based on a model that has been trained, such as the Inception-v3 model. Subsequently, a re-training process using the transfer learning technique with the trained and re-trained values of the model is carried out in order to develop a new model with better performance without any specific systematic errors. The testing accuracy of this research is 86.9%.

  6. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
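
    To see why such operators are costly, here is a naive direct implementation of a space-varying convolution in which every output pixel has its own kernel; the radially growing Gaussian PSF model is an assumption used only to illustrate the slow spatial variation.

```python
# Sketch: direct (naive) space-varying convolution, O(N * support^2) per image.
import numpy as np

def space_varying_blur(image, max_sigma=3.0, support=9):
    """Each output pixel is computed with its own locally varying Gaussian kernel."""
    h, w = image.shape
    half = support // 2
    padded = np.pad(image, half, mode="reflect")
    out = np.zeros_like(image, dtype=float)
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rmax = np.hypot(cy, cx)
    for i in range(h):
        for j in range(w):
            sigma = 0.5 + max_sigma * np.hypot(i - cy, j - cx) / rmax  # slowly varying PSF width
            k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
            k /= k.sum()
            out[i, j] = np.sum(k * padded[i:i + support, j:j + support])
    return out

blurred = space_varying_blur(np.random.rand(64, 64))
```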

  7. Modulation/demodulation techniques for satellite communications. Part 1: Background

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1981-01-01

    Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.

  8. Residual Shuffling Convolutional Neural Networks for Deep Semantic Image Segmentation Using Multi-Modal Data

    NASA Astrophysics Data System (ADS)

    Chen, K.; Weinmann, M.; Gao, X.; Yan, M.; Hinz, S.; Jutzi, B.; Weinmann, M.

    2018-05-01

    In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data on both modalities leads to an improvement of the classification results. However, the derived results also indicate that the use of all defined features is less favorable than the use of selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.

  9. Deep convolutional neural network processing of aerial stereo imagery to monitor vulnerable zones near power lines

    NASA Astrophysics Data System (ADS)

    Qayyum, Abdul; Saad, Naufal M.; Kamel, Nidal; Malik, Aamir Saeed

    2018-01-01

    The monitoring of vegetation near high-voltage transmission power lines and poles is tedious. Blackouts present a huge challenge to power distribution companies and often occur due to tree growth in hilly and rural areas. There are numerous methods of monitoring hazardous overgrowth that are expensive and time-consuming. Accurate estimation of tree and vegetation heights near power poles can prevent the disruption of power transmission in vulnerable zones. This paper presents a cost-effective approach based on a convolutional neural network (CNN) algorithm to compute the height (depth maps) of objects proximal to power poles and transmission lines. The proposed CNN extracts and classifies features by employing convolutional pooling inputs to fully connected data layers that capture prominent features from stereo image patches. Unmanned aerial vehicle or satellite stereo image datasets can thus provide a feasible and cost-effective approach that identifies threat levels based on height and distance estimations of hazardous vegetation and other objects. Results were compared with extant disparity map estimation techniques, such as graph cut, dynamic programming, belief propagation, and area-based methods. The proposed method achieved an accuracy rate of 90%.

  10. Performance of MIMO-OFDM using convolution codes with QAM modulation

    NASA Astrophysics Data System (ADS)

    Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa

    2014-04-01

    The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission. One can use a convolution code. This paper presents the performance of OFDM using the Space Time Block Code (STBC) diversity technique with QAM modulation and a code rate of 1/2. The evaluation is done by analyzing the Bit Error Rate (BER) versus the Energy per Bit to Noise Power Spectral Density Ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel in an OFDM system. To achieve a BER of 10^-3, 10 dB SNR is required in the SISO-OFDM scheme. The 2×2 MIMO-OFDM scheme requires 10 dB to achieve a BER of 10^-3. The 4×4 MIMO-OFDM scheme requires 5 dB, while adding convolution coding to the 4×4 MIMO-OFDM improves performance down to 0 dB for the same BER. This demonstrates a power saving of 3 dB relative to the 4×4 MIMO-OFDM system without coding, a power saving of 7 dB relative to 2×2 MIMO-OFDM, and significant power savings relative to the SISO-OFDM system.

  11. A method of estimating GPS instrumental biases with a convolution algorithm

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ma, Guanyi; Lu, Weijun; Wan, Qingtao; Fan, Jiangtao; Wang, Xiaolan; Li, Jinghua; Li, Changhua

    2018-03-01

    This paper presents a method of deriving the instrumental differential code biases (DCBs) of GPS satellites and dual frequency receivers. Considering that the total electron content (TEC) varies smoothly over a small area, one ionospheric pierce point (IPP) and four more nearby IPPs were selected to build an equation with a convolution algorithm. In addition, unknown DCB parameters were arranged into a set of equations with GPS observations in a day unit by assuming that DCBs do not vary within a day. Then, the DCBs of satellites and receivers were determined by solving the equation set with the least-squares fitting technique. The performance of this method is examined by applying it to 361 days in 2014 using the observation data from 1311 GPS Earth Observation Network (GEONET) receivers. The result was crosswise-compared with the DCB estimated by the mesh method and the IONEX products from the Center for Orbit Determination in Europe (CODE). The DCB values derived by this method agree with those of the mesh method and the CODE products, with biases of 0.091 ns and 0.321 ns, respectively. The convolution method's accuracy and stability were quite good and showed improvements over the mesh method.

  12. Multichannel Convolutional Neural Network for Biological Relation Extraction.

    PubMed

    Quan, Chanqin; Hua, Lei; Sun, Xiao; Bai, Wenjun

    2016-01-01

    The plethora of biomedical relations which are embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical focuses were restricted to traditional machine learning techniques. However, these methods are susceptible to the issues of "vocabulary gap" and data sparseness, and cannot automate the feature extraction process. To address the aforementioned issues, in this work we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model has the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) the need for manual feature engineering can be obviated by automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall f-score of 70.2% compared to the standard linear SVM based system (67.0%) on the DDIExtraction 2013 challenge dataset. For the PPI task, we evaluated our system on the AIMed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in f-score.

  13. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion.

    PubMed

    Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed

    2017-01-01

    Electroencephalogram (EEG)-based decoding human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with current recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular currently used feature extraction and prediction method. This method showed an accuracy of 65.7%. However, the proposed method predicts the novel data with improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method.

  14. Retinal blood vessel segmentation using fully convolutional network with transfer learning.

    PubMed

    Jiang, Zhexin; Zhang, Hao; Wang, Yi; Ko, Seok-Bum

    2018-04-26

    Since the retinal blood vessel has been acknowledged as an indispensable element in both ophthalmological and cardiovascular disease diagnosis, the accurate segmentation of the retinal vessel tree has become the prerequisite step for automated or computer-aided diagnosis systems. In this paper, a supervised method is presented based on a pre-trained fully convolutional network through transfer learning. This proposed method has simplified the typical retinal vessel segmentation problem from full-size image segmentation to regional vessel element recognition and result merging. Meanwhile, additional unsupervised image post-processing techniques are applied to this proposed method so as to refine the final result. Extensive experiments have been conducted on DRIVE, STARE, CHASE_DB1 and HRF databases, and the accuracy of the cross-database test on these four databases is state-of-the-art, which also presents the high robustness of the proposed approach. This successful result has not only contributed to the area of automated retinal blood vessel segmentation but also supports the effectiveness of transfer learning when applying deep learning technique to medical imaging. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Deep convolutional neural network for classifying Fusarium wilt of radish from unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Ha, Jin Gwan; Moon, Hyeonjoon; Kwak, Jin Tae; Hassan, Syed Ibrahim; Dang, Minh; Lee, O. New; Park, Han Yong

    2017-10-01

    Recently, unmanned aerial vehicles (UAVs) have gained much attention. In particular, there is a growing interest in utilizing UAVs for agricultural applications such as crop monitoring and management. We propose a computerized system that is capable of detecting Fusarium wilt of radish with high accuracy. The system adopts computer vision and machine learning techniques, including deep learning, to process the images captured by UAVs at low altitudes and to identify the infected radish. The whole radish field is first segmented into three distinctive regions (radish, bare ground, and mulching film) via a softmax classifier and K-means clustering. Then, the identified radish regions are further classified into healthy radish and Fusarium wilt of radish using a deep convolutional neural network (CNN). In identifying radish, bare ground, and mulching film from a radish field, we achieved an accuracy of ≥97.4%. In detecting Fusarium wilt of radish, the CNN obtained an accuracy of 93.3%. It also outperformed the standard machine learning algorithm, obtaining 82.9% accuracy. Therefore, UAVs equipped with computational techniques are promising tools for improving the quality and efficiency of agriculture today.

  16. Estimation of neutron energy distributions from prompt gamma emissions

    NASA Astrophysics Data System (ADS)

    Panikkath, Priyada; Udupi, Ashwini; Sarkar, P. K.

    2017-11-01

    A technique of estimating the incident neutron energy distribution from emitted prompt gamma intensities from a system exposed to neutrons is presented. The emitted prompt gamma intensities or the measured photo peaks in a gamma detector are related to the incident neutron energy distribution through a convolution of the response of the system generating the prompt gammas to mono-energetic neutrons. Presently, the system studied is a cylinder of high density polyethylene (HDPE) placed inside another cylinder of borated HDPE (BHDPE) having an outer Pb-cover and exposed to neutrons. The emitted five prompt gamma peaks from hydrogen, boron, carbon and lead can be utilized to unfold the incident neutron energy distribution as an under-determined deconvolution problem. Such an under-determined set of equations are solved using the genetic algorithm based Monte Carlo de-convolution code GAMCD. Feasibility of the proposed technique is demonstrated theoretically using the Monte Carlo calculated response matrix and intensities of emitted prompt gammas from the Pb-covered BHDPE-HDPE system in the case of several incident neutron spectra spanning different energy ranges.

  17. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective to obtain rapidly converging numerical results.
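
    For reference, a standard identity (not quoted from the report itself) that illustrates the Hadamard finite-part interpretation of the dominant kernel in the case m = 2 is:

```latex
% Hadamard finite-part vs. Cauchy principal value for the double-pole kernel
\[
  \operatorname{f.p.}\!\int_a^b \frac{f(t)}{(t-x)^{2}}\,dt
  \;=\;
  \frac{d}{dx}\,\operatorname{p.v.}\!\int_a^b \frac{f(t)}{t-x}\,dt ,
  \qquad a < x < b .
\]
```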

  18. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1985-01-01

    In this paper some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective to obtain rapidly converging numerical results.

  19. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1987-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective to obtain rapidly converging numerical results.

  20. Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party

    NASA Astrophysics Data System (ADS)

    Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi

    The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from the massive data collection and the information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed an intermediary party to help in the computation.

  1. Chromatin accessibility prediction via convolutional long short-term memory networks with k-mer embedding.

    PubMed

    Min, Xu; Zeng, Wanwen; Chen, Ning; Chen, Ting; Jiang, Rui

    2017-07-15

    Experimental techniques for measuring chromatin accessibility are expensive and time consuming, appealing for the development of computational approaches to predict open chromatin regions from DNA sequences. Along this direction, existing methods fall into two classes: one based on handcrafted k-mer features and the other based on convolutional neural networks. Although both categories have shown good performance in specific applications thus far, there still lacks a comprehensive framework to integrate useful k-mer co-occurrence information with recent advances in deep learning. We fill this gap by addressing the problem of chromatin accessibility prediction with a convolutional Long Short-Term Memory (LSTM) network with k-mer embedding. We first split DNA sequences into k-mers and pre-train k-mer embedding vectors based on the co-occurrence matrix of k-mers by using an unsupervised representation learning approach. We then construct a supervised deep learning architecture comprised of an embedding layer, three convolutional layers and a Bidirectional LSTM (BLSTM) layer for feature learning and classification. We demonstrate that our method gains high-quality fixed-length features from variable-length sequences and consistently outperforms baseline methods. We show that k-mer embedding can effectively enhance model performance by exploring different embedding strategies. We also prove the efficacy of both the convolution and the BLSTM layers by comparing two variations of the network architecture. We confirm the robustness of our model to hyper-parameters by performing sensitivity analysis. We hope our method can eventually reinforce our understanding of employing deep learning in genomic studies and shed light on research regarding mechanisms of chromatin accessibility. The source code can be downloaded from https://github.com/minxueric/ismb2017_lstm. tingchen@tsinghua.edu.cn or ruijiang@tsinghua.edu.cn. Supplementary materials are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
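
    A schematic PyTorch sketch of the described architecture (embedding layer, three convolutional layers, one bidirectional LSTM) is given below; the layer sizes, k-mer vocabulary and sequence length are assumptions, and the authors' actual implementation is at the GitHub link above.

```python
# Sketch of an embedding + 3x Conv1d + BiLSTM classifier (illustrative layer sizes).
import torch
import torch.nn as nn

class ConvBLSTM(nn.Module):
    def __init__(self, vocab_size=4**6, embed_dim=100, n_filters=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)       # pre-trained k-mer vectors would be loaded here
        self.convs = nn.Sequential(
            nn.Conv1d(embed_dim, n_filters, kernel_size=8), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(n_filters, n_filters, kernel_size=8), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(n_filters, n_filters, kernel_size=8), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.blstm = nn.LSTM(n_filters, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)                     # open vs. closed chromatin

    def forward(self, kmer_ids):                                # (batch, seq_len) of k-mer indices
        x = self.embed(kmer_ids).transpose(1, 2)                # (batch, embed_dim, seq_len)
        x = self.convs(x).transpose(1, 2)                       # (batch, reduced_len, n_filters)
        _, (h, _) = self.blstm(x)
        h = torch.cat([h[-2], h[-1]], dim=1)                    # last forward + backward hidden states
        return torch.sigmoid(self.out(h))

model = ConvBLSTM()
probs = model(torch.randint(0, 4**6, (8, 1000)))                # 8 sequences of 1000 k-mer tokens
```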

  2. Chromatin accessibility prediction via convolutional long short-term memory networks with k-mer embedding

    PubMed Central

    Min, Xu; Zeng, Wanwen; Chen, Ning; Chen, Ting; Jiang, Rui

    2017-01-01

    Abstract Motivation: Experimental techniques for measuring chromatin accessibility are expensive and time consuming, appealing for the development of computational approaches to predict open chromatin regions from DNA sequences. Along this direction, existing methods fall into two classes: one based on handcrafted k-mer features and the other based on convolutional neural networks. Although both categories have shown good performance in specific applications thus far, there still lacks a comprehensive framework to integrate useful k-mer co-occurrence information with recent advances in deep learning. Results: We fill this gap by addressing the problem of chromatin accessibility prediction with a convolutional Long Short-Term Memory (LSTM) network with k-mer embedding. We first split DNA sequences into k-mers and pre-train k-mer embedding vectors based on the co-occurrence matrix of k-mers by using an unsupervised representation learning approach. We then construct a supervised deep learning architecture comprised of an embedding layer, three convolutional layers and a Bidirectional LSTM (BLSTM) layer for feature learning and classification. We demonstrate that our method gains high-quality fixed-length features from variable-length sequences and consistently outperforms baseline methods. We show that k-mer embedding can effectively enhance model performance by exploring different embedding strategies. We also prove the efficacy of both the convolution and the BLSTM layers by comparing two variations of the network architecture. We confirm the robustness of our model to hyper-parameters by performing sensitivity analysis. We hope our method can eventually reinforce our understanding of employing deep learning in genomic studies and shed light on research regarding mechanisms of chromatin accessibility. Availability and implementation: The source code can be downloaded from https://github.com/minxueric/ismb2017_lstm. Contact: tingchen@tsinghua.edu.cn or ruijiang@tsinghua.edu.cn Supplementary information: Supplementary materials are available at Bioinformatics online. PMID:28881969

  3. Evaluation of Statistical Downscaling Skill at Reproducing Extreme Events

    NASA Astrophysics Data System (ADS)

    McGinnis, S. A.; Tye, M. R.; Nychka, D. W.; Mearns, L. O.

    2015-12-01

    Climate model outputs usually have much coarser spatial resolution than is needed by impacts models. Although higher resolution can be achieved using regional climate models for dynamical downscaling, further downscaling is often required. The final resolution gap is often closed with a combination of spatial interpolation and bias correction, which constitutes a form of statistical downscaling. We use this technique to downscale regional climate model data and evaluate its skill in reproducing extreme events. We downscale output from the North American Regional Climate Change Assessment Program (NARCCAP) dataset from its native 50-km spatial resolution to the 4-km resolution of the University of Idaho's METDATA gridded surface meteorological dataset, which derives from the PRISM and NLDAS-2 observational datasets. We operate on the major variables used in impacts analysis at a daily timescale: daily minimum and maximum temperature, precipitation, humidity, pressure, solar radiation, and winds. To interpolate the data, we use the patch recovery method from the Earth System Modeling Framework (ESMF) regridding package. We then bias correct the data using Kernel Density Distribution Mapping (KDDM), which has been shown to exhibit superior overall performance across multiple metrics. Finally, we evaluate the skill of this technique in reproducing extreme events by comparing raw and downscaled output with meteorological station data in different bioclimatic regions according to the skill scores defined by Perkins et al. in 2013 for the evaluation of AR4 climate models. We also investigate techniques for improving bias correction of values in the tails of the distributions. These techniques include binned kernel density estimation, logspline kernel density estimation, and transfer functions constructed by fitting the tails with a generalized Pareto distribution.
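
    A hedged sketch of kernel-density-based distribution mapping (the general idea behind KDDM, not the NARCCAP workflow itself): smooth CDFs for observations and model output are built from Gaussian kernel density estimates, and each model value is passed through the model CDF and then the inverse observed CDF. The synthetic gamma-distributed data and grid size are placeholders.

        import numpy as np
        from scipy.stats import gaussian_kde

        def kde_cdf(sample, grid):
            """Smooth CDF on `grid` from a Gaussian kernel density estimate of `sample`."""
            pdf = gaussian_kde(sample)(grid)
            cdf = np.cumsum(pdf)
            return cdf / cdf[-1]

        def kddm_correct(model_vals, obs, model_ref, n_grid=1000):
            lo = min(obs.min(), model_ref.min(), model_vals.min())
            hi = max(obs.max(), model_ref.max(), model_vals.max())
            grid = np.linspace(lo, hi, n_grid)
            cdf_mod = kde_cdf(model_ref, grid)        # CDF of the reference-period model output
            cdf_obs = kde_cdf(obs, grid)              # CDF of the observations
            p = np.interp(model_vals, grid, cdf_mod)  # non-exceedance probability of each model value
            return np.interp(p, cdf_obs, grid)        # inverse observed CDF at that probability

        rng = np.random.default_rng(0)
        obs = rng.gamma(2.0, 2.0, 5000)               # hypothetical observed daily values
        model_ref = rng.gamma(2.5, 1.5, 5000)         # biased model output for the same period
        corrected = kddm_correct(model_ref, obs, model_ref)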

  4. Comparison of different inoculating methods to evaluate the pathogenicity and virulence of Aspergillus niger on two maize hybrids

    USDA-ARS?s Scientific Manuscript database

    A two-year field study was conducted to determine the effects of inoculation techniques on the aggressiveness of Aspergillus niger kernel infection in A. flavus resistant and susceptible maize hybrids. Ears were inoculated with the silk-channel, side-needle, and spray techniques 7 days after midsilk...

  5. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.
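
    A minimal illustration of estimating a first-order kernel from a point-process input, here by ordinary least squares on lagged inputs; this is a simplified stand-in for the Laguerre expansion technique described above, which instead expands the kernel on a small Laguerre basis to reduce the number of estimated parameters. The spike train, true kernel and noise level are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        T, M = 5000, 30                                   # record length and kernel memory (lags)
        x = (rng.random(T) < 0.05).astype(float)          # Poisson-like point-process input (spike train)
        true_k = np.exp(-np.arange(M) / 5.0)              # hypothetical first-order kernel
        y = np.convolve(x, true_k)[:T] + 0.1 * rng.standard_normal(T)   # continuous output + noise

        # Build the lagged design matrix and estimate the kernel by least squares.
        X = np.column_stack([np.concatenate([np.zeros(m), x[:T - m]]) for m in range(M)])
        k_hat, *_ = np.linalg.lstsq(X, y, rcond=None)     # k_hat approximates true_k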

  6. A fast non-local means algorithm based on integral image and reconstructed similar kernel

    NASA Astrophysics Data System (ADS)

    Lin, Zheng; Song, Enmin

    2018-03-01

    Image denoising is one of the essential methods in digital image processing. The non-local means (NLM) denoising approach is a remarkable denoising technique. However, its computational time complexity is high. In this paper, we design a fast NLM algorithm based on an integral image and a reconstructed similar kernel. First, the integral image is introduced into the traditional NLM algorithm. In doing so, it removes a great deal of repetitive operations in the parallel processing, which greatly improves the running speed of the algorithm. Second, in order to amend the error of the integral image, we construct a similar window resembling the Gaussian kernel in a pyramidal stacking pattern. Finally, in order to eliminate the influence produced by replacing the Gaussian-weighted Euclidean distance with the plain Euclidean distance, we propose a scheme to construct a similar kernel with a size of 3 x 3 in a neighborhood window, which reduces the effect of noise on a single pixel. Experimental results demonstrate that the proposed algorithm is about seventeen times faster than the traditional NLM algorithm, yet produces comparable results in terms of peak signal-to-noise ratio (the PSNR increased by 2.9% on average) and perceptual image quality.
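
    A compact sketch of the integral-image trick for non-local means: for each offset in the search window, the per-pixel patch distance is obtained from an integral image of squared differences, so no per-pixel patch loops are needed. Boundary handling via np.roll and the uniform (unweighted) patch are simplifications relative to the reconstructed-kernel scheme in the paper; parameters are illustrative.

        import numpy as np

        def window_sums(a, r):
            """Sum of `a` over (2r+1)x(2r+1) windows centred on each pixel, via an integral image."""
            p = np.pad(a, r, mode="reflect")
            ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
            ii[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)
            k = 2 * r + 1
            return ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]

        def fast_nlm(img, search=5, patch=3, h=0.1):
            """Non-local means with integral-image patch distances (np.roll boundary handling)."""
            img = img.astype(float)
            r = patch // 2
            acc = np.zeros_like(img)
            wsum = np.zeros_like(img)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    shifted = np.roll(img, (dy, dx), axis=(0, 1))
                    d2 = window_sums((img - shifted) ** 2, r) / (patch * patch)
                    w = np.exp(-d2 / (h * h))
                    acc += w * shifted
                    wsum += w
            return acc / wsum

        rng = np.random.default_rng(2)
        noisy = np.clip(rng.normal(0.5, 0.05, (64, 64)), 0, 1)   # placeholder noisy image in [0, 1]
        denoised = fast_nlm(noisy)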

  7. Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity

    PubMed Central

    Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo

    2016-01-01

    In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited for event detection but for one problem. Features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time-series of motion patterns and by utilizing time-series kernels we apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification, KSS had 10% higher accuracy, as measured by F1 score, over kernel SVM methods. PMID:27830214

  8. Sparse kernel methods for high-dimensional survival data.

    PubMed

    Evers, Ludger; Messow, Claudia-Martina

    2008-07-15

    Sparse kernel methods like support vector machines (SVM) have been applied with great success to classification and (standard) regression settings. Existing support vector classification and regression techniques however are not suitable for partly censored survival data, which are typically analysed using Cox's proportional hazards model. As the partial likelihood of the proportional hazards model only depends on the covariates through inner products, it can be 'kernelized'. The kernelized proportional hazards model however yields a solution that is dense, i.e. the solution depends on all observations. One of the key features of an SVM is that it yields a sparse solution, depending only on a small fraction of the training data. We propose two methods. One is based on a geometric idea, where-akin to support vector classification-the margin between the failed observation and the observations currently at risk is maximised. The other approach is based on obtaining a sparse model by adding observations one after another akin to the Import Vector Machine (IVM). Data examples studied suggest that both methods can outperform competing approaches. Software is available under the GNU Public License as an R package and can be obtained from the first author's website http://www.maths.bris.ac.uk/~maxle/software.html.

  9. Screening of the aerodynamic and biophysical properties of barley malt

    NASA Astrophysics Data System (ADS)

    Ghodsvali, Alireza; Farzaneh, Vahid; Bakhshabadi, Hamid; Zare, Zahra; Karami, Zahra; Mokhtarian, Mohsen; Carvalho, Isabel. S.

    2016-10-01

    An understanding of the aerodynamic and biophysical properties of barley malt is necessary for the appropriate design of equipment for the handling, shipping, dehydration, grading, sorting and warehousing of this strategic crop. Malting is a complex biotechnological process that includes steeping, germination and, finally, the dehydration of cereal grains under controlled temperature and humidity conditions. In this investigation, the biophysical properties of barley malt were predicted using two models of artificial neural networks as well as response surface methodology. Steeping time and germination time were selected as the independent variables, and 1000-kernel weight, kernel density and terminal velocity were selected as the dependent variables (responses). The obtained outcomes showed that the artificial neural network model, with a logarithmic sigmoid activation function, presents more precise results than the response surface model in the prediction of the aerodynamic and biophysical properties of produced barley malt. This model presented the best result with 8 nodes in the hidden layer; significant correlation coefficient values of 0.783, 0.767 and 0.991 were obtained for the responses 1000-kernel weight, kernel density and terminal velocity, respectively. The outcomes indicated that this novel technique could be successfully applied in quantitative and qualitative monitoring within the malting process.
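
    A brief sketch of the neural-network side of such a comparison: a single-hidden-layer perceptron with 8 hidden nodes and a logistic-sigmoid activation, fitted to steeping and germination times as inputs and the three biophysical responses as outputs. The data below are random placeholders, not the study's measurements.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(3)
        X = rng.uniform([24, 72], [72, 168], size=(60, 2))    # steeping time, germination time (h)
        Y = np.column_stack([                                 # 1000-kernel weight, density, terminal velocity
            30 + 0.05 * X[:, 0] + rng.normal(0, 1, 60),
            1.1 - 0.001 * X[:, 1] + rng.normal(0, 0.02, 60),
            7 + 0.01 * X[:, 0] + rng.normal(0, 0.2, 60),
        ])

        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                         max_iter=5000, random_state=0),
        )
        model.fit(X, Y)
        predicted = model.predict(X[:5])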

  10. Oversampling the Minority Class in the Feature Space.

    PubMed

    Perez-Ortiz, Maria; Gutierrez, Pedro Antonio; Tino, Peter; Hervas-Martinez, Cesar

    2016-09-01

    The imbalanced nature of some real-world data is one of the current challenges for machine learning researchers. One common approach oversamples the minority class through convex combination of its patterns. We explore the general idea of synthetic oversampling in the feature space induced by a kernel function (as opposed to input space). If the kernel function matches the underlying problem, the classes will be linearly separable and synthetically generated patterns will lie on the minority class region. Since the feature space is not directly accessible, we use the empirical feature space (EFS) (a Euclidean space isomorphic to the feature space) for oversampling purposes. The proposed method is framed in the context of support vector machines, where the imbalanced data sets can pose a serious hindrance. The idea is investigated in three scenarios: 1) oversampling in the full and reduced-rank EFSs; 2) a kernel learning technique maximizing the data class separation to study the influence of the feature space structure (implicitly defined by the kernel function); and 3) a unified framework for preferential oversampling that spans some of the previous approaches in the literature. We support our investigation with extensive experiments over 50 imbalanced data sets.
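
    A hedged numpy sketch of oversampling in the empirical feature space: the kernel matrix is eigendecomposed to obtain explicit EFS coordinates, and synthetic minority patterns are generated as convex combinations of minority rows in that space; a linear classifier (e.g. an SVM) can then be trained on the augmented EFS representation. The kernel choice, data and number of synthetic points are illustrative.

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel

        rng = np.random.default_rng(4)
        X_maj = rng.normal(0, 1, (200, 2))
        X_min = rng.normal(2.5, 0.5, (20, 2))
        X = np.vstack([X_maj, X_min])
        y = np.array([0] * 200 + [1] * 20)

        # Empirical feature space: K = V diag(lam) V^T  ->  Phi = V sqrt(lam), one row per sample.
        K = rbf_kernel(X, X, gamma=0.5)
        lam, V = np.linalg.eigh(K)
        keep = lam > 1e-10
        Phi = V[:, keep] * np.sqrt(lam[keep])

        # SMOTE-like convex combinations of minority-class rows in the EFS.
        minority = Phi[y == 1]
        i = rng.integers(0, len(minority), 180)
        j = rng.integers(0, len(minority), 180)
        t = rng.random((180, 1))
        synthetic = (1 - t) * minority[i] + t * minority[j]

        Phi_aug = np.vstack([Phi, synthetic])
        y_aug = np.concatenate([y, np.ones(180, dtype=int)])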

  11. An SVM-Based Solution for Fault Detection in Wind Turbines

    PubMed Central

    Santos, Pedro; Villa, Luisa F.; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-01-01

    Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines for the diagnosis of mechanical faults in their mechanical transmission chain is insufficient. A successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, together with electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that linear kernel SVM outperforms other kernels and ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of linear SVM is also experimentally analyzed, to conclude that this data acquisition technique generates linearly separable datasets. PMID:25760051
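
    A small sklearn sketch of the classification step, comparing cross-validated SVM kernels in the spirit of the study above; the synthetic feature matrix stands in for the processed vibration, electrical, torque and speed features, and the paper's two new kernels are not reproduced.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Placeholder for processed multi-sensor features (vibration spectra, torque, speed, ...).
        X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                                   n_classes=3, random_state=0)

        for kernel in ("linear", "rbf", "poly"):
            clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=1.0))
            scores = cross_val_score(clf, X, y, cv=5)
            print(f"{kernel:6s}  accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")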

  12. DANCING IN THE DARK: NEW BROWN DWARF BINARIES FROM KERNEL PHASE INTERFEROMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, Benjamin; Tuthill, Peter; Martinache, Frantz, E-mail: bjsp@physics.usyd.edu.au, E-mail: p.tuthill@physics.usyd.edu.au, E-mail: frantz@naoj.org

    2013-04-20

    This paper revisits a sample of ultracool dwarfs in the solar neighborhood previously observed with the Hubble Space Telescope's NICMOS NIC1 instrument. We have applied a novel high angular resolution data analysis technique based on the extraction and fitting of kernel phases to archival data. This was found to deliver a dramatic improvement over earlier analysis methods, permitting a search for companions down to projected separations of ~1 AU on NIC1 snapshot images. We reveal five new close binary candidates and present revised astrometry on previously known binaries, all of which were recovered with the technique. The new candidate binaries have sufficiently close separation to determine dynamical masses in a short-term observing campaign. We also present four marginal detections of objects which may be very close binaries or high-contrast companions. Including only confident detections within 19 pc, we report a binary fraction of at least ε_b = 17.2 (+5.7/-3.7)%. The results reported here provide new insights into the population of nearby ultracool binaries, while also offering an incisive case study of the benefits conferred by the kernel phase approach in the recovery of companions within a few resolution elements of the point-spread function core.

  13. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
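
    A small, textbook-style illustration of the regular trellis structure that makes Viterbi decoding of convolutional codes practical: a memory-2, rate-1/2 encoder (generators 7 and 5 in octal) and a hard-decision Viterbi decoder walking its 4-state trellis, which has 2 x 2^2 = 8 edges per section, i.e. 4 edges per encoded bit. This is the conventional trellis, not the minimal-trellis construction developed in the article.

        G = (0b111, 0b101)   # generator polynomials (7, 5 in octal) of the memory-2, rate-1/2 code
        M = 2                # encoder memory -> 2**M = 4 trellis states

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << M) | state                            # current bit + previous two bits
                out += [bin(reg & g).count("1") % 2 for g in G]   # two coded bits per input bit
                state = reg >> 1                                  # shift register update
            return out

        def viterbi(received, n_bits):
            INF, n_states = float("inf"), 1 << M
            metric = [0.0] + [INF] * (n_states - 1)               # start in the all-zero state
            paths = [[] for _ in range(n_states)]
            for t in range(n_bits):
                r = received[2 * t: 2 * t + 2]
                new_metric = [INF] * n_states
                new_paths = [[] for _ in range(n_states)]
                for s in range(n_states):
                    if metric[s] == INF:
                        continue
                    for b in (0, 1):                              # 2 outgoing edges per state per section
                        reg = (b << M) | s
                        out = [bin(reg & g).count("1") % 2 for g in G]
                        ns = reg >> 1
                        m = metric[s] + sum(o != ri for o, ri in zip(out, r))
                        if m < new_metric[ns]:
                            new_metric[ns], new_paths[ns] = m, paths[s] + [b]
                metric, paths = new_metric, new_paths
            return paths[min(range(n_states), key=lambda s: metric[s])]

        msg = [1, 0, 1, 1, 0, 0, 1, 0]
        code = encode(msg)
        code[3] ^= 1                                              # flip one bit to simulate a channel error
        assert viterbi(code, len(msg)) == msg                     # the single error is corrected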

  14. Bending the Rules: Widefield Microscopy and the Abbe Limit of Resolution

    PubMed Central

    Verdaasdonk, Jolien S.; Stephens, Andrew D.; Haase, Julian; Bloom, Kerry

    2014-01-01

    One of the most fundamental concepts of microscopy is that of resolution–the ability to clearly distinguish two objects as separate. Recent advances such as structured illumination microscopy (SIM) and point localization techniques including photoactivated localization microscopy (PALM), and stochastic optical reconstruction microscopy (STORM) strive to overcome the inherent limits of resolution of the modern light microscope. These techniques, however, are not always feasible or optimal for live cell imaging. Thus, in this review, we explore three techniques for extracting high resolution data from images acquired on a widefield microscope–deconvolution, model convolution, and Gaussian fitting. Deconvolution is a powerful tool for restoring a blurred image using knowledge of the point spread function (PSF) describing the blurring of light by the microscope, although care must be taken to ensure accuracy of subsequent quantitative analysis. The process of model convolution also requires knowledge of the PSF to blur a simulated image which can then be compared to the experimentally acquired data to reach conclusions regarding its geometry and fluorophore distribution. Gaussian fitting is the basis for point localization microscopy, and can also be applied to tracking spot motion over time or measuring spot shape and size. All together, these three methods serve as powerful tools for high-resolution imaging using widefield microscopy. PMID:23893718
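
    A short scipy sketch of the Gaussian-fitting approach to sub-pixel spot measurement: a 2-D Gaussian is fitted to an image region by non-linear least squares, and the fitted centre, width and amplitude give the spot position, size and brightness. The simulated spot and starting guesses are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(coords, amp, x0, y0, sigma, offset):
            x, y = coords
            return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset).ravel()

        # Simulated diffraction-limited spot with Poisson-like noise (placeholder for real data).
        rng = np.random.default_rng(5)
        y, x = np.mgrid[0:21, 0:21]
        truth = gauss2d((x, y), 500, 10.3, 9.7, 1.8, 20).reshape(21, 21)
        img = rng.poisson(truth).astype(float)

        p0 = (img.max() - img.min(), 10, 10, 2, img.min())        # rough starting guesses
        popt, pcov = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
        amp, x0, y0, sigma, offset = popt                         # sub-pixel centre (x0, y0), width sigma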

  15. An ensemble method for extracting adverse drug events from social media.

    PubMed

    Liu, Jing; Zhao, Songzheng; Zhang, Xiaodi

    2016-06-01

    Because adverse drug events (ADEs) are a serious health problem and a leading cause of death, it is of vital importance to identify them correctly and in a timely manner. With the development of Web 2.0, social media has become a large data source for information on ADEs. The objective of this study is to develop a relation extraction system that uses natural language processing techniques to effectively distinguish between ADEs and non-ADEs in informal text on social media. We develop a feature-based approach that utilizes various lexical, syntactic, and semantic features. Information-gain-based feature selection is performed to address high-dimensional features. Then, we evaluate the effectiveness of four well-known kernel-based approaches (i.e., subset tree kernel, tree kernel, shortest dependency path kernel, and all-paths graph kernel) and several ensembles that are generated by adopting different combination methods (i.e., majority voting, weighted averaging, and stacked generalization). All of the approaches are tested using three data sets: two health-related discussion forums and one general social networking site (i.e., Twitter). When investigating the contribution of each feature subset, the feature-based approach attains the best area under the receiver operating characteristics curve (AUC) values, which are 78.6%, 72.2%, and 79.2% on the three data sets. When individual methods are used, we attain the best AUC values of 82.1%, 73.2%, and 77.0% using the subset tree kernel, shortest dependency path kernel, and feature-based approach on the three data sets, respectively. When using classifier ensembles, we achieve the best AUC values of 84.5%, 77.3%, and 84.5% on the three data sets, outperforming the baselines. Our experimental results indicate that ADE extraction from social media can benefit from feature selection. With respect to the effectiveness of different feature subsets, lexical features and semantic features can enhance the ADE extraction capability. Kernel-based approaches, which can stay away from the feature sparsity issue, are qualified to address the ADE extraction problem. Combining different individual classifiers using suitable combination methods can further enhance the ADE extraction effectiveness. Copyright © 2016 Elsevier B.V. All rights reserved.
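
    A minimal sklearn sketch of the ensemble step (soft/majority voting and stacked generalization over heterogeneous base classifiers); the feature matrix is a stand-in for the lexical, syntactic and semantic features, and an RBF-kernel SVM stands in for the kernel-based learners of the paper.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import StackingClassifier, VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        # Placeholder for ADE/non-ADE sentences encoded as feature vectors.
        X, y = make_classification(n_samples=800, n_features=50, weights=[0.7], random_state=0)

        base = [("svm", SVC(kernel="rbf", probability=True, random_state=0)),
                ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
                ("logit", LogisticRegression(max_iter=1000))]

        voting = VotingClassifier(estimators=base, voting="soft")        # (weighted) voting
        stacking = StackingClassifier(estimators=base,                   # stacked generalization
                                      final_estimator=LogisticRegression(max_iter=1000))

        for name, ens in [("voting", voting), ("stacking", stacking)]:
            auc = cross_val_score(ens, X, y, cv=5, scoring="roc_auc")
            print(f"{name:9s} AUC = {auc.mean():.3f}")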

  16. On- and off-axis spectral emission features from laser-produced gas breakdown plasmas

    NASA Astrophysics Data System (ADS)

    Harilal, S. S.; Skrodzki, P. J.; Miloshevsky, A.; Brumfield, B. E.; Phillips, M. C.; Miloshevsky, G.

    2017-06-01

    Laser-heated gas breakdown plasmas or sparks emit profoundly in the ultraviolet and visible region of the electromagnetic spectrum with contributions from ionic, atomic, and molecular species. Laser created kernels expand into a cold ambient with high velocities during their early lifetime followed by confinement of the plasma kernel and eventually collapse. However, the plasma kernels produced during laser breakdown of gases are also capable of exciting and ionizing the surrounding ambient medium. Two mechanisms can be responsible for excitation and ionization of the surrounding ambient: photoexcitation and ionization by intense ultraviolet emission from the sparks produced during the early times of their creation and/or heating by strong shocks generated by the kernel during its expansion into the ambient. In this study, an investigation is made on the spectral features of on- and off-axis emission of laser-induced plasma breakdown kernels generated in atmospheric pressure conditions with an aim to elucidate the mechanisms leading to ambient excitation and emission. Pulses from an Nd:YAG laser emitting at 1064 nm with a pulse duration of 6 ns are used to generate plasma kernels. Laser sparks were generated in air, argon, and helium gases to provide different physical properties of expansion dynamics and plasma chemistry considering the differences in laser absorption properties, mass density, and speciation. Point shadowgraphy and time-resolved imaging were used to evaluate the shock wave and spark self-emission morphology at early and late times, while space and time resolved spectroscopy is used for evaluating the emission features and for inferring plasma physical conditions at on- and off-axis positions. The structure and dynamics of the plasma kernel obtained using imaging techniques are also compared to numerical simulations using the computational fluid dynamics code. The emission from the kernel showed that spectral features from ions, atoms, and molecules are separated in time with early time temperatures and densities in excess of 35 000 K and 4 × 10^18/cm^3 with an existence of thermal equilibrium. However, the emission from the off-kernel positions from the breakdown plasmas showed enhanced ultraviolet radiation with the presence of N2 bands and is represented by non-local thermodynamic equilibrium (non-LTE) conditions. Our results also highlight that the ultraviolet radiation emitted during the early time of spark evolution is the predominant source of the photo-excitation of the surrounding medium.

  17. On- and off-axis spectral emission features from laser-produced gas breakdown plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harilal, S. S.; Skrodzki, P. J.; Miloshevsky, A.

    Laser-heated gas breakdown plasmas or sparks emit profoundly in the ultraviolet and visible region of the electromagnetic spectrum with contributions from ionic, atomic, and molecular species. Laser created kernels expand into a cold ambient with high velocities during their early lifetime followed by confinement of the plasma kernel and eventually collapse. However, the plasma kernels produced during laser breakdown of gases are also capable of exciting and ionizing the surrounding ambient medium. Two mechanisms can be responsible for excitation and ionization of the surrounding ambient: photoexcitation and ionization by intense ultraviolet emission from the sparks produced during the early times of their creation and/or heating by strong shocks generated by the kernel during its expansion into the ambient. In this study, an investigation is made on the spectral features of on- and off-axis emission of laser-induced plasma breakdown kernels generated in atmospheric pressure conditions with an aim to elucidate the mechanisms leading to ambient excitation and emission. Pulses from an Nd:YAG laser emitting at 1064 nm with 6 ns pulse duration are used to generate plasma kernels. Laser sparks were generated in air, argon, and helium gases to provide different physical properties of expansion dynamics and plasma chemistry considering the differences in laser absorption properties, mass density and speciation. Point shadowgraphy and time-resolved imaging were used to evaluate the shock wave and spark self-emission morphology at early and late times, while space and time resolved spectroscopy is used for evaluating the emission features as well as for inferring plasma fundamentals at on- and off-axis positions. Structure and dynamics of the plasma kernel obtained using imaging techniques are also compared to numerical simulations using a computational fluid dynamics code. The emission from the kernel showed that spectral features from ions, atoms and molecules are separated in time, with early-time temperatures and densities in excess of 35 000 K and 4×10^18/cm^3 and an existence of thermal equilibrium. However, the emission from the off-kernel positions from the breakdown plasmas showed enhanced ultraviolet radiation with the presence of N2 bands and is represented by non-LTE conditions. Finally, our results also highlight that the ultraviolet radiation emitted during the early time of spark evolution is the predominant source of the photo-excitation of the surrounding medium.

  18. On- and off-axis spectral emission features from laser-produced gas breakdown plasmas

    DOE PAGES

    Harilal, S. S.; Skrodzki, P. J.; Miloshevsky, A.; ...

    2017-06-01

    Laser-heated gas breakdown plasmas or sparks emit profoundly in the ultraviolet and visible region of the electromagnetic spectrum with contributions from ionic, atomic, and molecular species. Laser created kernels expand into a cold ambient with high velocities during their early lifetime followed by confinement of the plasma kernel and eventually collapse. However, the plasma kernels produced during laser breakdown of gases are also capable of exciting and ionizing the surrounding ambient medium. Two mechanisms can be responsible for excitation and ionization of the surrounding ambient: photoexcitation and ionization by intense ultraviolet emission from the sparks produced during the early times of their creation and/or heating by strong shocks generated by the kernel during its expansion into the ambient. In this study, an investigation is made on the spectral features of on- and off-axis emission of laser-induced plasma breakdown kernels generated in atmospheric pressure conditions with an aim to elucidate the mechanisms leading to ambient excitation and emission. Pulses from an Nd:YAG laser emitting at 1064 nm with 6 ns pulse duration are used to generate plasma kernels. Laser sparks were generated in air, argon, and helium gases to provide different physical properties of expansion dynamics and plasma chemistry considering the differences in laser absorption properties, mass density and speciation. Point shadowgraphy and time-resolved imaging were used to evaluate the shock wave and spark self-emission morphology at early and late times, while space and time resolved spectroscopy is used for evaluating the emission features as well as for inferring plasma fundamentals at on- and off-axis positions. Structure and dynamics of the plasma kernel obtained using imaging techniques are also compared to numerical simulations using a computational fluid dynamics code. The emission from the kernel showed that spectral features from ions, atoms and molecules are separated in time, with early-time temperatures and densities in excess of 35 000 K and 4×10^18/cm^3 and an existence of thermal equilibrium. However, the emission from the off-kernel positions from the breakdown plasmas showed enhanced ultraviolet radiation with the presence of N2 bands and is represented by non-LTE conditions. Our results also highlight that the ultraviolet radiation emitted during the early time of spark evolution is the predominant source of the photo-excitation of the surrounding medium.

  19. Finding strong lenses in CFHTLS using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Jacobs, C.; Glazebrook, K.; Collett, T.; More, A.; McCarthy, C.

    2017-10-01

    We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62 406 simulated lenses and 64 673 non-lens negative examples generated with two different methodologies. An ensemble of trained networks was applied to all of the 171 deg2 of the CFHTLS wide field image data, identifying 18 861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalogue as potential deflectors, identified 2465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2097 candidates we classify as false positives. For the catalogue-based search we estimate a completeness of 21-28 per cent with respect to detectable lenses and a purity of 15 per cent, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify 20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.

  20. TernaryNet: faster deep model inference without GPUs for medical 3D segmentation using sparse and binary convolutions.

    PubMed

    Heinrich, Mattias P; Blendowski, Max; Oktay, Ozan

    2018-05-30

    Deep convolutional neural networks (DCNN) are currently ubiquitous in medical imaging. While their versatility and high-quality results for common image analysis tasks including segmentation, localisation and prediction are astonishing, the large representational power comes at the cost of highly demanding computational effort. This limits their practical applications for image-guided interventions and diagnostic (point-of-care) support using mobile devices without graphics processing units (GPU). We propose a new scheme that approximates both trainable weights and neural activations in deep networks by ternary values and tackles the open question of backpropagation when dealing with non-differentiable functions. Our solution enables the removal of the expensive floating-point matrix multiplications throughout any convolutional neural network and replaces them by energy- and time-preserving binary operators and population counts. We evaluate our approach for the segmentation of the pancreas in CT. Here, our ternary approximation within a fully convolutional network leads to more than 90% memory reductions and high accuracy (without any post-processing) with a Dice overlap of 71.0% that comes close to the one obtained when using networks with high-precision weights and activations. We further provide a concept for sub-second inference without GPUs and demonstrate significant improvements in comparison with binary quantisation and without our proposed ternary hyperbolic tangent continuation. We present a key enabling technique for highly efficient DCNN inference without GPUs that will help to bring the advances of deep learning to practical clinical applications. It also has great promise for improving accuracies in large-scale medical data retrieval.
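
    A small PyTorch sketch of the core ternarisation step: values are mapped to {-1, 0, +1} with a magnitude threshold, and a straight-through estimator passes gradients through the non-differentiable rounding. The threshold and the absence of scaling factors are simplifying assumptions, and the paper's ternary hyperbolic tangent continuation is not reproduced.

        import torch

        class Ternarize(torch.autograd.Function):
            @staticmethod
            def forward(ctx, w, threshold=0.05):
                # Map values to {-1, 0, +1}: zero out small magnitudes, keep the sign of the rest.
                return torch.sign(w) * (w.abs() > threshold).float()

            @staticmethod
            def backward(ctx, grad_output):
                # Straight-through estimator: pass the gradient through unchanged.
                return grad_output, None

        w = torch.randn(64, 64, requires_grad=True)
        w_t = Ternarize.apply(w)                       # ternary weights used in place of float weights
        loss = (w_t ** 2).sum()
        loss.backward()                                # gradients still flow back to the float weights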

  1. Radio Galaxy Zoo: compact and extended radio source classification with deep learning

    NASA Astrophysics Data System (ADS)

    Lukic, V.; Brüggen, M.; Banfield, J. K.; Wong, O. I.; Rudnick, L.; Norris, R. P.; Simmons, B.

    2018-05-01

    Machine learning techniques have been increasingly useful in astronomical applications over the last few years, for example in the morphological classification of galaxies. Convolutional neural networks have proven to be highly effective in classifying objects in image data. In the context of radio-interferometric imaging in astronomy, we looked for ways to identify multiple components of individual sources. To this effect, we design a convolutional neural network to differentiate between different morphology classes using sources from the Radio Galaxy Zoo (RGZ) citizen science project. In this first step, we focus on exploring the factors that affect the performance of such neural networks, such as the amount of training data, number and nature of layers, and the hyperparameters. We begin with a simple experiment in which we only differentiate between two extreme morphologies, using compact and multiple-component extended sources. We found that a three-convolutional layer architecture yielded very good results, achieving a classification accuracy of 97.4 per cent on a test data set. The same architecture was then tested on a four-class problem where we let the network classify sources into compact and three classes of extended sources, achieving a test accuracy of 93.5 per cent. The best-performing convolutional neural network set-up has been verified against RGZ Data Release 1 where a final test accuracy of 94.8 per cent was obtained, using both original and augmented images. The use of sigma clipping does not offer a significant benefit overall, except in cases with a small number of training images.

  2. Improving energy efficiency in handheld biometric applications

    NASA Astrophysics Data System (ADS)

    Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.

    2012-06-01

    With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating point operations. If a given algorithm implemented integer convolution vice floating point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared include 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable size looped convolution, static size looped convolution, and unrolled looped convolution. All testing was performed using the HTC Thunderbolt with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time that convolution is responsible for in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.
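
    A rough sketch of the integer-versus-floating-point idea: the convolution kernel is scaled to fixed-point integers so that the accumulation uses only integer multiplies and adds, with a single shift back at the end. The 8-bit scale factor and test data are assumptions; actual energy savings depend on the target processor.

        import numpy as np
        from scipy.signal import convolve2d

        rng = np.random.default_rng(6)
        image = rng.integers(0, 256, (128, 128), dtype=np.int32)    # e.g. an 8-bit iris image
        kernel = np.array([[1, 0, -1],
                           [2, 0, -2],
                           [1, 0, -1]], dtype=np.float64) / 8.0     # a small directional filter

        # Floating-point convolution (reference result).
        float_result = convolve2d(image.astype(np.float64), kernel, mode="same")

        # Fixed-point: scale the kernel by 2**8, convolve in integers, shift back at the end.
        SHIFT = 8
        kernel_fp = np.round(kernel * (1 << SHIFT)).astype(np.int32)
        int_result = convolve2d(image, kernel_fp, mode="same") >> SHIFT

        # The fixed-point result matches the float result to within quantisation error.
        print(np.max(np.abs(int_result - float_result)))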

  3. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    PubMed

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
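
    A hedged numpy sketch of the generic low-rank device behind this kind of speed-up, here a Nyström approximation (not the fastKM algorithm itself): the full n x n kernel matrix is approximated from a small set of landmark samples, so downstream variance-component computations can work with low-rank factors instead of the full matrix. The genotype matrix, kernel and landmark count are placeholders.

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel

        rng = np.random.default_rng(7)
        G = rng.integers(0, 3, (2000, 50)).astype(float)       # placeholder genotype matrix (n x p)

        m = 100                                                # number of landmark samples (m << n)
        landmarks = rng.choice(len(G), size=m, replace=False)

        K_nm = rbf_kernel(G, G[landmarks], gamma=0.01)         # n x m block
        K_mm = rbf_kernel(G[landmarks], G[landmarks], gamma=0.01)

        # Nystrom approximation: K ~ K_nm @ pinv(K_mm) @ K_nm.T, with rank at most m.
        K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T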

  4. High-Throughput Sequencing of Small RNA Transcriptomes in Maize Kernel Identifies miRNAs Involved in Embryo and Endosperm Development.

    PubMed

    Xing, Lijuan; Zhu, Ming; Zhang, Min; Li, Wenzong; Jiang, Haiyang; Zou, Junjie; Wang, Lei; Xu, Miaoyun

    2017-12-14

    Maize kernel development is a complex biological process that involves the temporal and spatial expression of many genes and fine gene regulation at a transcriptional and post-transcriptional level, and microRNAs (miRNAs) play vital roles during this process. To gain insight into miRNA-mediated regulation of maize kernel development, a deep-sequencing technique was used to investigate the dynamic expression of miRNAs in the embryo and endosperm at three developmental stages in B73. By miRNA transcriptomic analysis, we characterized 132 known miRNAs and six novel miRNAs in developing maize kernel, among which, 15 and 14 miRNAs were commonly differentially expressed between the embryo and endosperm at 9 days after pollination (DAP), 15 DAP and 20 DAP respectively. Conserved miRNA families such as miR159, miR160, miR166, miR390, miR319, miR528 and miR529 were highly expressed in developing embryos; miR164, miR171, miR393 and miR2118 were highly expressed in developing endosperm. Genes targeted by those highly expressed miRNAs were found to be largely related to a regulation category, including the transcription, macromolecule biosynthetic and metabolic process in the embryo as well as the vitamin biosynthetic and metabolic process in the endosperm. Quantitative reverse transcription-PCR (qRT-PCR) analysis showed that these miRNAs displayed a negative correlation with the levels of their corresponding target genes. Importantly, our findings revealed that members of the miR169 family were highly and dynamically expressed in the developing kernel, which will help to exploit new players functioning in maize kernel development.

  5. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    NASA Astrophysics Data System (ADS)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PReLU activation function is studied, improving image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which makes it very suitable for processing images. Using a deep convolutional neural network is better than directly extracting image visual features for image retrieval. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
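
    An illustrative PyTorch sketch of the two ingredients above: PReLU activations after each convolution, and an L1 penalty on the weights added to the loss to discourage over-fitting. Layer sizes, the penalty weight and the data are placeholders.

        import torch
        import torch.nn as nn

        class RetrievalCNN(nn.Module):
            def __init__(self, n_classes=10):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(64, 64, 3, padding=1), nn.PReLU(), nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(64, n_classes)

            def forward(self, x):
                feats = self.features(x).flatten(1)    # 64-d descriptor usable for retrieval
                return feats, self.classifier(feats)

        model = RetrievalCNN()
        images, labels = torch.randn(8, 3, 64, 64), torch.randint(0, 10, (8,))
        feats, logits = model(images)

        # Cross-entropy loss plus an L1 penalty on all weights to discourage over-fitting.
        l1 = sum(p.abs().sum() for p in model.parameters())
        loss = nn.functional.cross_entropy(logits, labels) + 1e-5 * l1
        loss.backward()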

  6. Trigonometric Transforms for Image Reconstruction

    DTIC Science & Technology

    1998-06-01

    applying trigonometric transforms to image reconstruction problems. Many existing linear image reconstruction techniques rely on knowledge of...ancestors. The research performed for this dissertation represents the first time the symmetric convolution-multiplication property of trigonometric...Fourier domain. The traditional representation of these filters will be similar to new trigonometric transform versions derived in later chapters

  7. A digital communications system for manned spaceflight applications.

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Moorehead, R. W.

    1973-01-01

    A highly efficient, all-digital communications signal design employing convolutional coding and PN spectrum spreading is described for two-way transmission of voice and data between a manned spacecraft and ground. Variable-slope delta modulation is selected for analog/digital conversion of the voice signal, and a convolutional decoder utilizing the Viterbi decoding algorithm is selected for use at each receiving terminal. A PN spread spectrum technique is implemented to protect against multipath effects and to reduce the energy density (per unit bandwidth) impinging on the earth's surface to a value within the guidelines adopted by international agreement. Performance predictions are presented for transmission via a TDRS (tracking and data relay satellite) system and for direct transmission between the spacecraft and earth. Hardware estimates are provided for a flight-qualified communications system employing the coded digital signal design.

  8. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  9. A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Goldberg, Hirsh; Nasrabadi, Nasser M.

    2007-04-01

    In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral imagery. First, a dual window technique is used to separate the local area around each pixel into two regions - an inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectra and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation of the projection of the current test pixel spectra and the OWR mean spectra are greater than a certain threshold. Comparisons are made using receiver operating characteristics (ROC) curves.
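
    A compact numpy sketch of the RX baseline mentioned above (Mahalanobis distance of each pixel spectrum from global background statistics); the dual-window and kernelised variants described in the paper replace the global statistics with local OWR statistics and kernel-induced feature spaces, respectively. The hyperspectral cube here is synthetic.

        import numpy as np

        rng = np.random.default_rng(8)
        H, W, B = 64, 64, 50
        cube = rng.normal(0, 1, (H, W, B))
        cube[30:33, 40:43] += 4.0                         # implant a small spectral anomaly

        X = cube.reshape(-1, B)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(B))   # regularised inverse covariance

        # RX score: Mahalanobis distance of each pixel spectrum from the background mean.
        diff = X - mu
        rx = np.einsum("ij,jk,ik->i", diff, cov_inv, diff).reshape(H, W)

        threshold = np.percentile(rx, 99.5)
        anomaly_mask = rx > threshold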

  10. SU-F-T-672: A Novel Kernel-Based Dose Engine for KeV Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reinhart, M; Fast, M F; Nill, S

    2016-06-15

    Purpose: Mimicking state-of-the-art patient radiotherapy with high precision irradiators for small animals allows advanced dose-effect studies and radiobiological investigations. One example is the implementation of pre-clinical IMRT-like irradiations, which requires the development of inverse planning for keV photon beams. As a first step, we present a novel kernel-based dose calculation engine for keV x-rays with explicit consideration of energy and material dependencies. Methods: We follow a superposition-convolution approach adapted to keV x-rays, based on previously published work on micro-beam therapy. In small animal radiotherapy, we assume local energy deposition at the photon interaction point, since the electron ranges in tissue are of the same order of magnitude as the voxel size. This allows us to use photon-only kernel sets generated by MC simulations, which are pre-calculated for six energy windows and ten base materials. We validate our stand-alone dose engine against Geant4 MC simulations for various beam configurations in water, slab phantoms with bone and lung inserts, and on a mouse CT with (0.275 mm)^3 voxels. Results: We observe good agreement for all cases. For field sizes of 1 mm^2 to 1 cm^2 in water, the depth dose curves agree within 1% (mean), with the largest deviations in the first voxel (4%) and at depths > 5 cm (<2.5%). The out-of-field doses at 1 cm depth agree within 8% (mean) for all but the smallest field size. In slab geometries, the mean agreement was within 3%, with maximum deviations of 8% at water-bone interfaces. The γ-index (1 mm/1%) passing rate for a single-field mouse irradiation is 71%. Conclusion: The presented dose engine yields an accurate representation of keV-photon doses suitable for inverse treatment planning for IMRT. It has the potential to become a significantly faster yet sufficiently accurate alternative to full MC simulations. Further investigations will focus on energy sampling as well as calculation times. Research at ICR is also supported by Cancer Research UK under Programme C33589/A19727 and NHS funding to the NIHR Biomedical Research Centre at RMH and ICR. MFF is supported by Cancer Research UK under Programme C33589/A19908.
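
    A toy sketch of the superposition-convolution idea in its simplest homogeneous form: an interaction-density (terma-like) distribution is convolved with a single pre-computed energy-deposition kernel via FFT. The energy- and material-dependent kernel selection described above is omitted, and the beam, attenuation coefficient and Gaussian kernel are stand-in assumptions.

        import numpy as np
        from scipy.signal import fftconvolve

        # Toy 3-D interaction-density distribution: a narrow beam attenuated exponentially with depth.
        nx, ny, nz = 64, 64, 64
        mu = 0.25                                        # assumed linear attenuation per voxel
        terma = np.zeros((nx, ny, nz))
        terma[30:34, 30:34, :] = np.exp(-mu * np.arange(nz))

        # Pre-computed energy-deposition kernel (here an isotropic Gaussian stand-in).
        r = np.arange(-5, 6)
        X, Y, Z = np.meshgrid(r, r, r, indexing="ij")
        kernel = np.exp(-(X**2 + Y**2 + Z**2) / 4.0)
        kernel /= kernel.sum()

        dose = fftconvolve(terma, kernel, mode="same")   # superposition of kernels at every voxel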

  11. GASAKe: forecasting landslide activations by a genetic-algorithms-based hydrological model

    NASA Astrophysics Data System (ADS)

    Terranova, O. G.; Gariano, S. L.; Iaquinta, P.; Iovine, G. G. R.

    2015-07-01

    GASAKe is a new hydrological model aimed at forecasting the triggering of landslides. The model is based on genetic algorithms and allows one to obtain thresholds for the prediction of slope failures using dates of landslide activations and rainfall series. It can be applied to either single landslides or a set of similar slope movements in a homogeneous environment. Calibration of the model provides families of optimal, discretized solutions (kernels) that maximize the fitness function. Starting from the kernels, the corresponding mobility functions (i.e., the predictive tools) can be obtained through convolution with the rain series. The base time of the kernel is related to the magnitude of the considered slope movement, as well as to the hydro-geological complexity of the site. Generally, shorter base times are expected for shallow slope instabilities compared to larger-scale phenomena. Once validated, the model can be applied to estimate the timing of future landslide activations in the same study area, by employing measured or forecasted rainfall series. Examples of application of GASAKe to a medium-size slope movement (the Uncino landslide at San Fili, in Calabria, southern Italy) and to a set of shallow landslides (in the Sorrento Peninsula, Campania, southern Italy) are discussed. In both cases, a successful calibration of the model has been achieved, despite unavoidable uncertainties concerning the dates of occurrence of the slope movements. In particular, for the Sorrento Peninsula case, a fitness of 0.81 has been obtained by calibrating the model against 10 dates of landslide activation; in the Uncino case, a fitness of 1 (i.e., neither missing nor false alarms) has been achieved using five activations. As for temporal validation, the experiments performed by considering further dates of activation have also proved satisfactory. In view of early-warning applications for civil protection, the capability of the model to simulate the occurrences of the Uncino landslide has been tested by means of a progressive, self-adaptive procedure. Finally, a sensitivity analysis has been performed by taking into account the main parameters of the model. The obtained results are quite promising, given the high performance of the model against different types of slope instabilities characterized by several historical activations. Nevertheless, further refinements are still needed for application to landslide risk mitigation within early-warning and decision-support systems.
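
    A minimal numpy sketch of the mobility function: a discretised kernel (standing in for a GA-calibrated one) is convolved with the rainfall series, and activations are predicted when the mobility function exceeds a threshold. The rain series, kernel shape and threshold are illustrative placeholders.

        import numpy as np

        rng = np.random.default_rng(9)
        rain = rng.gamma(0.3, 10.0, 365)                     # daily rainfall series (mm), placeholder
        base_time = 30                                       # kernel base time in days
        kernel = np.exp(-np.arange(base_time) / 10.0)        # stand-in for a GA-calibrated kernel
        kernel /= kernel.sum()

        # Mobility function: convolution of the kernel with the antecedent rainfall.
        mobility = np.convolve(rain, kernel)[: len(rain)]

        threshold = np.percentile(mobility, 98)              # calibrated against known activation dates
        predicted_alarm_days = np.flatnonzero(mobility > threshold)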

  12. Three-dimensional waveform sensitivity kernels

    NASA Astrophysics Data System (ADS)

    Marquering, Henk; Nolet, Guust; Dahlen, F. A.

    1998-03-01

    The sensitivity of intermediate-period (~10-100s) seismic waveforms to the lateral heterogeneity of the Earth is computed using an efficient technique based upon surface-wave mode coupling. This formulation yields a general, fully fledged 3-D relationship between data and model without imposing smoothness constraints on the lateral heterogeneity. The calculations are based upon the Born approximation, which yields a linear relation between data and model. The linear relation ensures fast forward calculations and makes the formulation suitable for inversion schemes; however, higher-order effects such as wave-front healing are neglected. By including up to 20 surface-wave modes, we obtain Fréchet, or sensitivity, kernels for waveforms in the time frame that starts at the S arrival and which includes direct and surface-reflected body waves. These 3-D sensitivity kernels provide new insights into seismic-wave propagation, and suggest that there may be stringent limitations on the validity of ray-theoretical interpretations. Even recently developed 2-D formulations, which ignore structure out of the source-receiver plane, differ substantially from our 3-D treatment. We infer that smoothness constraints on heterogeneity, required to justify the use of ray techniques, are unlikely to hold in realistic earth models. This puts the use of ray-theoretical techniques into question for the interpretation of intermediate-period seismic data. The computed 3-D sensitivity kernels display a number of phenomena that are counter-intuitive from a ray-geometrical point of view: (1) body waves exhibit significant sensitivity to structure up to 500km away from the source-receiver minor arc; (2) significant near-surface sensitivity above the two turning points of the SS wave is observed; (3) the later part of the SS wave packet is most sensitive to structure away from the source-receiver path; (4) the sensitivity of the higher-frequency part of the fundamental surface-wave mode is wider than for its faster, lower-frequency part; (5) delayed body waves may considerably influence fundamental Rayleigh and Love waveforms. The strong sensitivity of waveforms to crustal structure due to fundamental-mode-to-body-wave scattering precludes the use of phase-velocity filters to model body-wave arrivals. Results from the 3-D formulation suggest that the use of 2-D and 1-D techniques for the interpretation of intermediate-period waveforms should seriously be reconsidered.

  13. Dosimetric verification of radiation therapy including intensity modulated treatments, using an amorphous-silicon electronic portal imaging device

    NASA Astrophysics Data System (ADS)

    Chytyk-Praznik, Krista Joy

    Radiation therapy is continuously increasing in complexity due to technological innovation in delivery techniques, necessitating thorough dosimetric verification. Comparing accurately predicted portal dose images to measured images obtained during patient treatment can determine if a particular treatment was delivered correctly. The goal of this thesis was to create a method to predict portal dose images that was versatile and accurate enough to use in a clinical setting. All measured images in this work were obtained with an amorphous silicon electronic portal imaging device (a-Si EPID), but the technique is applicable to any planar imager. A detailed, physics-motivated fluence model was developed to characterize fluence exiting the linear accelerator head. The model was further refined using results from Monte Carlo simulations and schematics of the linear accelerator. The fluence incident on the EPID was converted to a portal dose image through a superposition of Monte Carlo-generated, monoenergetic dose kernels specific to the a-Si EPID. Predictions of clinical IMRT fields with no patient present agreed with measured portal dose images within 3% and 3 mm. The dose kernels were applied ignoring the geometrically divergent nature of incident fluence on the EPID. A computational investigation into this parallel dose kernel assumption determined its validity under clinically relevant situations. Introducing a patient or phantom into the beam required the portal image prediction algorithm to account for patient scatter and attenuation. Primary fluence was calculated by attenuating raylines cast through the patient CT dataset, while scatter fluence was determined through the superposition of pre-calculated scatter fluence kernels. Total dose in the EPID was calculated by convolving the total predicted incident fluence with the EPID-specific dose kernels. The algorithm was tested on water slabs with square fields, agreeing with measurement within 3% and 3 mm. The method was then applied to five prostate and six head-and-neck IMRT treatment courses (~1900 clinical images). Deviations between the predicted and measured images were quantified. The portal dose image prediction model developed in this thesis work has been shown to be accurate, and it was demonstrated to be able to verify patients' delivered radiation treatments.

  14. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1994-01-01

    The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments with data of varying importance include a number of speech coding algorithms, packet-switched networks, multi-user systems, embedded coding systems, and high-definition television. Encoders which provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector, d, is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder G, the effective free distance vector is defined as the k-dimensional vector d = (d_0, d_1, ..., d_(k-1)), where d_j, the j-th effective free distance, is the lowest Hamming weight among all code sequences generated by input sequences with at least one '1' in the j-th position. It is shown that, although the free distance of a code is unique to the code and independent of the encoder realization, the effective free distance vector depends on the encoder realization.
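
    For a k = 1 encoder the effective free distance vector reduces to the ordinary free distance, which can be found for short codes by brute force; for k > 1, d_j would be obtained by the same search restricted to input sequences with a '1' in the j-th input position. The sketch below does the k = 1 case for the standard rate-1/2, constraint-length-3 encoder with generators (7, 5) octal; the encoder choice and the search depth are illustrative assumptions, not taken from the report.

      from itertools import product

      G = (0b111, 0b101)   # generator polynomials (7, 5 octal); illustrative choice
      K = 3                # constraint length

      def encode(bits):
          """Rate-1/2 convolutional encoder, zero-terminated with K-1 flush bits."""
          state, out = 0, []
          for b in list(bits) + [0] * (K - 1):
              state = ((state << 1) | b) & ((1 << K) - 1)
              out += [bin(state & g).count("1") % 2 for g in G]
          return out

      def free_distance(max_len=10):
          """Minimum codeword weight over all nonzero inputs up to max_len bits."""
          best = None
          for length in range(1, max_len + 1):
              for bits in product((0, 1), repeat=length):
                  if any(bits):
                      w = sum(encode(bits))
                      best = w if best is None else min(best, w)
          return best

      print(free_distance())   # 5 for the (7, 5) code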

  15. Pre-trained convolutional neural networks as feature extractors for tuberculosis detection.

    PubMed

    Lopes, U K; Valiati, J F

    2017-10-01

    It is estimated that in 2015, approximately 1.8 million people infected by tuberculosis died, most of them in developing countries. Many of those deaths could have been prevented if the disease had been detected at an earlier stage, but the most advanced diagnosis methods are still cost-prohibitive for mass adoption. One of the most popular tuberculosis diagnosis methods is the analysis of frontal thoracic radiographs; however, the impact of this method is diminished by the need for individual analysis of each radiograph by properly trained radiologists. Significant research exists on automating diagnosis by applying computational techniques to medical images, thereby eliminating the need for individual image analysis and greatly diminishing overall costs. In addition, recent advances in deep learning have achieved excellent results in image classification across diverse domains, but their application to tuberculosis diagnosis remains limited. Thus, the focus of this work is to advance research in the area by presenting three approaches to applying pre-trained convolutional neural networks as feature extractors to detect the disease. The proposed approaches are implemented and compared to the current literature. The results obtained are competitive with published work, demonstrating the potential of pre-trained convolutional networks as medical image feature extractors.
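
    A minimal sketch of the feature-extraction idea, assuming an ImageNet-pretrained ResNet-18 backbone from torchvision (the backbone choice and the input file name are illustrative assumptions, not the networks evaluated in the paper): the classification head is removed and the remaining network maps each radiograph to a fixed-length feature vector, which would then be fed to a conventional classifier.

      import torch
      from torchvision import models, transforms
      from PIL import Image

      # Pretrained backbone with the final fully connected layer removed.
      backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
      backbone.fc = torch.nn.Identity()
      backbone.eval()

      preprocess = transforms.Compose([
          transforms.Resize(256),
          transforms.CenterCrop(224),
          transforms.ToTensor(),
          transforms.Normalize(mean=[0.485, 0.456, 0.406],
                               std=[0.229, 0.224, 0.225]),
      ])

      # "radiograph.png" is a hypothetical input file.
      img = Image.open("radiograph.png").convert("RGB")
      with torch.no_grad():
          features = backbone(preprocess(img).unsqueeze(0))   # shape (1, 512)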

  16. A comparison of skyshine computational methods.

    PubMed

    Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

    2005-01-01

    A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
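
    The point-kernel codes mentioned above build on the same basic dose-rate kernel for an isotropic point source in a uniform attenuating medium: S B(mu*r) exp(-mu*r) / (4 pi r^2), where the buildup factor B accounts for scattered photons. The sketch below evaluates that kernel with purely illustrative numbers; it does not reproduce the silo or trench benchmarks, and a genuine skyshine calculation must additionally treat scattering in the air above the source.

      import numpy as np

      def point_kernel_dose(S, mu, r, buildup):
          """Point-kernel response: S * B(mu*r) * exp(-mu*r) / (4*pi*r**2)."""
          mfp = mu * r                      # path length in mean free paths
          return S * buildup(mfp) * np.exp(-mfp) / (4.0 * np.pi * r**2)

      # Illustrative linear buildup approximation B(mfp) = 1 + a * mfp.
      buildup = lambda mfp: 1.0 + 1.0 * mfp

      r = np.linspace(1.0, 100.0, 5)        # source-to-detector distances (illustrative)
      print(point_kernel_dose(S=1.0, mu=0.01, r=r, buildup=buildup))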

  17. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    PubMed

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk^2 to nk log(k), and has potential application to the all-pairs shortest paths problem.
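
    The core numerical trick can be sketched in a few lines: for nonnegative vectors, the max in each output bin is approximated by a p-norm, and the sum inside the p-norm is an ordinary convolution that the FFT evaluates in O(k log(k)). The single fixed-p version below (with an arbitrary p = 64) is a simplified illustration of that idea, not the full scheme developed in the paper.

      import numpy as np
      from scipy.signal import fftconvolve

      def max_convolve_exact(x, y):
          """Exact max-convolution in O(k^2): z[m] = max over i+j=m of x[i]*y[j]."""
          z = np.zeros(len(x) + len(y) - 1)
          for i, xi in enumerate(x):
              for j, yj in enumerate(y):
                  z[i + j] = max(z[i + j], xi * yj)
          return z

      def max_convolve_pnorm(x, y, p=64):
          """Approximate max-convolution in O(k log k) via a p-norm relaxation."""
          # (sum_{i+j=m} (x[i]*y[j])**p)**(1/p) -> max_{i+j=m} x[i]*y[j] as p grows,
          # and the inner sum is an ordinary convolution, so the FFT applies.
          conv_p = fftconvolve(np.asarray(x, float) ** p, np.asarray(y, float) ** p)
          return np.maximum(conv_p, 0.0) ** (1.0 / p)

      x = np.array([0.1, 0.7, 0.2])
      y = np.array([0.3, 0.4, 0.3])
      print(max_convolve_exact(x, y))
      print(max_convolve_pnorm(x, y))    # close to the exact result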

  18. Performance of Serially Concatenated Convolutional Codes with Binary Modulation in AWGN and Noise Jamming over Rayleigh Fading Channels

    DTIC Science & Technology

    2001-09-01

    In this dissertation, the bit error rates of serially concatenated convolutional codes (SCCC) with BPSK and DPSK modulation are evaluated in additive white Gaussian noise (AWGN) and noise jamming over Rayleigh fading channels.

  19. DSN telemetry system performance with convolutionally coded data using operational maximum-likelihood convolutional decoders

    NASA Technical Reports Server (NTRS)

    Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.

    1976-01-01

    The DSN telemetry system performance with convolutionally coded data using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network is described. Results are reported for data rates from 80 bps to 115.2 kbps and for both S- and X-band receivers. Results for both one-way and two-way radio losses are included.
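
    Maximum-likelihood decoding of a convolutional code is commonly implemented with the Viterbi algorithm. As a self-contained illustration (not the Network MCD itself), the sketch below hard-decision decodes the small rate-1/2, constraint-length-3 code with generators (7, 5) octal and corrects a single injected channel error.

      G = (0b111, 0b101)            # illustrative rate-1/2 generators (7, 5 octal)
      K = 3                         # constraint length
      N_STATES = 1 << (K - 1)

      def branch_output(state, bit):
          """Code symbols emitted when `bit` enters a register holding `state`."""
          reg = (bit << (K - 1)) | state
          return [bin(reg & g).count("1") % 2 for g in G]

      def next_state(state, bit):
          return ((bit << (K - 1)) | state) >> 1

      def encode(bits):
          state, out = 0, []
          for b in list(bits) + [0] * (K - 1):       # zero-terminate the trellis
              out += branch_output(state, b)
              state = next_state(state, b)
          return out

      def viterbi_decode(received):
          """Hard-decision Viterbi decoding (maximum-likelihood on a BSC)."""
          INF = float("inf")
          metric = [0] + [INF] * (N_STATES - 1)
          path = [[] for _ in range(N_STATES)]
          for t in range(0, len(received), 2):
              r = received[t:t + 2]
              new_metric = [INF] * N_STATES
              new_path = [None] * N_STATES
              for s in range(N_STATES):
                  if metric[s] == INF:
                      continue
                  for b in (0, 1):
                      ns = next_state(s, b)
                      bm = sum(a != e for a, e in zip(r, branch_output(s, b)))
                      if metric[s] + bm < new_metric[ns]:
                          new_metric[ns] = metric[s] + bm
                          new_path[ns] = path[s] + [b]
              metric, path = new_metric, new_path
          return path[0][:-(K - 1)]                  # survivor ending in state 0, drop flush bits

      msg = [1, 0, 1, 1, 0, 0, 1]
      coded = encode(msg)
      coded[3] ^= 1                                  # inject one channel bit error
      assert viterbi_decode(coded) == msg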

  20. Kinetic study of Chromium VI adsorption onto palm kernel shell activated carbon

    NASA Astrophysics Data System (ADS)

    Mohammad, Masita; Sadeghi Louyeh, Shiva; Yaakob, Zahira

    2018-04-01

    Heavy metal contamination of industrial effluent is a significant environmental problem because of the toxicity of the metals and their accumulation throughout the food chain. Adsorption is one of the promising methods for removing heavy metals from aqueous solution because it is simple, efficient, reliable, and low-cost, particularly when the adsorbent is a residue from the agricultural industry. In this study, activated carbon was produced from palm kernel shells through a chemical activation process using zinc chloride as the activating agent, followed by carbonization at 800 °C. The palm kernel shell activated carbon (PAC) was assessed for its efficiency in removing Chromium (VI) ions from aqueous solutions in a batch adsorption process. The kinetic mechanisms were analysed using the Lagergren pseudo-first-order model, the pseudo-second-order model, and the intra-particle diffusion model. Characterizations such as BET surface area and surface morphology (SEM-EDX) were performed. The results show that activation with ZnCl2 successfully improved the porosity and modified the functional groups of the palm kernel shell. The maximum adsorption capacity for Cr(VI) was 11.40 mg/g at an initial metal ion concentration of 30 ppm and an adsorbent loading of 0.1 g/50 mL. The adsorption process followed the pseudo-second-order kinetic model.
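
    A brief sketch of how the pseudo-second-order model, q_t = qe^2 k2 t / (1 + qe k2 t), is typically fitted to batch uptake data; the (t, q_t) values below are hypothetical illustrations, not measurements from this study.

      import numpy as np
      from scipy.optimize import curve_fit

      def pseudo_second_order(t, qe, k2):
          # q_t = qe**2 * k2 * t / (1 + qe * k2 * t)
          return (qe**2 * k2 * t) / (1.0 + qe * k2 * t)

      # Hypothetical contact times (min) and uptakes (mg/g), for illustration only.
      t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)
      q = np.array([4.1, 6.5, 8.6, 10.1, 10.7, 11.1, 11.3])

      (qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, q, p0=(11.0, 0.01))
      print(f"qe = {qe_fit:.2f} mg/g, k2 = {k2_fit:.4f} g/(mg*min)")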
