A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system
NASA Astrophysics Data System (ADS)
Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan
2018-01-01
This paper proposes an image segmentation algorithm with fully convolutional networks (FCN) in a binocular imaging system under various circumstances. Image segmentation is cast as a semantic segmentation problem: the FCN classifies individual pixels, achieving pixel-level semantic segmentation. Unlike classical convolutional neural networks (CNN), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data are collected to verify this method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.
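The positioning step in a calibrated binocular system rests on the pinhole stereo relation Z = f * B / d. The sketch below illustrates that relation only; the focal length, baseline, and disparity values are hypothetical, not the paper's calibration data.

```python
# Minimal sketch of depth recovery in a calibrated binocular system.
# Focal length (pixels), baseline (metres), and disparity (pixels)
# below are illustrative assumptions, not the paper's values.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A matched feature pair with 60 px disparity, f = 1200 px, B = 0.075 m:
z = depth_from_disparity(1200.0, 0.075, 60.0)  # -> 1.5 m
```

With per-pixel disparities from matched SURF features, the same relation yields a depth per matched region.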
A PML-FDTD ALGORITHM FOR SIMULATING PLASMA-COVERED CAVITY-BACKED SLOT ANTENNAS. (R825225)
A three-dimensional frequency-dependent finite-difference time-domain (FDTD) algorithm with perfectly matched layer (PML) absorbing boundary condition (ABC) and recursive convolution approaches is developed to model plasma-covered open-ended waveguide or cavity-backed slot antenn...
Gao, Yingjie; Zhang, Jinhai; Yao, Zhenxing
2015-12-01
The complex frequency shifted perfectly matched layer (CFS-PML) can improve the absorbing performance of the PML for nearly grazing incident waves. However, traditional PML and CFS-PML are based on first-order wave equations and thus are not suitable for second-order wave equations. In this paper, an implementation of CFS-PML for the second-order wave equation is presented using auxiliary differential equations. This method is free of both convolution calculations and third-order temporal derivatives. As an unsplit CFS-PML, it can reduce reflections at nearly grazing incidence. Numerical experiments show that it has better absorption than typical PML implementations based on the second-order wave equation.
Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng
2016-01-01
This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method. Specifically, we implemented impulse sources and a convolutional perfectly matched layer (CPML). In the process of strengthening the CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; the conclusion was that this dispersion was positively related to the real stretch and little affected by the grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. Then, a numerical simulation of CPML absorption with high-frequency pulses qualitatively amplified the dispersion laws through wave-field snapshots. A numerical simulation using low-frequency pulses suggested an optimal parameter strategy for the CPML from the established criteria. Based on its physical nature, the CPML method of simply warping space-time was predicted to be a promising approach to achieving ideal absorption, although it was still difficult to entirely remove the dispersion. PMID:27585538
Double absorbing boundaries for finite-difference time-domain electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaGrone, John, E-mail: jlagrone@smu.edu; Hagstrom, Thomas, E-mail: thagstrom@smu.edu
We describe the implementation of optimal local radiation boundary condition sequences for second order finite difference approximations to Maxwell's equations and the scalar wave equation using the double absorbing boundary formulation. Numerical experiments are presented which demonstrate that the design accuracy of the boundary conditions is achieved and, for comparable effort, exceeds that of a convolution perfectly matched layer with reasonably chosen parameters. An advantage of the proposed approach is that parameters can be chosen using an accurate a priori error bound.
NASA Astrophysics Data System (ADS)
Han, Byeongho; Seol, Soon Jee; Byun, Joongmoo
2012-04-01
To simulate wave propagation in a tilted transversely isotropic (TTI) medium with a tilted symmetry axis of anisotropy, we develop a 2D elastic forward modelling algorithm. In this algorithm, we use the staggered-grid finite-difference method, which has fourth-order accuracy in space and second-order accuracy in time. Since velocity-stress formulations are defined on staggered grids, we include auxiliary grid points in the z-direction to meet the free-surface boundary conditions for shear stress. Through comparisons of displacements obtained from our algorithm, not only with analytical solutions but also with finite element solutions, we validate that the free-surface conditions operate appropriately and elastic waves propagate correctly. In order to handle artificial boundary reflections efficiently, we also implement convolutional perfectly matched layer (CPML) absorbing boundaries in our algorithm. The CPML sufficiently attenuates energy at grazing incidence by modifying the damping profile of the PML boundary. Numerical experiments indicate that the algorithm accurately expresses elastic wave propagation in the TTI medium. At the free surface, the numerical results show good agreement with analytical solutions, not only for body waves but also for the Rayleigh wave, which has strong amplitude along the surface. In addition, we demonstrate the efficiency of the CPML for a homogeneous TI medium and a dipping layered model. Using only 10 grid points for the CPML regions, the artificial reflections are successfully suppressed and the energy reflected from the boundary back into the effective modelling area decays significantly.
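The velocity-stress leapfrog on a staggered grid can be sketched in 1D (the paper's scheme is 2D TTI with fourth-order spatial stencils; this toy uses second-order stencils and made-up material constants, so it is illustrative only):

```python
# 1D staggered-grid velocity-stress update: stresses s live on half
# grid points between the velocity points v. Second-order in space
# and time for brevity; rho, mu, dt, dx are illustrative values.

def step(v, s, rho, mu, dt, dx):
    """Advance one leapfrog step; requires len(v) == len(s) + 1."""
    # update stress from the velocity gradient
    for i in range(len(s)):
        s[i] += dt * mu * (v[i + 1] - v[i]) / dx
    # update velocity from the stress gradient (interior points only)
    for i in range(1, len(v) - 1):
        v[i] += dt / rho * (s[i] - s[i - 1]) / dx
    return v, s

v = [1.0] * 5          # constant velocity field
s = [0.0] * 4          # stresses on staggered (half) points
v, s = step(v, s, rho=1.0, mu=1.0, dt=0.1, dx=1.0)
# a constant velocity field has zero gradient, so stresses stay zero
```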
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
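The conditional expectation described above is commonly written, for observed intensity O = S + B with S ~ Exp(alpha) and B ~ N(mu, sigma^2), as E[S | O = o] = a + sigma * (phi(a/sigma) - phi((o-a)/sigma)) / (Phi(a/sigma) + Phi((o-a)/sigma) - 1), where a = o - mu - sigma^2 * alpha. A pure-Python sketch (the parameter values fed in are illustrative, not RMA's estimates):

```python
import math

# Sketch of the exponential-normal background correction E[S | O = o].
# phi / Phi are the standard normal pdf / cdf; parameter values in the
# example call are assumptions for illustration only.

def phi(x):                      # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):                      # standard normal cdf via erf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def background_correct(o, mu, sigma, alpha):
    a = o - mu - sigma * sigma * alpha
    num = phi(a / sigma) - phi((o - a) / sigma)
    den = Phi(a / sigma) + Phi((o - a) / sigma) - 1.0
    return a + sigma * num / den

# PM intensity 300 with assumed mu = 100, sigma = 30, alpha = 0.01:
signal = background_correct(o=300.0, mu=100.0, sigma=30.0, alpha=0.01)
```

The estimated signal lands just below o - mu - sigma^2 * alpha = 191, as expected for an observation well above the noise floor.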
Wright, Gavin; Harrold, Natalie; Bownes, Peter
2018-01-01
Aims To compare the accuracies of the convolution and TMR10 Gamma Knife treatment planning algorithms, and to assess the impact upon clinical practice of implementing convolution-based treatment planning. Methods Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50% and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing novel comparison of true dosimetric parameters rather than total beam-on time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. Results Both algorithms matched point dose measurements within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (-1.1% vs 4.0%), with no discernible differences in relative dose distribution accuracy. In our study, convolution-calculated plans yielded D99% values 6.4% (95% CI: 5.5%-7.3%, p<0.001) lower than shot-matched TMR10 plans. For gamma passing criteria of 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. Conclusions Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; therefore its implementation may require a re-evaluation of prescription doses. PMID:29657896
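The gamma analysis mentioned above combines a dose tolerance and a distance-to-agreement tolerance into one pass/fail index per point. A toy 1D version (the study's analysis is on full dose distributions; the profiles and criteria below are illustrative, not the clinical data):

```python
# Illustrative 1D gamma analysis with 1%/1 mm criteria: for each
# reference point, take the minimum over evaluated points of the
# combined normalised dose-difference / distance metric.

def gamma_index(ref, ev, dx_mm, dose_tol=0.01, dist_tol_mm=1.0):
    """Per-point gamma for two equally sampled dose profiles."""
    gammas = []
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_ev in enumerate(ev):
            dd = (d_ev - d_ref) / (dose_tol * max(ref))   # global norm
            dr = (j - i) * dx_mm / dist_tol_mm
            best = min(best, (dd * dd + dr * dr) ** 0.5)
        gammas.append(best)
    return gammas

ref = [1.0, 0.9, 0.5, 0.2]
ev = [1.0, 0.9, 0.5, 0.2]
passing = sum(g <= 1.0 for g in gamma_index(ref, ev, dx_mm=1.0)) / len(ref)
```

Identical profiles give gamma = 0 everywhere, i.e. a 100% passing rate; shifting or scaling one profile drives individual gammas above 1.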
Performance advantages of CPML over UPML absorbing boundary conditions in FDTD algorithm
NASA Astrophysics Data System (ADS)
Gvozdic, Branko D.; Djurdjevic, Dusan Z.
2017-01-01
Implementation of the absorbing boundary condition (ABC) plays a very important role in simulation performance and accuracy in the finite difference time domain (FDTD) method. The perfectly matched layer (PML) is the most efficient type of ABC. The aim of this paper is to give detailed insight into, and discussion of, boundary conditions, and hence to simplify the choice of PML used for termination of the computational domain in the FDTD method. In particular, we demonstrate that the convolutional PML (CPML) has significant advantages over the uniaxial PML (UPML) in terms of ease of implementation in the FDTD method and reduced computational resources. An extensive number of numerical experiments have been performed, and the results show that CPML is more efficient in absorbing electromagnetic waves. Numerical code is prepared, several problems are analyzed, and the relative error is calculated and presented.
Li, Yuankun; Xu, Tingfa; Deng, Honggao; Shi, Guokai; Guo, Jie
2018-02-23
Although correlation filter (CF)-based visual tracking algorithms have achieved appealing results, there are still some problems to be solved. When the target object goes through long-term occlusions or scale variation, the correlation model used in existing CF-based algorithms will inevitably learn some non-target information or partial-target information. In order to avoid model contamination and enhance the adaptability of model updating, we introduce the keypoints matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN) to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker has achieved satisfactory performance in a wide range of challenging tracking scenarios.
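The adaptive update idea above, scaling the model learning rate by the keypoint-matching score so that an occluded target barely contaminates the model, can be sketched as follows (a toy linear blend; the threshold-free clamping and rate values are assumptions, not the authors' implementation):

```python
# Sketch of match-score-driven model updating for a CF tracker: the
# learning rate shrinks toward zero when keypoint matching fails
# (e.g. under occlusion), freezing the correlation model.

def update_model(model, new_obs, match_score, base_lr=0.02):
    """Blend old model and new observation; lr scales with score."""
    lr = base_lr * max(0.0, min(1.0, match_score))  # clamp to [0, 1]
    return [(1.0 - lr) * m + lr * x for m, x in zip(model, new_obs)]

model = [0.5, 0.5]
# good match -> normal update; occlusion (score 0) -> model frozen
updated = update_model(model, [1.0, 0.0], match_score=1.0)
frozen = update_model(model, [1.0, 0.0], match_score=0.0)
```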
SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moriya, S; Sato, M; Tachibana, H
Purpose: The calculation time is a trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on Graphics Processing Units (GPU). Methods: The calculation was performed on AMD Dual FirePro D700 graphics hardware, and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation was separated into TERMA and KERMA steps, and the dose deposited at each coordinate (x, y, z) was determined in this process. In the dose calculation running on the central processing unit (CPU), an Intel Xeon E5, the calculation loops were performed over all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and multi-thread computation was done. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150^3 voxels (2 mm calculation grid), and the calculation speed on the GPU relative to that on the CPU and the accuracy of the PDD were compared. Results: The calculation times for the GPU and the CPU were 3.3 s and 4.4 h, respectively; the GPU was 4800 times faster than the CPU. The PDD curve for the GPU perfectly matched that for the CPU. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in time and may be more accurate in inhomogeneous regions. Intensity modulated arc therapy needs dose calculations for different gantry angles at many control points. Thus, it would be more practical for the kernel to use a coarse-spacing technique if the calculation is faster while keeping accuracy similar to a current treatment planning system.
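The core TERMA-kernel convolution can be shown in a toy 1D form (the study's engine is 3D and GPU-parallel; the TERMA profile and kernel below are made up for illustration):

```python
# Toy 1D convolution/superposition step: dose is the TERMA profile
# convolved with a centred deposition kernel, zero outside the grid.

def convolve_dose(terma, kernel):
    """dose[i] = sum_j kernel[j] * terma[i + half - j]."""
    half = len(kernel) // 2
    dose = [0.0] * len(terma)
    for i in range(len(terma)):
        for j, k in enumerate(kernel):
            src = i + half - j
            if 0 <= src < len(terma):
                dose[i] += terma[src] * k
    return dose

# A single TERMA impulse spreads according to the kernel shape:
dose = convolve_dose([0.0, 1.0, 0.0], [0.25, 0.5, 0.25])
```

The GPU version parallelises the outer loop over calculation points, which is exactly the part that scales with the number of voxels.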
NASA Astrophysics Data System (ADS)
Xia, Y.; Tian, J.; d'Angelo, P.; Reinartz, P.
2018-05-01
3D reconstruction of plants is hard to implement, as the complex leaf distribution greatly increases the difficulty of dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene, but may perform variably when different matching cost algorithms are used. In this paper, two matching cost computation algorithms, the Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High-resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated with the two selected matching cost methods are comparable and of acceptable quality, which shows the good performance of the Census transform and the potential of neural networks to improve dense matching.
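The Census transform matching cost mentioned above encodes each pixel as a bit string of neighbour-versus-centre comparisons, and the cost between two candidate pixels is the Hamming distance of their strings. A minimal sketch with a 3x3 window and toy images:

```python
# Census transform matching cost: each pixel becomes a bit string of
# neighbourhood comparisons; the matching cost is the Hamming
# distance between bit strings. Images here are tiny toy arrays.

def census(img, x, y):
    """Bit string comparing the 3x3 neighbourhood to the centre."""
    bits = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            bits = (bits << 1) | (img[y + dy][x + dx] < img[y][x])
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

left = [[1, 2, 3], [4, 9, 6], [7, 8, 5]]
right = [[1, 2, 3], [4, 9, 6], [7, 8, 5]]
cost = hamming(census(left, 1, 1), census(right, 1, 1))  # identical -> 0
```

Because only the ordering of intensities matters, the cost is robust to radiometric differences between the two views, which is one reason Census pairs well with Semi-Global Matching.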
The CFS-PML in numerical simulation of ATEM
NASA Astrophysics Data System (ADS)
Zhao, Xuejiao; Ji, Yanju; Qiu, Shuo; Guan, Shanshan; Wu, Yanqi
2017-01-01
In the simulation of the airborne transient electromagnetic method (ATEM) in the time domain, truncated-boundary reflections can introduce a large error into the results. The complex frequency shifted perfectly matched layer (CFS-PML) absorbing boundary condition has been shown to absorb low-frequency incident waves better and to greatly reduce late-time reflections. In this paper, we apply the CFS-PML to three-dimensional time-domain numerical simulation of ATEM to achieve high precision. The expression of the divergence equation in the CFS-PML is confirmed, and its explicit iteration format, based on the finite difference method and the recursive convolution technique, is deduced. Finally, we use a uniform half-space model and an anomalous model to test the validity of this method. Results show that the CFS-PML can reduce the average relative error to 2.87% and increase the accuracy of anomaly recognition.
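The recursive convolution technique replaces the explicit convolution in the CFS-PML with a memory variable advanced by two precomputed coefficients each time step. A sketch of the widely used coefficient form (the parameter values are illustrative assumptions, not this paper's ATEM settings):

```python
import math

# Recursive-convolution CPML update: the memory variable psi is
# advanced as psi <- b * psi + a * (spatial field derivative), with
# a and b precomputed from the stretch parameters sigma, kappa, alpha.

def cpml_coefficients(sigma, kappa, alpha, dt, eps0=8.854e-12):
    b = math.exp(-((sigma / kappa) + alpha) * dt / eps0)
    a = sigma * (b - 1.0) / (kappa * (sigma + kappa * alpha))
    return a, b

def advance_psi(psi, dfield, a, b):
    """One time step of the recursive convolution."""
    return b * psi + a * dfield

a, b = cpml_coefficients(sigma=1e-3, kappa=1.0, alpha=1e-4, dt=1e-9)
psi = advance_psi(psi=1.0, dfield=0.0, a=a, b=b)  # decays toward zero
```

Because b < 1 whenever sigma or alpha is positive, the memory variable decays geometrically, which is what makes the scheme stable and convolution-free.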
Gasmelseed, Akram; Yunus, Jasmy
2014-01-01
The interaction of a dipole antenna with a human eye model in the presence of a metamaterial is investigated in this paper. The finite difference time domain (FDTD) method with a convolutional perfectly matched layer (CPML) formulation has been used. A three-dimensional anatomical model of the human eye with a resolution of 1.25 mm × 1.25 mm × 1.25 mm was used in this study. The dipole antenna was driven by a modulated Gaussian pulse, and the numerical study was performed with the dipole operating at 900 MHz. The analysis was done by varying the size and the electric permittivity of the metamaterial. By normalizing the peak SAR (1 g and 10 g) to 1 W for all examined cases, we observed that the SAR values are not affected by the different permittivity values when the size of the metamaterial is kept fixed.
AN FDTD ALGORITHM WITH PERFECTLY MATCHED LAYERS FOR CONDUCTIVE MEDIA. (R825225)
We extend Berenger's perfectly matched layers (PML) to conductive media. A finite-difference-time-domain (FDTD) algorithm with PML as an absorbing boundary condition is developed for solutions of Maxwell's equations in inhomogeneous, conductive media. For a perfectly matched laye...
NASA Astrophysics Data System (ADS)
Ping, Ping; Zhang, Yu; Xu, Yixian; Chu, Risheng
2016-12-01
In order to improve perfectly matched layer (PML) efficiency in viscoelastic media, we first propose a split multi-axial PML (M-PML) and an unsplit convolutional PML (C-PML) for the second-order viscoelastic wave equations with displacement as the only unknown. The advantage of these formulations is that it is easy and efficient to revise existing codes of the second-order spectral element method (SEM) or finite-element method (FEM) with absorbing boundaries in a uniform equation, and they are more economical than the auxiliary differential equation PML. Three models that easily suffer from late-time instabilities are considered to validate our approaches. By comparing the absorption efficiency and long-time stability of the M-PML and the C-PML, it can be concluded that: (1) for an isotropic viscoelastic medium with a high Poisson's ratio, the C-PML is a sufficient choice for long-time simulation because of its weak reflections and superior stability; (2) unlike the M-PML with a high-order damping profile, the M-PML with a second-order damping profile loses its stability in long-time simulation for an isotropic viscoelastic medium; (3) in an anisotropic viscoelastic medium, the C-PML suffers from instabilities, while the M-PML with a second-order damping profile is a better choice for its superior stability and its weaker, more acceptable reflections compared with the M-PML with a high-order damping profile. The comparative analysis of the developed methods offers meaningful guidance for long-time seismic wave modelling with second-order viscoelastic wave equations.
Nonreflective Conditions for Perfectly Matched Layer in Computational Aeroacoustics
NASA Astrophysics Data System (ADS)
Choung, Hanahchim; Jang, Seokjong; Lee, Soogab
2018-05-01
In computational aeroacoustics, boundary conditions such as radiation, outflow, or absorbing boundary conditions are critical issues in that they can affect the entire solution of the computation. Among these types of boundary conditions, the perfectly matched layer boundary condition, which has been widely used in computational fluid dynamics and computational aeroacoustics, is constructed by augmenting the original governing equations with an additional absorption term so as to stably absorb outgoing waves. Even though the perfectly matched layer is analytically a perfectly nonreflective boundary condition, spurious waves occur at the interface, since the analysis is performed in discretized space. Hence, this study focuses on the factors that affect numerical errors from the perfectly matched layer, to find the optimum conditions for a nonreflective PML. Through a mathematical approach, a minimum width of the perfectly matched layer and an optimum absorption coefficient are suggested. To validate the prediction of the analysis, numerical simulations are performed in a generalized coordinate system as well as in a Cartesian coordinate system.
A distributed-memory approximation algorithm for maximum weight perfect bipartite matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azad, Ariful; Buluc, Aydin; Li, Xiaoye S.
We design and implement an efficient parallel approximation algorithm for the problem of maximum weight perfect matching in bipartite graphs, i.e. the problem of finding a set of non-adjacent edges that covers all vertices and has maximum weight. This problem differs from the maximum weight matching problem, for which scalable approximation algorithms are known. It is primarily motivated by finding good pivots in scalable sparse direct solvers before factorization, where sequential implementations of maximum weight perfect matching algorithms, such as those available in MC64, are widely used due to the lack of scalable alternatives. To overcome this limitation, we propose a fully parallel distributed-memory algorithm that first generates a perfect matching, then searches in parallel for weight-augmenting cycles of length four, and iteratively augments the matching with a vertex-disjoint set of such cycles. For most practical problems, the weights of the perfect matchings generated by our algorithm are very close to the optimum. An efficient implementation of the algorithm scales up to 256 nodes (17,408 cores) on a Cray XC40 supercomputer and can solve instances that are too large to be handled by a single node using the sequential algorithm.
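The weight-augmenting 4-cycle idea can be shown in a serial toy form: for a perfect matching where row i is matched to column match[i], swapping the partners of rows i and j is profitable exactly when the two new edges outweigh the two old ones. This sketch is sequential and dense (the paper's algorithm is distributed and works on sparse graphs); the weight matrix is made up.

```python
# Toy serial version of 4-cycle augmentation for a bipartite perfect
# matching: repeatedly apply profitable partner swaps until no
# length-4 alternating cycle increases the total weight.

def augment_with_4cycles(w, match):
    improved = True
    while improved:
        improved = False
        n = len(match)
        for i in range(n):
            for j in range(i + 1, n):
                old = w[i][match[i]] + w[j][match[j]]
                new = w[i][match[j]] + w[j][match[i]]
                if new > old:
                    match[i], match[j] = match[j], match[i]
                    improved = True
    return match

w = [[1.0, 5.0], [5.0, 1.0]]
match = augment_with_4cycles(w, [0, 1])  # swap is profitable -> [1, 0]
```

The parallel algorithm restricts each sweep to a vertex-disjoint set of such cycles so all swaps in a sweep commute.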
Rethinking Skin Lesion Segmentation in a Convolutional Classifier.
Burdick, Jack; Marques, Oge; Weinthal, Janet; Furht, Borko
2017-10-18
Melanoma is a fatal form of skin cancer when left undiagnosed. Computer-aided diagnosis systems powered by convolutional neural networks (CNNs) can improve diagnostic accuracy and save lives. CNNs have been successfully used in both skin lesion segmentation and classification. For reasons heretofore unclear, previous works have found image segmentation to be, conflictingly, both detrimental and beneficial to skin lesion classification. We investigate the effect of expanding the segmentation border to include pixels surrounding the target lesion. Ostensibly, segmenting a target skin lesion will remove inessential information, non-lesion skin, and artifacts to aid in classification. Our results indicate that segmentation border enlargement produces, to a certain degree, better results across all metrics of interest when using a convolution-based classifier built with the transfer learning paradigm. Consequently, preprocessing methods that produce borders larger than the actual lesion can potentially improve classifier performance more than both perfect segmentation, using dermatologist-created ground truth masks, and no segmentation altogether.
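Border enlargement can be realised as binary dilation of the lesion mask: each pass grows the mask by one pixel in every 4-neighbour direction, and repeated passes approximate progressively enlarged borders. A minimal sketch (pure Python on a toy mask; not the authors' preprocessing code):

```python
# One pass of 4-neighbour binary dilation on a 0/1 lesion mask,
# expanding the segmentation border by one pixel.

def dilate(mask):
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        out[ny][nx] = 1
    return out

mask = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
grown = dilate(mask)  # plus-shaped mask around the centre pixel
```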
A PERFECT MATCH CONDITION FOR POINT-SET MATCHING PROBLEMS USING THE OPTIMAL MASS TRANSPORT APPROACH
CHEN, PENGWEN; LIN, CHING-LONG; CHERN, I-LIANG
2013-01-01
We study the performance of optimal mass transport-based methods applied to point-set matching problems. The present study, which is based on the L2 mass transport cost, states that perfect matches always occur when the product of the point-set cardinality and the norm of the curl of the non-rigid deformation field does not exceed some constant. This analytic result is justified by a numerical study of matching two sets of pulmonary vascular tree branch points whose displacement is caused by the lung volume changes in the same human subject. The nearly perfect match performance verifies the effectiveness of this mass transport-based approach. PMID:23687536
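For very small point sets, the L2 optimal assignment underlying such matching problems can be solved by brute force over all bijections, which makes the objective concrete (real optimal transport solvers scale far better; the point coordinates below are made up):

```python
import itertools

# Brute-force L2 optimal assignment for tiny 2D point sets: among all
# bijections src -> dst, pick the one minimising the summed squared
# distances. Exponential in n, so illustrative only.

def best_match(src, dst):
    n = len(src)
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(n)):
        cost = sum((src[i][0] - dst[p][0]) ** 2 +
                   (src[i][1] - dst[p][1]) ** 2
                   for i, p in enumerate(perm))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

src = [(0.0, 0.0), (1.0, 0.0)]
dst = [(1.1, 0.0), (0.1, 0.0)]
perm, cost = best_match(src, dst)  # maps src[0] -> dst[1], src[1] -> dst[0]
```

The perfect-match condition in the paper asks when this optimal assignment recovers the true correspondence despite a non-rigid deformation of the points.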
Ellison, M D; Breen, T J; Davies, D B; Edwards, E B; Mahoney, R J; Daily, O P; Norman, D J
1996-11-01
The transplant community attempts to maximize overall renal graft survival rates through nationwide sharing of perfectly-matched cadaveric kidneys. Although the number of such transplants is determined annually, the number available but not transplanted has never been assessed. There has also been no verification of the widespread claim that kidneys transplanted as paybacks for perfect matches are inferior. From records of the United Network for Organ Sharing, a complete accounting of six-antigen-matched kidney disposition was obtained, including a frequency distribution of reasons for refusal given when kidneys were refused for matched patients. Actuarial graft survival (GS) rates for matched, payback, and other cadaveric renal transplants were determined. Of the six-antigen-matched kidneys available, 97 percent were transplanted; 71 percent of those were accepted for matched patients. The two-year GS rate for matched patients was 84 percent, significantly higher than that for kidneys available for matched patients but transplanted into other patients (71.3 percent) and that for all other cadaveric kidneys (75.5 percent). Most reasons for refusal were related to donor quality. Kidneys refused for such reasons showed a 67.7 percent two-year GS rate in nonmatched patients and the highest rates of acute and chronic rejection and primary failure. The two-year GS rate for kidneys accepted as paybacks for matched kidneys (75.7 percent) was equivalent to that for all non-matched cadaveric kidneys (75.5 percent). If all normal-quality grafts refused for perfectly matched patients during 1990 through 1992 had been accepted for those patients, the number of transplants with typically superior survival rates could have increased by 25 percent, from 1,365 to 1,704. The payback requirement of the United Network for Organ Sharing does not seem to reduce the overall benefits of sharing perfectly matched kidneys nationwide.
Identifying Corresponding Patches in SAR and Optical Images With a Pseudo-Siamese CNN
NASA Astrophysics Data System (ADS)
Hughes, Lloyd H.; Schmitt, Michael; Mou, Lichao; Wang, Yuanyuan; Zhu, Xiao Xiang
2018-05-01
In this letter, we propose a pseudo-siamese convolutional neural network (CNN) architecture that makes it possible to solve the task of identifying corresponding patches in very-high-resolution (VHR) optical and synthetic aperture radar (SAR) remote sensing imagery. Using eight convolutional layers in each of two parallel network streams, a fully connected layer for the fusion of the features learned in each stream, and a loss function based on binary cross-entropy, we achieve a one-hot indication of whether two patches correspond or not. The network is trained and tested on an automatically generated dataset that is based on a deterministic alignment of SAR and optical imagery via previously reconstructed and subsequently co-registered 3D point clouds. The satellite images from which the patches comprising our dataset are extracted show a complex urban scene containing many elevated objects (i.e. buildings), thus providing one of the most difficult experimental environments. The achieved results show that the network is able to predict corresponding patches with high accuracy, thus indicating great potential for further development towards a generalized multi-sensor key-point matching procedure. Index Terms: synthetic aperture radar (SAR), optical imagery, data fusion, deep learning, convolutional neural networks (CNN), image matching, deep matching
NASA Technical Reports Server (NTRS)
Clark, R. T.; Mccallister, R. D.
1982-01-01
The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
Seismic signal auto-detection from different features using a Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Huang, Y.; Zhou, Y.; Yue, H.; Zhou, S.
2017-12-01
We try a Convolutional Neural Network to detect several features of seismic data and compare their efficiency. The features include whether a signal is a seismic signal or noise, and the arrival times of the P and S phases; each feature corresponds to its own Convolutional Neural Network. We first use the traditional STA/LTA method to recognize some events and then use template matching to find more events as a training set for the neural network. To make the training set more varied, we add noise to the seismic data and generate synthetic seismic data and noise. The 3-component raw signal and a time-frequency analysis are used as the input data for our neural network. Training is performed on GPUs to achieve efficient convergence. Our method improved the precision in comparison with STA/LTA and template matching. We will move to recurrent neural networks to see if this kind of network is better at detecting P and S phases.
Cognitive Learning Styles: Can You Engineer a "Perfect" Match?
ERIC Educational Resources Information Center
Khuzzan, Sharifah Mazlina Syed; Goulding, Jack Steven
2016-01-01
Education and training is widely acknowledged as being one of the key factors for leveraging organisational success. However, it is equally acknowledged that skills development and the acquisition of learning through managed cognitive approaches has yet to provide a "perfect" match. Whilst it is argued that an ideal learning scenario…
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2009-01-01
In this paper we show by means of numerical experiments that the error introduced in a numerical domain because of a Perfectly Matched Layer or Damping Layer boundary treatment can be controlled. These experimental demonstrations are for acoustic propagation with the Linearized Euler Equations with both uniform and steady jet flows. The propagating signal is driven by a time harmonic pressure source. Combinations of Perfectly Matched and Damping Layers are used with different damping profiles. These layer and profile combinations allow the relative error introduced by a layer to be kept as small as desired, in principle. Tradeoffs between error and cost are explored.
The spatial resolution of a rotating gamma camera tomographic facility.
Webb, S; Flower, M A; Ott, R J; Leach, M O; Inamdar, R
1983-12-01
An important feature determining the spatial resolution in transverse sections reconstructed by convolution and back-projection is the frequency filter corresponding to the convolution kernel. Equations have been derived giving the theoretical spatial resolution, for a perfect detector and noise-free data, using four filter functions. Experiments have shown that physical constraints will always limit the resolution that can be achieved with a given system. The experiments indicate that the region of the frequency spectrum between K_N/2 and K_N, where K_N is the Nyquist frequency, does not contribute significantly to resolution. In order to investigate the physical effect of these filter functions, the spatial resolution of reconstructed images obtained with a GE 400T rotating gamma camera has been measured. The results obtained serve as an aid to choosing appropriate reconstruction filters for use with a rotating gamma camera system.
1981-09-30
to perform a variety of local arithmetic operations. Our initial task will be to use it for computing 5×5 convolutions common to many low-level...report presents the results of applying our relaxation-based scene matching system [1] to a new domain - automatic matching of pairs of images. The task...objects (corners of buildings) within the large image. But we did demonstrate the ability of our system to automatically segment, describe, and match
CONEDEP: COnvolutional Neural network based Earthquake DEtection and Phase Picking
NASA Astrophysics Data System (ADS)
Zhou, Y.; Huang, Y.; Yue, H.; Zhou, S.; An, S.; Yun, N.
2017-12-01
We developed an automatic local earthquake detection and phase picking algorithm based on a Fully Convolutional Neural network (FCN). The FCN algorithm detects and segments certain features (phases) in 3-component seismograms to realize efficient picking. We use the STA/LTA and template matching algorithms to construct the training set from seismograms recorded 1 month before and after the Wenchuan earthquake. Precise P and S phases are identified and labeled to construct the training set. Noise data are produced by combining background noise and artificial synthetic noise to form a noise set equal in scale to the signal set. Training is performed on GPUs to achieve efficient convergence. Our algorithm significantly improves on the STA/LTA and template matching algorithms in terms of detection rate and precision.
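The STA/LTA trigger used above to build the training set can be sketched as a ratio of short- and long-window energy averages; the window lengths and synthetic trace here are illustrative:

```python
import numpy as np

def sta_lta(trace, nsta, nlta):
    """Classic STA/LTA ratio on a 1D seismogram (one common formulation).
    Short-term average of signal energy divided by long-term average,
    with the STA window sitting at the trailing end of the LTA window."""
    energy = trace.astype(float) ** 2
    csum = np.cumsum(np.concatenate(([0.0], energy)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta
    lta = (csum[nlta:] - csum[:-nlta]) / nlta
    n = min(len(sta), len(lta))
    return sta[-n:] / (lta[-n:] + 1e-12)

# Synthetic test: weak noise with a burst of larger amplitude ("event")
rng = np.random.default_rng(0)
trace = rng.normal(0, 0.1, 2000)
trace[1500:1550] += rng.normal(0, 2.0, 50)
ratio = sta_lta(trace, nsta=20, nlta=500)
```

The ratio stays near 1 on stationary noise and spikes when the short window catches the onset of the event, which is what makes it usable for labeling training picks.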
UNIPIC code for simulations of high power microwave devices
NASA Astrophysics Data System (ADS)
Wang, Jianguo; Zhang, Dianhui; Liu, Chunliang; Li, Yongdong; Wang, Yue; Wang, Hongguang; Qiao, Hailiang; Li, Xiaoze
2009-03-01
In this paper, UNIPIC code, a new member in the family of fully electromagnetic particle-in-cell (PIC) codes for simulations of high power microwave (HPM) generation, is introduced. In the UNIPIC code, the electromagnetic fields are updated using the second-order finite-difference time-domain (FDTD) method, and the particles are moved using the relativistic Newton-Lorentz force equation. The convolutional perfectly matched layer method is used to truncate the open boundaries of HPM devices. To model curved surfaces and avoid the time step reduction of the conformal-path FDTD method, the CP weakly conditionally stable FDTD (CP WCS-FDTD) method, which combines the WCS-FDTD and CP-FDTD methods, is implemented. UNIPIC is two-and-a-half dimensional, is written in the object-oriented C++ language, and can be run on a variety of platforms including WINDOWS, LINUX, and UNIX. Users can use the graphical user interface to create the geometric structures of the simulated HPM devices, or import previously created structures. Numerical experiments on some typical HPM devices using the UNIPIC code are given. The results agree well with those obtained from well-known PIC codes.
Najafi-Yazdi, A.; Mongeau, L.
2012-01-01
The Lattice Boltzmann Method (LBM) is a well established computational tool for fluid flow simulations. This method has recently been utilized for low Mach number computational aeroacoustics. Robust and nonreflective boundary conditions, similar to those used in Navier-Stokes solvers, are needed for LBM-based aeroacoustics simulations. The goal of the present study was to develop an absorbing boundary condition based on the perfectly matched layer (PML) concept for LBM. Derivations of formulations for both two- and three-dimensional problems are presented. The macroscopic behavior of the new formulation is discussed. The new formulation was tested using benchmark acoustic problems. The perfectly matched layer concept appears to be very well suited for LBM, and yielded very low acoustic reflection factors.
Convolutional auto-encoder for image denoising of ultra-low-dose CT.
Nishio, Mizuho; Nagashima, Chihiro; Hirabayashi, Saori; Ohnishi, Akinori; Sasaki, Kaori; Sagawa, Tomoyuki; Hamada, Masayuki; Yamashita, Tatsuo
2017-08-01
The purpose of this study was to validate a patch-based image denoising method for ultra-low-dose CT images. A neural network with a convolutional auto-encoder and pairs of standard-dose CT and ultra-low-dose CT image patches were used for image denoising. The performance of the proposed method was measured by using a chest phantom. Standard-dose and ultra-low-dose CT images of the chest phantom were acquired. The tube currents for standard-dose and ultra-low-dose CT were 300 and 10 mA, respectively. Ultra-low-dose CT images were denoised with our proposed neural network method, large-scale nonlocal means, and block-matching and 3D filtering. Five radiologists and three technologists assessed the denoised ultra-low-dose CT images visually and recorded their subjective impressions of streak artifacts, noise other than streak artifacts, visualization of pulmonary vessels, and overall image quality. For streak artifacts, noise other than streak artifacts, and visualization of pulmonary vessels, the results of our proposed method were statistically better than those of block-matching and 3D filtering (p-values < 0.05). On the other hand, the difference in overall image quality between our proposed method and block-matching and 3D filtering was not statistically significant (p-value = 0.07272). The p-values obtained between our proposed method and large-scale nonlocal means were all less than 0.05. A neural network with a convolutional auto-encoder could be trained using pairs of standard-dose and ultra-low-dose CT image patches. According to the visual assessment by radiologists and technologists, the performance of our proposed method was superior to that of large-scale nonlocal means and block-matching and 3D filtering.
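The patch-pair construction described above can be sketched as follows; the patch size and stride are hypothetical, as the abstract does not state the study's settings:

```python
import numpy as np

def extract_patch_pairs(standard, ultra_low, size=32, stride=16):
    """Cut aligned training patches from a standard-dose / ultra-low-dose
    image pair. Each pair is (network input, training target); size and
    stride are illustrative choices, not the study's actual parameters."""
    pairs = []
    h, w = standard.shape
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            pairs.append((ultra_low[i:i+size, j:j+size],   # noisy input
                          standard[i:i+size, j:j+size]))   # clean target
    return pairs

std = np.zeros((128, 128))
low = np.ones((128, 128))
pairs = extract_patch_pairs(std, low)
```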
NASA Astrophysics Data System (ADS)
Lan, Bo; Lowe, Michael J. S.; Dunne, Fionn P. E.
2015-10-01
A new spherical convolution approach has been presented which couples the HCP single crystal wave speed (the kernel function) with the polycrystal c-axis pole distribution function to give the resultant polycrystal wave speed response. The three functions have been expressed as spherical harmonic expansions, enabling the de-convolution technique to determine any one of the three from knowledge of the other two. Hence, the forward problem of determining polycrystal wave speed from knowledge of the single crystal wave speed response and the polycrystal pole distribution has been solved for a broad range of experimentally representative HCP polycrystal textures. The technique provides near-perfect representation of the sensitivity of wave speed to polycrystal texture as well as quantitative prediction of polycrystal wave speed. More importantly, a solution to the inverse problem is presented in which texture, as a c-axis distribution function, is determined from knowledge of the kernel function and the polycrystal wave speed response. It has also been explained why it has been widely reported in the literature that only texture coefficients up to 4th degree may be obtained from ultrasonic measurements. Finally, the de-convolution approach presented provides the potential for the measurement of polycrystal texture from ultrasonic wave speed measurements.
Enhancement of digital radiography image quality using a convolutional neural network.
Sun, Yuewen; Li, Litao; Cong, Peng; Wang, Zhentao; Guo, Xiaojing
2017-01-01
Digital radiography systems are widely used for noninvasive security checks and medical imaging examinations. However, such systems are limited by lower image quality in terms of spatial resolution and signal to noise ratio. In this study, we explored whether the image quality acquired by a digital radiography system can be improved with a modified convolutional neural network that generates high-resolution images with reduced noise from the original low-quality images. Experiments on a test dataset of five X-ray images showed that the proposed method outperformed the traditional methods (i.e., bicubic interpolation and the 3D block-matching approach) by about 1.3 dB in peak signal to noise ratio (PSNR) while keeping the processing time within one second. Experimental results demonstrated that a residual to residual (RTR) convolutional neural network remarkably improved the image quality of object structural details by increasing the image resolution and reducing image noise. Thus, this study indicated that applying this RTR convolutional neural network is useful for improving the image quality acquired by digital radiography systems.
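The reported gain can be put in context with the standard PSNR definition; a ~1.3 dB improvement corresponds to roughly a 26% reduction in mean squared error:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images.
    peak is the maximum possible pixel value (255 for 8-bit images)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 16.0)
# MSE = 256, so psnr(a, b) = 10*log10(255**2 / 256)
```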
Simulating Seismic Wave Propagation in Viscoelastic Media with an Irregular Free Surface
NASA Astrophysics Data System (ADS)
Liu, Xiaobo; Chen, Jingyi; Zhao, Zhencong; Lan, Haiqiang; Liu, Fuping
2018-05-01
In seismic numerical simulations of wave propagation, it is important to consider surface topography and attenuation, both of which have large effects (e.g., wave diffraction, conversion, amplitude/phase changes) on seismic imaging and inversion. An irregular free surface provides significant information for interpreting the characteristics of seismic wave propagation in areas with rugged or rapidly varying topography, and viscoelastic media are a better representation of the earth's properties than acoustic/elastic media. In this study, we develop an approach for seismic wavefield simulation in 2D viscoelastic isotropic media with an irregular free surface. Based on the boundary-conforming grid method, the 2D time-domain second-order viscoelastic isotropic equations and irregular free surface boundary conditions are transformed from a Cartesian coordinate system to a curvilinear coordinate system. Finite difference operators with second-order accuracy are applied to discretize the viscoelastic wave equations and the irregular free surface in the curvilinear coordinate system. In addition, we select the convolutional perfectly matched layer boundary condition in order to effectively suppress artificial reflections from the edges of the model. The snapshot and seismogram results from numerical tests show that our algorithm successfully simulates seismic wavefields (e.g., P-wave, Rayleigh wave and converted waves) in viscoelastic isotropic media with an irregular free surface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi Tao; Li Ying; Song Zhi
We show that a perfect quantum-state transmission can be realized through a spin chain possessing the commensurate structure of an energy spectrum, which is matched with the corresponding parity. As an exposition of the mirror inversion symmetry discovered by Albanese et al. (e-print quant-ph/0405029), the parity matched commensurability of the energy spectra helps us to present preengineered spin systems for quantum information transmission. Based on these theoretical analyses, we propose a protocol of near-perfect quantum-state transfer by using a ferromagnetic Heisenberg chain with uniform coupling constant, but an external parabolic magnetic field. The numerical results show that the initial Gaussian wave packet in this system with optimal field distribution can be reshaped near perfectly over a longer distance.
Clinical Assistant Diagnosis for Electronic Medical Record Based on Convolutional Neural Network.
Yang, Zhongliang; Huang, Yongfeng; Jiang, Yiran; Sun, Yuxi; Zhang, Yu-Jin; Luo, Pengcheng
2018-04-20
Automatically extracting useful information from electronic medical records along with conducting disease diagnoses is a promising task for both clinical decision support (CDS) and natural language processing (NLP). Most existing systems are based on artificially constructed knowledge bases, with auxiliary diagnosis done by rule matching. In this study, we present a clinical intelligent decision approach based on Convolutional Neural Networks (CNN), which can automatically extract high-level semantic information from electronic medical records and then perform automatic diagnosis without artificial construction of rules or knowledge bases. We use 18,590 real-world clinical electronic medical records to train and test the proposed model. Experimental results show that the proposed model can achieve 98.67% accuracy and 96.02% recall, which strongly supports the feasibility and effectiveness of using a convolutional neural network to automatically learn high-level semantic features of electronic medical records and then conduct assisted diagnosis.
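The reported accuracy and recall follow the standard definitions, sketched here for binary labels (the paper's diagnosis task is richer than this toy setup):

```python
def accuracy_and_recall(y_true, y_pred):
    """Accuracy and recall for binary labels: accuracy is the fraction of
    correct predictions; recall is the fraction of true positives recovered."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, recall

# One missed positive out of three: accuracy 4/5, recall 2/3
acc, rec = accuracy_and_recall([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```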
Metric learning with spectral graph convolutions on brain connectivity networks.
Ktena, Sofia Ira; Parisot, Sarah; Ferrante, Enzo; Rajchl, Martin; Lee, Matthew; Glocker, Ben; Rueckert, Daniel
2018-04-01
Graph representations are often used to model structured data at an individual or population level and have numerous applications in pattern recognition problems. In the field of neuroscience, where such representations are commonly used to model structural or functional connectivity between a set of brain regions, graphs have proven to be of great importance. This is mainly due to the capability of revealing patterns related to brain development and disease, which were previously unknown. Evaluating similarity between these brain connectivity networks in a manner that accounts for the graph structure and is tailored for a particular application is, however, non-trivial. Most existing methods fail to accommodate the graph structure, discarding information that could be beneficial for further classification or regression analyses based on these similarities. We propose to learn a graph similarity metric using a siamese graph convolutional neural network (s-GCN) in a supervised setting. The proposed framework takes into consideration the graph structure for the evaluation of similarity between a pair of graphs, by employing spectral graph convolutions that allow the generalisation of traditional convolutions to irregular graphs and operates in the graph spectral domain. We apply the proposed model on two datasets: the challenging ABIDE database, which comprises functional MRI data of 403 patients with autism spectrum disorder (ASD) and 468 healthy controls aggregated from multiple acquisition sites, and a set of 2500 subjects from UK Biobank. We demonstrate the performance of the method for the tasks of classification between matching and non-matching graphs, as well as individual subject classification and manifold learning, showing that it leads to significantly improved results compared to traditional methods.
Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy
NASA Astrophysics Data System (ADS)
Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris
2018-04-01
We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver operating characteristics.
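The core of matched filtering, stripped of detector details, is a correlation of whitened data against a whitened template; the template waveform and injection amplitude below are illustrative:

```python
import numpy as np

def matched_filter_snr(data, template):
    """Matched filter reduced to its core: correlate the data stream with a
    known template (both assumed already whitened, as in the paper's inputs)
    and normalize by the template's norm."""
    corr = np.correlate(data, template, mode="valid")
    return corr / np.linalg.norm(template)

rng = np.random.default_rng(1)
template = np.sin(2 * np.pi * 0.1 * np.arange(64)) * np.hanning(64)
data = rng.normal(0, 1, 1024)         # unit-variance whitened noise
data[400:464] += 5 * template         # injected "signal" at sample 400
snr = matched_filter_snr(data, template)
```

The output peaks at the injection location, which is the statistic a matched-filtering search thresholds on and the behavior the network is trained to emulate.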
Annunziata, Roberto; Trucco, Emanuele
2016-11-01
Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation, as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation, when used as input to a random forest classifier.
Effective image differencing with convolutional neural networks for real-time transient hunting
NASA Astrophysics Data System (ADS)
Sedaghat, Nima; Mahabal, Ashish
2018-06-01
Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with a varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than the individual images, and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like the Zwicky Transient Facility and Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.
NASA Astrophysics Data System (ADS)
Giongo Fernandes, Alexandre; Benjamin, Robert A.; Babler, Brian
2018-01-01
Two sets of infrared images of the Galactic Center region (|L| < 1 degree and |B| < 0.75 degrees) taken by the Spitzer Space Telescope in the IRAC 3.6 micron and 4.5 micron bands are searched for high proper motion objects (> 100 mas/year). The two image sets come from GALCEN observations in 2005 and GLIMPSE proper observations in 2015 with matched observation modes. We use three different methods to search for these objects in extremely crowded fields: (1) comparing matched point source lists, (2) crowd sourcing by several college introductory astronomy classes in the state of Wisconsin (700 volunteers), and (3) convolutional neural networks trained using objects from the previous two methods. Before our search, six high proper motion objects were known, four of which were found by the VVV near-infrared Galactic plane survey. We compare and describe our methods for this search, and present a preliminary catalog of high proper motion objects.
Turbo Trellis Coded Modulation With Iterative Decoding for Mobile Satellite Communications
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1997-01-01
In this paper, analytical bounds on the performance of parallel concatenation of two codes, known as turbo codes, and serial concatenation of two codes over fading channels are obtained. Based on this analysis, design criteria for the selection of component trellis codes for MPSK modulation, and a suitable bit-by-bit iterative decoding structure, are proposed. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK modulation. The parallel concatenation example uses two rate 4/5 8-state convolutional codes with two interleavers. The convolutional codes' outputs are then mapped to two 8PSK modulations. The serial concatenated code example uses an 8-state outer code with rate 4/5 and a 4-state inner trellis code with 5 inputs and 2 x 8PSK outputs per trellis branch. Based on the above-mentioned design criteria for fading channels, a method to obtain the structure of the trellis code with maximum diversity is proposed. Simulation results are given for AWGN and an independent Rayleigh fading channel with perfect Channel State Information (CSI).
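For readers unfamiliar with convolutional codes, a minimal rate-1/2 feed-forward encoder illustrates the encoding operation; this toy (7,5) code is much smaller than the rate 4/5, 8-state component codes used in the paper:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 feed-forward convolutional encoder with generator polynomials
    (7,5) in octal -- a 4-state toy code. For each input bit, two output bits
    are produced as parities of the tapped shift register contents."""
    state = 0                        # two memory bits
    out = []
    for b in bits:
        reg = (b << 2) | state       # [current bit, s1, s2]
        out.append(bin(reg & g1).count("1") % 2)
        out.append(bin(reg & g2).count("1") % 2)
        state = reg >> 1             # current bit shifts into memory
    return out

# Encoding the message 1,0,1,1 yields the codeword 11 10 00 01
codeword = conv_encode([1, 0, 1, 1])
```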
Multiple Objects Fusion Tracker Using a Matching Network for Adaptively Represented Instance Pairs
Oh, Sang-Il; Kang, Hang-Bong
2017-01-01
Multiple-object tracking is affected by various sources of distortion, such as occlusion, illumination variations and motion changes. Overcoming these distortions by tracking on RGB frames alone, such as by shifting, has limitations because of distortions inherent to RGB frames. To overcome these distortions, we propose a multiple-object fusion tracker (MOFT), which uses a combination of 3D point clouds and corresponding RGB frames. The MOFT uses a matching function initialized on large-scale external sequences to determine which candidates in the current frame match the target object in the previous frame. After conducting tracking on a few frames, the initialized matching function is fine-tuned according to the appearance models of the target objects. The fine-tuning process of the matching function is constructed as a structured form with diverse matching function branches. In general multiple object tracking situations, scale variations for a scene occur depending on the distance between the target objects and the sensors. If target objects at various scales are represented with the same strategy, information losses will occur for any representation of the target objects. In this paper, the output map of the convolutional layer obtained from a pre-trained convolutional neural network is used to adaptively represent instances without information loss. In addition, MOFT fuses the tracking results obtained from each modality at the decision level to compensate for the tracking failures of each modality using basic belief assignment, rather than fusing modalities by selectively using the features of each modality. Experimental results indicate that the proposed tracker provides state-of-the-art performance on multiple object tracking (MOT) and KITTI benchmarks.
Detection of bars in galaxies using a deep convolutional neural network
NASA Astrophysics Data System (ADS)
Abraham, Sheelu; Aniyan, A. K.; Kembhavi, Ajit K.; Philip, N. S.; Vaghmare, Kaustubh
2018-06-01
We present an automated method for the detection of bar structure in optical images of galaxies using a deep convolutional neural network that is easy to use and provides good accuracy. In our study, we use a sample of 9346 galaxies in the redshift range of 0.009-0.2 from the Sloan Digital Sky Survey (SDSS), which has 3864 barred galaxies, the rest being unbarred. We reach a top precision of 94 per cent in identifying bars in galaxies using the trained network. This accuracy matches the accuracy reached by human experts on the same data without additional information about the images. Since deep convolutional neural networks can be scaled to handle large volumes of data, the method is expected to have great relevance in an era where astronomy data is rapidly increasing in terms of volume, variety, volatility, and velocity along with other V's that characterize big data. With the trained model, we have constructed a catalogue of barred galaxies from SDSS and made it available online.
Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; LeCun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2012-01-01
Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process frame by frame video information in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out. Thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes it ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, hardware should be modular, reconfigurable, and expansible. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGA have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented along with discussions of their differences, pros and cons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, N; Najafi, M; Hancock, S
Purpose: Robust matching of ultrasound images is a challenging problem, as images of the same anatomy often present non-trivial differences. This poses an obstacle for ultrasound guidance in radiotherapy. Our objective is to overcome this obstacle by designing and evaluating an image block matching framework based on a two-channel deep convolutional neural network. Methods: We extend to 3D an algorithmic structure previously introduced for 2D image feature learning [1]. To obtain the similarity between two 3D image blocks A and B, the blocks are divided into 2D patches Ai and Bi. The similarity is then calculated as the average similarity score of Ai and Bi. The neural network was trained with public non-medical image pairs, and subsequently evaluated on ultrasound image blocks for the following scenarios: (S1) same image blocks with/without shifts (A and A-shift-x); (S2) non-related random block pairs; (S3) ground truth registration matched pairs of different ultrasound images with/without shifts (A-i and A-reg-i-shift-x). Results: For S1 the similarity scores of A and A-shift-x were 32.63, 18.38, 12.95, 9.23, 2.15 and 0.43 for x ranging from 0 mm to 10 mm in 2 mm increments. For S2 the average similarity score for non-related block pairs was −1.15. For S3 the average similarity score of ground truth registration matched blocks A-i and A-reg-i-shift-0 (1≤i≤5) was 12.37. After translating A-reg-i-shift-0 by 0 mm, 2 mm, 4 mm, 6 mm, 8 mm, and 10 mm, the average similarity scores of A-i and A-reg-i-shift-x were 11.04, 8.42, 4.56, 2.27, and 0.29 respectively. Conclusion: The proposed method correctly assigns the highest similarity to corresponding 3D ultrasound image blocks despite differences in image content, and thus can form the basis for ultrasound image registration and tracking. [1] Zagoruyko, Komodakis, "Learning to compare image patches via convolutional neural networks", IEEE CVPR 2015, pp. 4353-4361.
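The block-level similarity described in the Methods can be sketched as an average of per-slice scores; normalized cross-correlation stands in here for the trained two-channel network, which is not reproduced:

```python
import numpy as np

def block_similarity(block_a, block_b, patch_sim):
    """Score two 3D image blocks by slicing each into aligned 2D patches
    (here: slices along the first axis) and averaging a per-patch similarity,
    mirroring the abstract's Ai/Bi averaging."""
    scores = [patch_sim(a, b) for a, b in zip(block_a, block_b)]
    return float(np.mean(scores))

def ncc(a, b):
    """Normalized cross-correlation, a hypothetical stand-in for the CNN."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

rng = np.random.default_rng(2)
block = rng.normal(size=(8, 16, 16))
same = block_similarity(block, block, ncc)                        # near 1
different = block_similarity(block, rng.normal(size=(8, 16, 16)), ncc)
```

As in the abstract's S1/S2 scenarios, identical blocks score high and unrelated blocks score near zero.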
Perfect transmission at oblique incidence by trigonal warping in graphene P-N junctions
NASA Astrophysics Data System (ADS)
Zhang, Shu-Hui; Yang, Wen
2018-01-01
We develop an analytical mode-matching technique for the tight-binding model to describe electron transport across graphene P-N junctions. This method shares the simplicity of the conventional mode-matching technique for the low-energy continuum model and the accuracy of the tight-binding model over a wide range of energies. It further reveals an interesting phenomenon on a sharp P-N junction: the disappearance of the well-known Klein tunneling (i.e., perfect transmission) at normal incidence and the appearance of perfect transmission at oblique incidence due to trigonal warping at energies beyond the linear Dirac regime. We show that this phenomenon arises from the conservation of a generalized pseudospin in the tight-binding model. We expect this effect to be experimentally observable in graphene and other Dirac fermion systems, such as the surface of three-dimensional topological insulators.
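For reference, in the linear Dirac regime a sharp, symmetric P-N junction gives the textbook angular transmission T(θ) = cos²θ, perfect only at normal incidence; the trigonal-warping effect reported in the paper, which shifts perfect transmission to oblique incidence, is beyond this sketch:

```python
import math

def klein_transmission(theta):
    """Transmission through a sharp, symmetric graphene P-N junction in the
    linear Dirac regime: T = cos^2(theta). Perfect at normal incidence (the
    Klein tunneling the paper shows is modified by trigonal warping)."""
    return math.cos(theta) ** 2
```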
Genotyping by alkaline dehybridization using graphically encoded particles.
Zhang, Huaibin; DeConinck, Adam J; Slimmer, Scott C; Doyle, Patrick S; Lewis, Jennifer A; Nuzzo, Ralph G
2011-03-01
This work describes a nonenzymatic, isothermal genotyping method based on the kinetic differences exhibited in the dehybridization of perfectly matched (PM) and single-base mismatched (MM) DNA duplexes in an alkaline solution. Multifunctional encoded hydrogel particles incorporating allele-specific oligonucleotide (ASO) probes in two distinct regions were fabricated by using microfluidic-based stop-flow lithography. Each particle contained two distinct ASO probe sequences differing at a single base position, and thus each particle was capable of simultaneously probing two distinct target alleles. Fluorescently labeled target alleles were annealed to both probe regions of a particle, and the rate of duplex dehybridization was monitored by using fluorescence microscopy. Duplex dehybridization was achieved through an alkaline stimulus using either a pH step function or a temporal pH gradient. When a single target probe sequence was used, the rate of mismatch duplex dehybridization could be discriminated from the rate of perfect match duplex dehybridization. In a more demanding application in which two distinct probe sequences were used, we found that the rate profiles provided a means to discriminate probe dehybridizations from both of the two mismatched duplexes as well as to distinguish with high certainty the dehybridization of the two perfectly matched duplexes. These results demonstrate an ability of alkaline dehybridization to correctly discriminate the rank hierarchy of thermodynamic stability among four sets of perfect match and single-base mismatch duplexes. We further demonstrate that these rate profiles are strongly temperature dependent and illustrate how the sensitivity can be compensated beneficially by the use of an actuating gradient pH field.
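The kinetic discrimination described above can be illustrated with a first-order dehybridization model; the rate constants and time scale are hypothetical, chosen only so the mismatched duplex decays faster than the perfect match:

```python
import numpy as np

def remaining_fraction(t, k):
    """First-order dehybridization: fraction of duplexes still hybridized
    after time t at rate constant k. Illustrative model only; the rates
    below are hypothetical, reflecting that a single-base mismatch (MM)
    is less stable and so dehybridizes faster than a perfect match (PM)."""
    return np.exp(-k * t)

t = np.linspace(0, 60, 121)            # seconds (hypothetical scale)
pm = remaining_fraction(t, k=0.02)     # perfect match: slower loss
mm = remaining_fraction(t, k=0.10)     # single-base mismatch: faster loss
contrast = pm - mm                     # fluorescence contrast window
```

The contrast curve peaks at an intermediate time, which is why monitoring the full rate profile (rather than a single endpoint) lets the method rank duplex stabilities.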
NASA Technical Reports Server (NTRS)
Garai, Anirban; Diosady, Laslo T.; Murman, Scott M.; Madavan, Nateri K.
2016-01-01
The perfectly matched layer (PML) technique is developed in the context of a high-order spectral-element discontinuous Galerkin (DG) method. The technique is applied to a range of test cases and is shown to be superior to other approaches, such as those based on characteristic boundary conditions and sponge layers, for treating the inflow and outflow boundaries of computational domains. In general, the PML technique improves the quality of the numerical results for simulations of practical flow configurations, but it also exhibits some instabilities for large perturbations. A preliminary analysis that attempts to understand the source of these instabilities is discussed.
Gallmeier, F. X.; Iverson, E. B.; Lu, W.; ...
2016-01-08
Neutron transport simulation codes are an indispensable tool used for the design and construction of modern neutron scattering facilities and instrumentation. It has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well-modelled by the existing codes. In particular, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4 and other codes ignores scattering from oriented crystals and refractive effects, yet these are essential ingredients for the performance of monochromators and ultra-cold neutron transport, respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. Furthermore, we have also generated silicon scattering kernels for single crystals of definable orientation with respect to an incoming neutron beam. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cutoff at locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter, composed of polyethylene and single-crystal silicon, were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon/void layers. Finally, the convoluted moderator experiments described by Iverson et al. were simulated, and we find satisfactory agreement between the measurements and the results of simulations performed using the tools we have developed.
Wallis, Thomas S A; Funke, Christina M; Ecker, Alexander S; Gatys, Leon A; Wichmann, Felix A; Bethge, Matthias
2017-10-01
Our visual environment is full of texture-"stuff" like cloth, bark, or gravel as distinct from "things" like dresses, trees, or paths-and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (convolutional neural network [CNN] model) that uses the features encoded by a deep CNN (VGG-19) with two other models: the venerable Portilla and Simoncelli model and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery ("parafoveal") and when observers were able to make eye movements to all three patches ("inspection"). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.
Backscattering from a Gaussian distributed, perfectly conducting, rough surface
NASA Technical Reports Server (NTRS)
Brown, G. S.
1977-01-01
The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces; however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to more clearly illustrate the method. The surface is assumed to be Gaussian distributed so that the surface height can be split into large- and small-scale components relative to the electromagnetic wavelength. A first-order perturbation approach is employed wherein the scattering solution for the large-scale structure is perturbed by the small-scale diffraction effects. The scattering from the large-scale structure is treated via geometrical optics techniques. The effect of the large-scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization- and surface-slope-dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near-normal-incidence geometrical optics and wide-angle Bragg scattering results.
The Analysis and Construction of Perfectly Matched Layers for the Linearized Euler Equations
NASA Technical Reports Server (NTRS)
Hesthaven, J. S.
1997-01-01
We present a detailed analysis of a recently proposed perfectly matched layer (PML) method for the absorption of acoustic waves. The split set of equations is shown to be only weakly well-posed, and ill-posed under small low-order perturbations. This analysis explains the stability problems associated with the split-field formulation and illustrates why applying a filter has a stabilizing effect. Utilizing recent results obtained within the context of electromagnetics, we develop strongly well-posed absorbing layers for the linearized Euler equations. The schemes are shown to be perfectly absorbing independent of frequency and angle of incidence of the wave in the case of a non-convecting mean flow. In the general case of a convecting mean flow, a number of techniques is combined to obtain absorbing layers exhibiting PML-like behavior. The efficacy of the proposed absorbing layers is illustrated through computation of benchmark problems in aero-acoustics.
Palacios-Flores, Kim; García-Sotelo, Jair; Castillo, Alejandra; Uribe, Carina; Aguilar, Luis; Morales, Lucía; Gómez-Romero, Laura; Reyes, José; Garciarubio, Alejandro; Boege, Margareta; Dávila, Guillermo
2018-01-01
We present a conceptually simple, sensitive, precise, and essentially nonstatistical solution for the analysis of genome variation in haploid organisms. The generation of a Perfect Match Genomic Landscape (PMGL), which computes intergenome identity with single nucleotide resolution, reveals signatures of variation wherever a query genome differs from a reference genome. Such signatures encode the precise location of different types of variants, including single nucleotide variants, deletions, insertions, and amplifications, effectively introducing the concept of a general signature of variation. The precise nature of variants is then resolved through the generation of targeted alignments between specific sets of sequence reads and known regions of the reference genome. Thus, the perfect match logic decouples the identification of the location of variants from the characterization of their nature, providing a unified framework for the detection of genome variation. We assessed the performance of the PMGL strategy via simulation experiments. We determined the variation profiles of natural genomes and of a synthetic chromosome, both in the context of haploid yeast strains. Our approach uncovered variants that have previously escaped detection. Moreover, our strategy is ideally suited for further refining high-quality reference genomes. The source codes for the automated PMGL pipeline have been deposited in a public repository. PMID:29367403
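The perfect-match logic behind the PMGL can be sketched with a toy k-mer landscape; the sequences, read length, and k below are illustrative inventions, not the published pipeline:

```python
def perfect_match_landscape(reference, reads, k=5):
    """For each k-mer window of the reference, count reads that contain it exactly."""
    read_kmers = [set(r[i:i + k] for i in range(len(r) - k + 1)) for r in reads]
    landscape = []
    for i in range(len(reference) - k + 1):
        kmer = reference[i:i + k]
        landscape.append(sum(kmer in ks for ks in read_kmers))
    return landscape

# Toy example: error-free reads from a query genome that carries a single-base
# substitution (position 7, A -> G) relative to the reference.
reference = "ACGTACGATCGGATT"
query     = "ACGTACGGTCGGATT"
reads = [query[i:i + 8] for i in range(len(query) - 7)]

landscape = perfect_match_landscape(reference, reads, k=5)
# Every reference window overlapping the variant loses all perfect-match
# support, producing the dip that serves as the signature of variation.
```

Locating the dip answers *where* the genome differs; as the abstract notes, *what* the variant is would then be resolved by targeted alignment of the relevant reads.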
Acoustic metamaterials with broadband and wide-angle impedance matching
NASA Astrophysics Data System (ADS)
Liu, Chenkai; Luo, Jie; Lai, Yun
2018-04-01
We propose a general approach to design broadband and wide-angle impedance-matched acoustic metamaterials. Such an unusual acoustic impedance matching characteristic can be well explained by using a spatially dispersive effective medium theory. For demonstrations, we used silicone rubber, which has a huge impedance contrast with water, to design one- and two-dimensional acoustic structures which are almost perfectly impedance matched to water for a wide range of incident angles and in a broad frequency band. Our work opens up an approach to realize extraordinary acoustic impedance matching properties via metamaterial-design techniques.
Modeling of direct detection Doppler wind lidar. I. The edge technique.
McKay, J A
1998-09-20
Analytic models, based on a convolution of a Fabry-Perot etalon transfer function with a Gaussian spectral source, are developed for the shot-noise-limited measurement precision of Doppler wind lidars based on the edge filter technique by use of either molecular or aerosol atmospheric backscatter. The Rayleigh backscatter formulation yields a map of theoretical sensitivity versus etalon parameters, permitting design optimization and showing that the optimal system will have a Doppler measurement uncertainty no better than approximately 2.4 times that of a perfect, lossless receiver. An extension of the models to include the effect of limited etalon aperture leads to a condition for the minimum aperture required to match light collection optics. It is shown that, depending on the choice of operating point, the etalon aperture finesse must be 4-15 to avoid degradation of measurement precision. A convenient, closed-form expression for the measurement precision is obtained for spectrally narrow backscatter and is shown to be useful for backscatter that is spectrally broad as well. The models are extended to include extrinsic noise, such as solar background or the Rayleigh background on an aerosol Doppler lidar. A comparison of the model predictions with experiment has not yet been possible, but a comparison with detailed instrument modeling by McGill and Spinhirne shows satisfactory agreement. The models derived here will be more conveniently implemented than McGill and Spinhirne's and more readily permit physical insights to the optimization and limitations of the double-edge technique.
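The core of the analytic model, a convolution of a Fabry-Perot (Airy) transfer function with a Gaussian spectral source, can be sketched numerically; the etalon and spectrum parameters below are illustrative, not those of the paper:

```python
import numpy as np

# Illustrative parameters: free spectral range, finesse, and the Gaussian RMS
# width of the (Doppler-broadened) backscatter spectrum.
fsr = 10e9            # Hz
finesse = 10.0
sigma_s = 1.0e9       # Hz

nu = np.linspace(-2.0 * fsr, 2.0 * fsr, 4001)
dnu = nu[1] - nu[0]

# Fabry-Perot (Airy) etalon transfer function.
airy = 1.0 / (1.0 + (2.0 * finesse / np.pi) ** 2 * np.sin(np.pi * nu / fsr) ** 2)

# Normalized Gaussian spectral source.
gauss = np.exp(-0.5 * (nu / sigma_s) ** 2)
gauss /= gauss.sum() * dnu

# The edge-filter signal is the convolution of the two; a Doppler shift
# slides the broadened source across the steep edge of the etalon fringe.
edge_signal = np.convolve(airy, gauss, mode="same") * dnu

# Sensitivity: fractional signal change per unit frequency shift.
sensitivity = np.gradient(edge_signal, dnu) / edge_signal
```

Maximizing this sensitivity over etalon parameters is, in spirit, the design-optimization map the abstract describes.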
Data matching for free-surface multiple attenuation by multidimensional deconvolution
NASA Astrophysics Data System (ADS)
van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald
2012-09-01
A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.
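The deterministic-inversion idea can be illustrated with a minimal one-dimensional deconvolution sketch; this is a Tikhonov-regularized frequency-domain inversion with made-up values, far simpler than the multidimensional (MDD) scheme of the paper:

```python
import numpy as np

n = 256
reflectivity = np.zeros(n)
reflectivity[[40, 90, 150]] = [1.0, -0.6, 0.4]   # sparse spike series

wavelet = np.zeros(n)
wavelet[:3] = [1.0, 0.5, 0.25]                   # short minimum-phase source pulse

# Forward model: data are the circular convolution of wavelet and reflectivity.
W = np.fft.fft(wavelet)
data = np.real(np.fft.ifft(W * np.fft.fft(reflectivity)))

# Deterministic inversion: divide out the wavelet in the frequency domain,
# stabilized by a small Tikhonov damping term.
eps = 1e-3 * np.max(np.abs(W)) ** 2
est = np.real(np.fft.ifft(np.fft.fft(data) * np.conj(W) / (np.abs(W) ** 2 + eps)))
# est recovers both the positions and the amplitudes of the spikes.
```

Because the operator is divided out rather than adaptively subtracted, interfering events do not bias the result, which is the advantage the abstract claims for MDD over adaptive subtraction.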
Kaltenbacher, Barbara; Kaltenbacher, Manfred; Sim, Imbo
2013-01-01
We consider the second order wave equation in an unbounded domain and propose an advanced perfectly matched layer (PML) technique for its efficient and reliable simulation. In doing so, we concentrate on the time domain case and use the finite-element (FE) method for the space discretization. Our un-split-PML formulation requires four auxiliary variables within the PML region in three space dimensions. For a reduced version (rPML), we present a long time stability proof based on an energy analysis. The numerical case studies and an application example demonstrate the good performance and long time stability of our formulation for treating open domain problems. PMID:23888085
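The role of an absorbing layer for the second-order wave equation can be illustrated with a minimal one-dimensional sponge-layer sketch: a plain damping term sigma(x), much simpler than the paper's unsplit CFS-PML with auxiliary variables, and with all parameters chosen for illustration only:

```python
import numpy as np

nx, nt = 400, 800
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
c = 1.0
dt = 0.5 * dx / c                        # CFL-stable time step

# Quadratic damping ramp occupying the last 20% of the domain:
# u_tt = c^2 u_xx - 2 sigma(x) u_t.
sigma = np.zeros(nx)
layer = x > 0.8
sigma[layer] = 200.0 * ((x[layer] - 0.8) / 0.2) ** 2

u_prev = np.exp(-(((x - 0.4) / 0.05) ** 2))   # Gaussian pulse, zero initial velocity
u = u_prev.copy()
for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u_next = (2.0 * u - u_prev + dt**2 * c**2 * lap
              - 2.0 * sigma * dt * (u - u_prev))
    u_next[0] = u_next[-1] = 0.0         # fixed (Dirichlet) outer walls
    u_prev, u = u, u_next

# The right-going half of the pulse entered the layer and was absorbed
# instead of reflecting back into the interior.
residual = np.max(np.abs(u[x > 0.95]))
```

A true PML goes further: it is reflectionless at the interface for all frequencies and angles by construction, which a simple sponge ramp only approximates.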
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pourmatin, Hossein, E-mail: mpourmat@andrew.cmu.edu; Dayal, Kaushik, E-mail: kaushik@cmu.edu
2016-10-15
We consider the scattering of incident plane-wave electrons from a defect in a crystal modeled by the time-harmonic Schrödinger equation. While the defect potential is localized, the far-field potential is periodic, unlike standard free-space scattering problems. Previous work on the Schrödinger equation has been almost entirely in free-space conditions; a few works on crystals have been in one-dimension. We construct absorbing boundary conditions for this problem using perfectly matched layers in a tight-binding formulation. Using the example of a point defect in graphene, we examine the efficiency and convergence of the proposed absorbing boundary condition.
ERIC Educational Resources Information Center
Monahan, Patrick O.; Ankenmann, Robert D.
2010-01-01
When the matching score is either less than perfectly reliable or not a sufficient statistic for determining latent proficiency in data conforming to item response theory (IRT) models, Type I error (TIE) inflation may occur for the Mantel-Haenszel (MH) procedure or any differential item functioning (DIF) procedure that matches on summed-item…
NASA Technical Reports Server (NTRS)
Feria, Y.; Cheung, K.-M.
1995-01-01
In a time-varying signal-to-noise ratio (SNR) environment, symbol rate is often changed to maximize data return. However, the symbol-rate change has some undesirable effects, such as changing the transmission bandwidth and perhaps causing the receiver symbol loop to lose lock temporarily, thus losing some data. In this article, we are proposing an alternate way of varying the data rate without changing the symbol rate and, therefore, the transmission bandwidth. The data rate change is achieved in a seamless fashion by puncturing the convolutionally encoded symbol stream to adapt to the changing SNR environment. We have also derived an exact expression to enumerate the number of distinct puncturing patterns. To demonstrate this seamless rate change capability, we searched for good puncturing patterns for the Galileo (14,1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997. We show that this scheme reduces the symbol-rate changes from nine to two and provides a comparable data return in a day and a higher symbol SNR during most of the day.
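The puncturing mechanism can be sketched with a generic rate-1/2 convolutional code; the (7,5)-octal code and the pattern below are textbook choices for illustration, not the Galileo (14,1/4) code or the paper's searched patterns:

```python
def conv_encode(bits, polys=(0b111, 0b101)):
    """Rate-1/2, constraint-length-3 convolutional encoder (generators 7,5 octal).
    A generic textbook code, not the Galileo (14,1/4) code."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        for p in polys:
            out.append(bin(state & p).count("1") % 2)
    return out

def puncture(symbols, pattern):
    """Drop every symbol where the cyclically repeated pattern holds a 0."""
    return [s for i, s in enumerate(symbols) if pattern[i % len(pattern)]]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(bits)            # 16 symbols at rate 1/2
# Keeping 3 of every 4 symbols raises the code rate from 1/2 to 2/3 without
# touching the symbol clock: the data rate changes, the bandwidth does not.
punctured = puncture(coded, (1, 1, 1, 0))
```

Switching among puncturing patterns is what lets the data rate track the SNR profile while the receiver's symbol loop never has to reacquire lock.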
NASA Astrophysics Data System (ADS)
Feria, Y.; Cheung, K.-M.
1994-10-01
In a time-varying signal-to-noise ratio (SNR) environment, symbol rate is often changed to maximize data return. However, the symbol-rate change has some undesirable effects, such as changing the transmission bandwidth and perhaps causing the receiver symbol loop to lose lock temporarily, thus losing some data. In this article, we are proposing an alternate way of varying the data rate without changing the symbol rate and, therefore, the transmission bandwidth. The data rate change is achieved in a seamless fashion by puncturing the convolutionally encoded symbol stream to adapt to the changing SNR environment. We have also derived an exact expression to enumerate the number of distinct puncturing patterns. To demonstrate this seamless rate-change capability, we searched for good puncturing patterns for the Galileo (14,1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997. We show that this scheme reduces the symbol-rate changes from nine to two and provides a comparable data return in a day and a higher symbol SNR during most of the day.
NASA Astrophysics Data System (ADS)
Feria, Y.; Cheung, K.-M.
1995-02-01
In a time-varying signal-to-noise ratio (SNR) environment, symbol rate is often changed to maximize data return. However, the symbol-rate change has some undesirable effects, such as changing the transmission bandwidth and perhaps causing the receiver symbol loop to lose lock temporarily, thus losing some data. In this article, we are proposing an alternate way of varying the data rate without changing the symbol rate and, therefore, the transmission bandwidth. The data rate change is achieved in a seamless fashion by puncturing the convolutionally encoded symbol stream to adapt to the changing SNR environment. We have also derived an exact expression to enumerate the number of distinct puncturing patterns. To demonstrate this seamless rate change capability, we searched for good puncturing patterns for the Galileo (14,1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997. We show that this scheme reduces the symbol-rate changes from nine to two and provides a comparable data return in a day and a higher symbol SNR during most of the day.
A fast button surface defects detection method based on convolutional neural network
NASA Astrophysics Data System (ADS)
Liu, Lizhe; Cao, Danhua; Wu, Songlin; Wu, Yubin; Wei, Taoran
2018-01-01
Considering the complexity of button surface textures and the variety of buttons and defects, we propose a fast visual method for button surface defect detection based on a convolutional neural network (CNN). A CNN can learn to extract the essential features through training, avoiding the design of complex feature operators adapted to different kinds of buttons, textures, and defects. First, we obtain the normalized button region and then use an HOG-SVM method to identify the front and back sides of the button. Finally, a convolutional neural network is developed to recognize the defects. To detect subtle defects, we propose a network structure with multiple feature-channel inputs. To deal with defects of different scales, we adopt a strategy of multi-scale image block detection. The experimental results show that our method is valid for a variety of buttons and able to recognize all kinds of defects that occur, including dents, cracks, stains, holes, wrong paint, and unevenness. The detection rate exceeds 96%, which is much better than traditional methods based on SVM or on template matching. Our method reaches a speed of 5 fps on a DSP-based smart camera with a 600 MHz clock frequency.
Palacios-Flores, Kim; García-Sotelo, Jair; Castillo, Alejandra; Uribe, Carina; Aguilar, Luis; Morales, Lucía; Gómez-Romero, Laura; Reyes, José; Garciarubio, Alejandro; Boege, Margareta; Dávila, Guillermo
2018-04-01
We present a conceptually simple, sensitive, precise, and essentially nonstatistical solution for the analysis of genome variation in haploid organisms. The generation of a Perfect Match Genomic Landscape (PMGL), which computes intergenome identity with single nucleotide resolution, reveals signatures of variation wherever a query genome differs from a reference genome. Such signatures encode the precise location of different types of variants, including single nucleotide variants, deletions, insertions, and amplifications, effectively introducing the concept of a general signature of variation. The precise nature of variants is then resolved through the generation of targeted alignments between specific sets of sequence reads and known regions of the reference genome. Thus, the perfect match logic decouples the identification of the location of variants from the characterization of their nature, providing a unified framework for the detection of genome variation. We assessed the performance of the PMGL strategy via simulation experiments. We determined the variation profiles of natural genomes and of a synthetic chromosome, both in the context of haploid yeast strains. Our approach uncovered variants that have previously escaped detection. Moreover, our strategy is ideally suited for further refining high-quality reference genomes. The source codes for the automated PMGL pipeline have been deposited in a public repository. Copyright © 2018 by the Genetics Society of America.
Facet Annotation by Extending CNN with a Matching Strategy.
Wu, Bei; Wei, Bifan; Liu, Jun; Guo, Zhaotong; Zheng, Yuanhao; Chen, Yihe
2018-06-01
Most community question answering (CQA) websites manage plenty of question-answer pairs (QAPs) through topic-based organizations, which may not satisfy users' fine-grained search demands. Facets of topics serve as a powerful tool to navigate, refine, and group the QAPs. In this work, we propose FACM, a model to annotate QAPs with facets by extending convolutional neural networks (CNNs) with a matching strategy. First, phrase information is incorporated into text representation by CNNs with different kernel sizes. Then, through a matching strategy among QAPs and facet label texts (FaLTs) acquired from Wikipedia, we generate similarity matrices to deal with the facet heterogeneity. Finally, a three-channel CNN is trained for facet label assignment of QAPs. Experiments on three real-world data sets show that FACM outperforms the state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Song, Wanjun; Zhang, Hou
2017-11-01
By introducing the alternating direction implicit (ADI) technique and a memory-optimized algorithm into the shift operator (SO) finite-difference time-domain (FDTD) method, a memory-optimized SO-ADI FDTD method for nonmagnetized collisional plasma is proposed, and the corresponding formulae of the proposed method for programming are deduced. To further improve computational efficiency, an iterative method rather than Gaussian elimination is employed to solve the equation set in the derivation of the formulae. Complicated transformations and convolutions are avoided in the proposed method compared with the Z-transform (ZT) ADI FDTD method and the piecewise linear JE recursive convolution (PLJERC) ADI FDTD method. The numerical dispersion of the SO-ADI FDTD method with different plasma frequencies and electron collision frequencies is analyzed, and an appropriate ratio of grid size to the minimum wavelength is given. The accuracy of the proposed method is validated by a reflection coefficient test on a nonmagnetized collisional plasma sheet. The test results show that the proposed method is advantageous for improving computational efficiency and saving computer memory. The reflection coefficient of a perfect electric conductor (PEC) sheet covered by multilayer plasma and the RCS of objects coated with plasma are calculated by the proposed method, and the simulation results are analyzed.
NASA Astrophysics Data System (ADS)
Luo, Chang; Wang, Jie; Feng, Gang; Xu, Suhui; Wang, Shiqiang
2017-10-01
Deep convolutional neural networks (CNNs) have been widely used to obtain high-level representations in various computer vision tasks. However, for remote scene classification, there are not sufficient images to train a very deep CNN from scratch. From two viewpoints on generalization power, we propose two promising kinds of deep CNNs for remote scenes and investigate whether CNNs need to be deep for remote scene classification. First, we transfer successful pretrained deep CNNs to remote scenes, based on the theory that the depth of CNNs brings generalization power by learning available hypotheses for finite data samples. Second, according to the opposite viewpoint that the generalization power of deep CNNs comes from massive memorization and that shallow CNNs with enough neural nodes have perfect finite-sample expressivity, we design a lightweight deep CNN (LDCNN) for remote scene classification. With five well-known pretrained deep CNNs, experimental results on two independent remote-sensing datasets demonstrate that transferred deep CNNs can achieve state-of-the-art results in an unsupervised setting. However, because of its shallow architecture, LDCNN cannot obtain satisfactory performance, regardless of whether the setting is unsupervised, semisupervised, or supervised. CNNs really do need depth to obtain general features for remote scenes. This paper also provides a baseline for applying deep CNNs to other remote-sensing tasks.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
... modifiers available to algorithms used by Floor brokers to route interest to the Exchange's matching engine...-Quotes entered into the matching engine by an algorithm on behalf of a Floor broker. STP modifiers would... algorithms removes impediments to and perfects the mechanism of a free and open market because there is a...
Grammatical verb aspect and event roles in sentence processing.
Madden-Lombardi, Carol; Dominey, Peter Ford; Ventre-Dominey, Jocelyne
2017-01-01
Two experiments examine how grammatical verb aspect constrains our understanding of events. According to linguistic theory, an event described in the perfect aspect (John had opened the bottle) should evoke a mental representation of a finished event with focus on the resulting object, whereas an event described in the imperfective aspect (John was opening the bottle) should evoke a representation of the event as ongoing, including all stages of the event, and focusing all entities relevant to the ongoing action (instruments, objects, agents, locations, etc.). To test this idea, participants saw rebus sentences in the perfect and imperfective aspect, presented one word at a time, self-paced. In each sentence, the instrument and the recipient of the action were replaced by pictures (John was using/had used a *corkscrew* to open the *bottle* at the restaurant). Time to process the two images as well as speed and accuracy on sensibility judgments were measured. Although experimental sentences always made sense, half of the object and instrument pictures did not match the temporal constraints of the verb. For instance, in perfect sentences aspect-congruent trials presented an image of the corkscrew closed (no longer in-use) and the wine bottle fully open. The aspect-incongruent yet still sensible versions either replaced the corkscrew with an in-use corkscrew (open, in-hand) or the bottle image with a half-opened bottle. In this case, the participant would still respond "yes", but with longer expected response times. A three-way interaction among Verb Aspect, Sentence Role, and Temporal Match on image processing times showed that participants were faster to process images that matched rather than mismatched the aspect of the verb, especially for resulting objects in perfect sentences. A second experiment replicated and extended the results to confirm that this was not due to the placement of the object in the sentence. 
These two experiments extend previous research, showing how verb aspect drives not only the temporal structure of event representation, but also the focus on specific roles of the event. More generally, the findings of visual match during online sentence-picture processing are consistent with theories of perceptual simulation.
Grammatical verb aspect and event roles in sentence processing
Madden-Lombardi, Carol; Dominey, Peter Ford; Ventre-Dominey, Jocelyne
2017-01-01
Two experiments examine how grammatical verb aspect constrains our understanding of events. According to linguistic theory, an event described in the perfect aspect (John had opened the bottle) should evoke a mental representation of a finished event with focus on the resulting object, whereas an event described in the imperfective aspect (John was opening the bottle) should evoke a representation of the event as ongoing, including all stages of the event, and focusing all entities relevant to the ongoing action (instruments, objects, agents, locations, etc.). To test this idea, participants saw rebus sentences in the perfect and imperfective aspect, presented one word at a time, self-paced. In each sentence, the instrument and the recipient of the action were replaced by pictures (John was using/had used a *corkscrew* to open the *bottle* at the restaurant). Time to process the two images as well as speed and accuracy on sensibility judgments were measured. Although experimental sentences always made sense, half of the object and instrument pictures did not match the temporal constraints of the verb. For instance, in perfect sentences aspect-congruent trials presented an image of the corkscrew closed (no longer in-use) and the wine bottle fully open. The aspect-incongruent yet still sensible versions either replaced the corkscrew with an in-use corkscrew (open, in-hand) or the bottle image with a half-opened bottle. In this case, the participant would still respond “yes”, but with longer expected response times. A three-way interaction among Verb Aspect, Sentence Role, and Temporal Match on image processing times showed that participants were faster to process images that matched rather than mismatched the aspect of the verb, especially for resulting objects in perfect sentences. A second experiment replicated and extended the results to confirm that this was not due to the placement of the object in the sentence. 
These two experiments extend previous research, showing how verb aspect drives not only the temporal structure of event representation, but also the focus on specific roles of the event. More generally, the findings of visual match during online sentence-picture processing are consistent with theories of perceptual simulation. PMID:29287091
Deep learning guided stroke management: a review of clinical applications.
Feng, Rui; Badgeley, Marcus; Mocco, J; Oermann, Eric K
2018-04-01
Stroke is a leading cause of long-term disability, and outcome is directly related to timely intervention. Not all patients benefit from rapid intervention, however. Thus a significant amount of attention has been paid to using neuroimaging to assess potential benefit by identifying areas of ischemia that have not yet experienced cellular death. The perfusion-diffusion mismatch is used as a simple metric for potential benefit with timely intervention, yet penumbral patterns provide an inaccurate predictor of clinical outcome. Deep learning (artificial intelligence) techniques based on deep neural networks (DNNs) excel at working with complex inputs. The key areas where deep learning may be imminently applied to stroke management are image segmentation, automated featurization (radiomics), and multimodal prognostication. The application of convolutional neural networks, the family of DNN architectures designed to work with images, to stroke imaging data is a perfect match between a mature deep learning technique and a data type that is naturally suited to benefit from deep learning's strengths. These powerful tools have opened up exciting opportunities for data-driven stroke management for acute intervention and for guiding prognosis. Deep learning techniques are useful for the speed and power of the results they can deliver and will become an increasingly standard tool in the modern stroke specialist's arsenal for delivering personalized medicine to patients with ischemic stroke. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Astrophysics Data System (ADS)
Hokmabadi, Mohammad P.; Tareki, Abubaker; Rivera, Elmer; Kung, Patrick; Lindquist, Robert G.; Kim, Seongsin M.
2017-01-01
In this letter, we report the unique design, simulation and experimental verification of an electrically tunable THz metamaterial perfect absorber consisting of complementary split ring resonator (CSRR) arrays integrated with liquid crystal as the subwavelength spacer in between. We observe a shift in resonance frequency of about 5.0 GHz at 0.567 THz with a 5 V bias voltage at 1 kHz between the CSRR and the metal backplane, while the absorbance and full width at half maximum bandwidth are maintained at 90% and 0.025 THz, respectively. The absorption spectrum simulated using a uniaxial model of the LC matches the experimental data perfectly and demonstrates that the effective refractive index of the LC changes between 1.5 and 1.7 by sweeping a 1 kHz bias voltage from 0 V to 5 V. By matching simulation and experiment for different bias voltages, we also estimate the angle of the LC molecules versus the bias voltage. Additionally, we study the THz fields created inside the spacer to gain better insight into the tunable response of this device. This structure and associated study can support the design of liquid-crystal-based tunable terahertz detectors and sensors for various applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Huixu; Li, Zhigang; Stan, Liliana
Broadband perfect absorber based on one ultrathin layer of the refractory metal chromium without structure patterning is proposed and demonstrated. The ideal permittivity of the metal layer for achieving broadband perfect absorption is derived based on the impedance transformation method. Since the permittivity of the refractory metal chromium matches this ideal permittivity well in the visible and near-infrared range, a silica-chromium-silica three-layer absorber is fabricated to demonstrate the broadband perfect absorption. The experimental results under normal incidence show that the absorption is above 90% over the wavelength range of 0.4–1.4 μm, and the measurements under angled incidence within 400–800 nm prove that the absorber is angle-insensitive and polarization-independent.
Role of color memory in successive color constancy.
Ling, Yazhu; Hurlbert, Anya
2008-06-01
We investigate color constancy for real 2D paper samples using a successive matching paradigm in which the observer memorizes a reference surface color under neutral illumination and after a temporal interval selects a matching test surface under the same or different illumination. We find significant effects of the illumination, reference surface, and their interaction on the matching error. We characterize the matching error in the absence of illumination change as the "pure color memory shift" and introduce a new index for successive color constancy that compares this shift against the matching error under changing illumination. The index also incorporates the vector direction of the matching errors in chromaticity space, unlike the traditional constancy index. With this index, we find that color constancy is nearly perfect.
Chen, Shuo; Luo, Chenggao; Wang, Hongqiang; Deng, Bin; Cheng, Yongqiang; Zhuang, Zhaowen
2018-04-26
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporal independent signals with coded apertures. However, there are still two problems in three-dimensional (3D) TCAI. Firstly, the large-scale reference-signal matrix based on meshing the 3D imaging area creates a heavy computational burden, thus leading to unsatisfactory efficiency. Secondly, it is difficult to resolve the target under low signal-to-noise ratio (SNR). In this paper, we propose a 3D imaging method based on matched filtering (MF) and a convolutional neural network (CNN), which can reduce the computational burden and achieve high-resolution imaging for low-SNR targets. For the frequency-hopping (FH) signal, the original echo is processed with MF. By extracting the processed echo in different spike pulses separately, targets in different imaging planes are reconstructed simultaneously to decompose the global computational complexity, and then are synthesized together to reconstruct the 3D target. Based on the conventional TCAI model, we deduce and build a new TCAI model based on MF. Furthermore, a CNN is designed to further improve the MF-TCAI reconstruction of low-SNR targets. The experimental results demonstrate that the MF-TCAI achieves impressive performance on imaging ability and efficiency under low SNR. Moreover, the MF-TCAI has learned to better resolve the low-SNR 3D target with the help of the CNN. In summary, the proposed 3D TCAI can achieve: (1) low-SNR high-resolution imaging by using MF; (2) efficient 3D imaging by downsizing the large-scale reference-signal matrix; and (3) intelligent imaging with CNN. Therefore, the TCAI based on MF and CNN has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.
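The matched-filter step at the heart of this method can be illustrated generically: correlating the received echo with a conjugated, time-reversed replica of the transmitted pulse compresses the signal and maximizes output SNR, letting a delay be recovered from noisy data. This is a minimal 1D sketch with an illustrative chirp waveform, not the paper's TCAI signal model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Transmitted pulse: a short linear-frequency-modulated (chirp) waveform.
n = 256
t = np.arange(n)
pulse = np.exp(1j * np.pi * 0.001 * t**2)

# Echo: the pulse delayed by 300 samples, buried in strong noise (low SNR).
delay = 300
echo = np.zeros(1024, dtype=complex)
echo[delay:delay + n] = pulse
echo += rng.standard_normal(1024) + 1j * rng.standard_normal(1024)

# Matched filter: correlate the echo with the conjugated, time-reversed pulse.
mf_out = np.abs(np.convolve(echo, np.conj(pulse[::-1]), mode="valid"))

estimated_delay = int(np.argmax(mf_out))
print(estimated_delay)  # peak lands at the true delay despite the noise
```

The coherent gain of the correlation (a factor of the pulse length) is what lets the peak stand out even when individual samples are noise-dominated.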
Matching the quasiparton distribution in a momentum subtraction scheme
NASA Astrophysics Data System (ADS)
Stewart, Iain W.; Zhao, Yong
2018-03-01
The quasiparton distribution is a spatial correlation of quarks or gluons along the z direction in a moving nucleon which enables direct lattice calculations of parton distribution functions. It can be defined with a nonperturbative renormalization in a regularization independent momentum subtraction scheme (RI/MOM), which can then be perturbatively related to the collinear parton distribution in the MS ¯ scheme. Here we carry out a direct matching from the RI/MOM scheme for the quasi-PDF to the MS ¯ PDF, determining the non-singlet quark matching coefficient at next-to-leading order in perturbation theory. We find that the RI/MOM matching coefficient is insensitive to the ultraviolet region of the convolution integral, exhibits improved perturbative convergence when converting between the quasi-PDF and PDF, and is consistent with a quasi-PDF that vanishes in the unphysical region as the proton momentum Pz→∞, unlike other schemes. This direct approach therefore has the potential to improve the accuracy for converting quasidistribution lattice calculations to collinear distributions.
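The matching described above takes the schematic form of a convolution between a perturbative coefficient and the collinear PDF. The following LaTeX sketch uses generic LaMET-style notation; the symbols and argument structure are illustrative, not the paper's exact conventions:

```latex
% Schematic matching relation (generic notation): the renormalized quasi-PDF
% equals a perturbative matching coefficient convolved with the MS-bar PDF,
% up to power corrections suppressed by the proton momentum P_z.
\begin{equation}
  \tilde{q}\left(x, P_z, \mu_R\right)
  = \int_{-1}^{1} \frac{dy}{|y|}\,
    C\!\left(\frac{x}{y}, \frac{\mu_R}{P_z}, \frac{\mu}{P_z}\right)
    q(y,\mu)
    \;+\;
    \mathcal{O}\!\left(\frac{\Lambda_{\mathrm{QCD}}^2}{P_z^2},
                       \frac{M^2}{P_z^2}\right)
\end{equation}
```

The paper's contribution is the next-to-leading-order determination of the coefficient C for the RI/MOM-to-MSbar case.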
Gao, Kai; Huang, Lianjie
2017-11-13
Conventional perfectly matched layers (PML) can be unstable for certain kinds of anisotropic media. Multi-axial PML removes such instability using nonzero damping coefficients in the directions tangential to the PML interface. While using non-zero damping profile ratios can stabilize PML, it is important to obtain the smallest possible damping profile ratios to minimize artificial reflections caused by these non-zero ratios, particularly for 3D general anisotropic media. Using the eigenvectors of the PML system matrix, we develop a straightforward and efficient numerical algorithm to determine the optimal damping profile ratios to stabilize PML in 2D and 3D general anisotropic media. Numerical examples show that our algorithm provides optimal damping profile ratios to ensure the stability of PML and complex-frequency-shifted PML for elastic-wave modeling in 2D and 3D general anisotropic media.
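The stability criterion underlying this kind of analysis, that no eigenvalue of the PML system matrix may acquire a positive real part, can be illustrated on a single Fourier mode of a damped wave equation. This is a generic toy illustration of the eigenvalue criterion, not the paper's multi-axial PML algorithm:

```python
import numpy as np

def mode_matrix(c, k, sigma):
    """System matrix of one Fourier mode of the damped wave equation
    u_tt + 2*sigma*u_t + sigma**2*u = -(c*k)**2 * u, written as a
    first-order system d/dt [u, u_t] = A @ [u, u_t]."""
    return np.array([[0.0, 1.0],
                     [-(c * k) ** 2 - sigma ** 2, -2.0 * sigma]])

# Stability criterion: every eigenvalue must satisfy Re(lambda) <= 0.
# For this mode the eigenvalues are lambda = -sigma +/- i*c*k, so the
# real part equals -sigma for any damping sigma >= 0 (stable decay).
for sigma in (0.0, 0.1, 1.0, 5.0):
    lam = np.linalg.eigvals(mode_matrix(c=1.0, k=2.0, sigma=sigma))
    print(sigma, lam.real.max())
```

In anisotropic media the analogous system matrix couples directions, and certain damping-ratio choices push an eigenvalue into the right half-plane; the paper's algorithm searches for the smallest ratios that keep all of them out.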
Li, J; Guo, L-X; Zeng, H; Han, X-B
2009-06-01
A message-passing-interface (MPI)-based parallel finite-difference time-domain (FDTD) algorithm for the electromagnetic scattering from a 1-D randomly rough sea surface is presented. The uniaxial perfectly matched layer (UPML) medium is adopted for truncation of FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different processors is illustrated for one sea surface realization, and the computation time of the parallel FDTD algorithm is dramatically reduced compared to a single-process implementation. Finally, some numerical results are shown, including the backscattering characteristics of the sea surface for different polarizations and the bistatic scattering from a sea surface at large incident angles and wind speeds.
Three-dimensional fingerprint recognition by using convolution neural network
NASA Astrophysics Data System (ADS)
Tian, Qianyu; Gao, Nan; Zhang, Zonghua
2018-01-01
With the development of science and technology and the improvement of social information, fingerprint recognition technology has become a hot research direction and been widely applied in many actual fields because of its feasibility and reliability. The traditional two-dimensional (2D) fingerprint recognition method relies on matching feature points. This method is not only time-consuming but also discards the three-dimensional (3D) information of the fingerprint, so its robustness declines seriously under fingerprint rotation, scaling, damage and other distortions. To solve these problems, 3D fingerprints have been used for recognition. Because it is a new research field, there are still lots of challenging problems in 3D fingerprint recognition. This paper presents a new 3D fingerprint recognition method using a convolutional neural network (CNN). The 2D fingerprint image and the fingerprint depth map are each fed into a CNN, their features are fused by another CNN, and classification of the fused features completes the 3D fingerprint recognition. This method not only preserves the 3D information of fingerprints, but also solves the problem of CNN input. Moreover, the recognition process is simpler than traditional feature-point matching algorithms. The 3D fingerprint recognition rate obtained with the CNN is compared with other fingerprint recognition algorithms. The experimental results show that the proposed 3D fingerprint recognition method has good recognition rate and robustness.
Iizuka, Hideki; Lefor, Alan K
2018-04-19
To determine if the Consecutive Interpreting Approach enhances medical English communication skills of students in a Japanese medical university and to assess this method based on performance and student evaluations. This is a three-phase study using a mixed-methods design, which starts with four language reproduction activities for 30 medical and 95 nursing students, followed by a quantitative analysis of perfect-match reproduction rates to assess changes over the duration of the study and qualitative error analysis of participants' language reproduction. The final stage included a scored course evaluation and free-form comments to evaluate this approach and to identify effective educational strategies to enhance medical English communication skills. Mean perfect-match reproduction rates of all participants differed significantly across the four reproduction activities (repeated measures ANOVA, p<0.0005). The overall perfect-match reproduction rates improved from 75.3% to 90.1% for nursing students and from 89.5% to 91.6% for medical students. The final achievement levels of nursing and medical students were equivalent (test of equivalence, p<0.05). Details of lexical- and syntactic-level errors were identified. The course evaluation scores were 3.74 (n=30, SD=0.59) and 3.77 (n=90, SD=0.54) for medical and nursing students, respectively. Participants' medical English communication skills are enhanced using this approach. Participants expressed positive feedback regarding this instruction method. This approach may be effective to enhance the language skills of non-native English-speaking students seeking to practice medicine in English-speaking countries.
Lefor, Alan K.
2018-01-01
Objectives To determine if the Consecutive Interpreting Approach enhances medical English communication skills of students in a Japanese medical university and to assess this method based on performance and student evaluations. Methods This is a three-phase study using a mixed-methods design, which starts with four language reproduction activities for 30 medical and 95 nursing students, followed by a quantitative analysis of perfect-match reproduction rates to assess changes over the duration of the study and qualitative error analysis of participants' language reproduction. The final stage included a scored course evaluation and free-form comments to evaluate this approach and to identify effective educational strategies to enhance medical English communication skills. Results Mean perfect-match reproduction rates of all participants differed significantly across the four reproduction activities (repeated measures ANOVA, p<0.0005). The overall perfect-match reproduction rates improved from 75.3% to 90.1% for nursing students and from 89.5% to 91.6% for medical students. The final achievement levels of nursing and medical students were equivalent (test of equivalence, p<0.05). Details of lexical- and syntactic-level errors were identified. The course evaluation scores were 3.74 (n=30, SD=0.59) and 3.77 (n=90, SD=0.54) for medical and nursing students, respectively. Conclusions Participants' medical English communication skills are enhanced using this approach. Participants expressed positive feedback regarding this instruction method. This approach may be effective to enhance the language skills of non-native English-speaking students seeking to practice medicine in English-speaking countries. PMID:29677693
Hoppe, Elisabeth; Körzdörfer, Gregor; Würfl, Tobias; Wetzl, Jens; Lugauer, Felix; Pfeuffer, Josef; Maier, Andreas
2017-01-01
The purpose of this work is to evaluate methods from deep learning for application to Magnetic Resonance Fingerprinting (MRF). MRF is a recently proposed measurement technique for generating quantitative parameter maps. In MRF a non-steady state signal is generated by a pseudo-random excitation pattern. A comparison of the measured signal in each voxel with the physical model yields quantitative parameter maps. Currently, the comparison is done by matching a dictionary of simulated signals to the acquired signals. To accelerate the computation of quantitative maps we train a Convolutional Neural Network (CNN) on simulated dictionary data. As a proof of principle we show that the neural network implicitly encodes the dictionary and can replace the matching process.
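The dictionary-matching baseline that the CNN replaces can be sketched as a maximum normalized inner-product search over simulated signals. The exponential toy signals below are illustrative stand-ins; real MRF dictionaries come from Bloch simulations of the pseudo-random excitation pattern:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "dictionary": one simulated signal evolution per candidate parameter
# pair (the parameter values here are arbitrary placeholders).
t = np.linspace(0.0, 1.0, 64)
params = [(a, b) for a in (0.5, 1.0, 2.0) for b in (5.0, 10.0, 20.0)]
dictionary = np.array([np.exp(-a * t) * np.cos(b * t) for a, b in params])

# Normalize entries so matching reduces to a maximum inner product.
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

# A noisy measured voxel signal generated from one known entry.
true_idx = 4
signal = dictionary[true_idx] + 0.05 * rng.standard_normal(t.size)

# Dictionary matching: pick the entry with the largest normalized inner
# product; its parameters become the voxel's quantitative estimate.
scores = dictionary @ (signal / np.linalg.norm(signal))
matched = int(np.argmax(scores))
print(params[matched])
```

This exhaustive search is what becomes slow for fine parameter grids; a network trained on the dictionary can map the signal to parameters in a single forward pass.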
Variable-pulse-shape pulsed-power accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoltzfus, Brian S.; Austin, Kevin; Hutsel, Brian Thomas
A variable-pulse-shape pulsed-power accelerator is driven by a large number of independent LC drive circuits. Each LC circuit drives one or more coaxial transmission lines that deliver the circuit's output power to several water-insulated radial transmission lines that are connected in parallel at small radius by a water-insulated post-hole convolute. The accelerator can be impedance matched throughout. The coaxial transmission lines are sufficiently long to transit-time isolate the LC drive circuits from the water-insulated transmission lines, which allows each LC drive circuit to be operated without being affected by the other circuits. This enables the creation of any power pulse that can be mathematically described as a time-shifted linear combination of the pulses of the individual LC drive circuits. Therefore, the output power of the convolute can provide a variable pulse shape to a load that can be used for magnetically driven, quasi-isentropic compression experiments and other applications.
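The claim that the convolute output can realize any time-shifted linear combination of the individual circuit pulses is a statement about linear superposition, and is easy to sketch numerically. The Gaussian pulse shape, trigger times and weights below are illustrative, not the accelerator's actual waveforms:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)

def circuit_pulse(t, t0, width=0.8):
    """Illustrative output pulse of a single LC drive circuit triggered at
    time t0 (a Gaussian stand-in for the real discharge waveform)."""
    return np.exp(-((t - t0) / width) ** 2)

# Shaped output: a time-shifted linear combination of identical circuit
# pulses. Firing more circuits at progressively later times builds a ramp.
triggers = [2.0, 3.0, 4.0, 5.0]
weights = [1.0, 2.0, 3.0, 4.0]   # number of circuits fired at each time
total = sum(w * circuit_pulse(t, t0) for w, t0 in zip(weights, triggers))

# By linearity, the peak and shape are controlled entirely by the choice
# of trigger times and weights, which is the accelerator's design point.
print(round(float(total.max()), 2))
```

Transit-time isolation is what makes the superposition valid in hardware: each circuit sees a matched line rather than the other circuits.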
NASA Technical Reports Server (NTRS)
Nixon, Douglas D.
2009-01-01
Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
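The kind of integral involved can be illustrated on the scalar constant-coefficient case: the forced response over one sample period is a convolution of the plant's exponential kernel with a command that varies across the period. This is a generic sampled-data sketch (trapezoidal quadrature, one of many options), not the paper's dual-kernel formulation or its three derived algorithms:

```python
import numpy as np

# Forced response of a scalar plant x' = a*x + b*u over one sample period T:
#     x_forced(T) = integral_0^T exp(a*(T - tau)) * b * u(tau) dtau.
# In D/C control, u(tau) is generated by a continuous-time command function,
# so the integral must be evaluated numerically each period.
a, b, T = -2.0, 1.0, 0.5

def forced_response(u, n=2001):
    tau = np.linspace(0.0, T, n)
    f = np.exp(a * (T - tau)) * b * u(tau)
    # Composite trapezoidal rule over the sample period.
    return float(np.sum((f[:-1] + f[1:]) * np.diff(tau)) / 2.0)

# Constant command: closed form is b*(exp(a*T) - 1)/a, a direct check.
num = forced_response(lambda tau: np.ones_like(tau))
exact = b * (np.exp(a * T) - 1.0) / a
print(abs(num - exact))   # small quadrature error

# A command that ramps across the sample period is handled identically.
ramp = forced_response(lambda tau: tau / T)
```

Matching such numerical evaluations against closed-form cases, as done here for the constant command, mirrors the validation strategy described in the abstract.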
Multi-MA reflex triode research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swanekamp, Stephen Brian; Commisso, Robert J.; Weber, Bruce V.
The Reflex Triode can efficiently produce and transmit medium energy (10-100 keV) x-rays. Perfect reflexing through thin converter can increase transmission of 10-100 keV x-rays. Gamble II experiment at 1 MV, 1 MA, 60 ns - maximum dose with 25 micron tantalum. Electron orbits depend on the foil thickness. Electron orbits from LSP used to calculate path length inside tantalum. A simple formula predicts the optimum foil thickness for reflexing converters. The I(V) characteristics of the diode can be understood using simple models. Critical current dominates high voltage triodes, bipolar current is more important at low voltage. Higher current (2.5 MA), lower voltage (250 kV) triodes are being tested on Saturn at Sandia. Small, precise, anode-cathode gaps enable low impedance operation. Sample Saturn results at 2.5 MA, 250 kV. Saturn dose rate could be about two times greater. Cylindrical triode may improve x-ray transmission. Cylindrical triode design will be tested at 1/2 scale on Gamble II. For higher current on Saturn, could use two cylindrical triodes in parallel. 3 triodes in parallel require positive polarity operation. 'Triodes in series' would improve matching low impedance triodes to generator.
Conclusions of this presentation are: (1) Physics of reflex triodes from Gamble II experiments (1 MA, 1 MV) - (a) Converter thickness 1/20 of CSDA range optimizes x-ray dose; (b) Simple model based on electron orbits predicts optimum thickness from LSP/ITS calculations and experiment; (c) I(V) analysis: beam dynamics different between 1 MV and 250 kV; (2) Multi-MA triode experiments on Saturn (2.5 MA, 250 kV) - (a) Polarity inversion in vacuum, (b) No-convolute configuration, accurate gap settings, (c) About half of current produces useful x-rays, (d) Cylindrical triode one option to increase x-ray transmission; and (3) Potential to increase Saturn current toward 10 MA, maintaining voltage and outer diameter - (a) 2 (or 3) cylindrical triodes in parallel, (b) Triodes in series to improve matching, (c) These concepts will be tested first on Gamble II.
Squeezed-light generation in a nonlinear planar waveguide with a periodic corrugation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perina, Jan Jr.; Haderka, Ondrej; Sibilia, Concita
Two-mode nonlinear interaction (second-harmonic and second-subharmonic generation) in a planar waveguide with a small periodic corrugation at the surface is studied. Scattering of the interacting fields on the corrugation leads to constructive interference that enhances the nonlinear process provided that all the interactions are phase matched. Conditions for the overall phase matching are found. Compared with a perfectly quasi-phase-matched waveguide, better values of squeezing as well as higher intensities are reached under these conditions. A procedure for finding optimum parameter values for squeezed-light generation is described.
Helicopter time-domain electromagnetic numerical simulation based on Leapfrog ADI-FDTD
NASA Astrophysics Data System (ADS)
Guan, S.; Ji, Y.; Li, D.; Wu, Y.; Wang, A.
2017-12-01
We present a three-dimensional (3D) leapfrog Alternating Direction Implicit Finite-Difference Time-Domain (leapfrog ADI-FDTD) method for the simulation of helicopter time-domain electromagnetic (HTEM) detection. This method differs from both the traditional explicit FDTD and the ADI-FDTD. Compared with explicit FDTD, the leapfrog ADI-FDTD algorithm is no longer limited by the Courant-Friedrichs-Lewy (CFL) condition, so a longer time step can be used. Compared with ADI-FDTD, we reduce the number of equations from 12 to 6, so the leapfrog ADI-FDTD method is easier to apply in general simulations. First, we determine initial conditions adopted from the existing method presented by Wang and Tripp (1993). Second, we derive a new finite-difference form of the Maxwell equations using the leapfrog ADI-FDTD method, which eliminates the sub-time step while retaining unconditional stability. Third, we add the convolutional perfectly matched layer (CPML) absorbing boundary condition to the leapfrog ADI-FDTD simulation and study the absorbing effect of different parameters; since different absorbing parameters affect the absorbing ability, we find suitable parameters after many numerical experiments. Fourth, we compare the response with a 1-D numerical result for a homogeneous half-space to verify the correctness of our algorithm. For a model containing 107*107*53 grid points with a conductivity of 0.05 S/m, the results show that leapfrog ADI-FDTD needs less simulation time and computer storage space than ADI-FDTD: the computation time decreases nearly fourfold and memory occupation decreases by about 32.53%. Thus, this algorithm is more efficient than the conventional ADI-FDTD method for HTEM detection, and is more precise than explicit FDTD at late times.
A time-space domain stereo finite difference method for 3D scalar wave propagation
NASA Astrophysics Data System (ADS)
Chen, Yushu; Yang, Guangwen; Ma, Xiao; He, Conghui; Song, Guojie
2016-11-01
The time-space domain finite difference methods reduce numerical dispersion effectively by minimizing the error in the joint time-space domain. However, their interpolating coefficients are related with the Courant numbers, leading to significantly extra time costs for loading the coefficients consecutively according to velocity in heterogeneous models. In the present study, we develop a time-space domain stereo finite difference (TSSFD) method for 3D scalar wave equation. The method propagates both the displacements and their gradients simultaneously to keep more information of the wavefields, and minimizes the maximum phase velocity error directly using constant interpolation coefficients for different Courant numbers. We obtain the optimal constant coefficients by combining the truncated Taylor series approximation and the time-space domain optimization, and adjust the coefficients to improve the stability condition. Subsequent investigation shows that the TSSFD can suppress numerical dispersion effectively with high computational efficiency. The maximum phase velocity error of the TSSFD is just 3.09% even with only 2 sampling points per minimum wavelength when the Courant number is 0.4. Numerical experiments show that to generate wavefields with no visible numerical dispersion, the computational efficiency of the TSSFD is 576.9%, 193.5%, 699.0%, and 191.6% of those of the 4th-order and 8th-order Lax-Wendroff correction (LWC) method, the 4th-order staggered grid method (SG), and the 8th-order optimal finite difference method (OFD), respectively. Meanwhile, the TSSFD is compatible with the unsplit convolutional perfectly matched layer (CPML) boundary condition for absorbing artificial boundaries. The efficiency and capability to handle complex velocity models make it an attractive tool in imaging methods such as acoustic reverse time migration (RTM).
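The quoted 3.09% error at 2 points per wavelength can be put in context with the standard dispersion analysis of a conventional scheme. The sketch below evaluates the exact dispersion relation of the plain second-order 1D scheme at the same Courant number 0.4; it is a generic reference analysis, not the TSSFD stencil:

```python
import numpy as np

# Numerical dispersion of the standard second-order 1D scheme:
#     sin(w*dt/2) = r * sin(k*dx/2),   r = c*dt/dx (Courant number),
# giving a normalized numerical phase velocity of
#     v_num/c = 2/(r*k*dx) * arcsin(r * sin(k*dx/2)).
r = 0.4

def phase_velocity_ratio(points_per_wavelength):
    kdx = 2.0 * np.pi / points_per_wavelength
    return 2.0 / (r * kdx) * np.arcsin(r * np.sin(kdx / 2.0))

# Phase velocity error (%) versus spatial sampling density.
for ppw in (2, 4, 8, 16):
    err = abs(phase_velocity_ratio(ppw) - 1.0) * 100.0
    print(ppw, round(err, 3))
```

At 2 points per wavelength this conventional scheme is off by tens of percent, which is the regime where an optimized time-space domain stencil like the TSSFD pays off.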
Perfect absorption of low-frequency sound waves by critically coupled subwavelength resonant system
NASA Astrophysics Data System (ADS)
Long, Houyou; Cheng, Ying; Tao, Jiancheng; Liu, Xiaojun
2017-01-01
The perfect absorption (PA) of low-frequency audible sound waves has been achieved by critically coupling the inherent loss factor of a system to its inherent leakage factor; the system is constructed by attaching a deep-subwavelength lossy resonant plate (LRP) closely to a rigid backing wall. We verify this by using the graphical method in the complex frequency plane. By coupling the LRP to an air cavity in front of the rigid wall, highly efficient (>80%) low-frequency broadband absorption is obtained from 99.1 Hz to 294.8 Hz. Here, the thickness of the LRP is only 1/13.5 of the relevant wavelength at 294.8 Hz. The impedance analyses further demonstrate that, at PA, the impedances of the system and the surrounding background medium are perfectly matched.
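The critical-coupling condition can be made concrete with the standard one-port temporal coupled-mode model, in which absorption reaches unity on resonance exactly when the loss rate equals the leakage rate. This is the textbook model, not the paper's LRP parameters; the rates and frequency below are illustrative:

```python
import numpy as np

def absorption(omega, omega0, gamma_loss, gamma_leak):
    """One-port temporal coupled-mode theory: absorption spectrum of a
    single resonator with inherent loss rate gamma_loss and leakage
    (radiation) rate gamma_leak backed by a rigid wall."""
    return (4.0 * gamma_loss * gamma_leak
            / ((omega - omega0) ** 2 + (gamma_loss + gamma_leak) ** 2))

w0 = 2.0 * np.pi * 200.0   # illustrative resonance, rad/s

# Critical coupling (loss = leakage) gives perfect absorption on resonance;
# any mismatch between the two rates reduces the peak below unity.
print(absorption(w0, w0, 1.0, 1.0))   # 1.0  (critically coupled)
print(absorption(w0, w0, 1.0, 3.0))   # 0.75 (over-coupled)
```

Matching the two rates is equivalent to the impedance match with the background medium that the abstract's analysis demonstrates.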
Perfect absorption in 1D photonic crystal nanobeam embedded with graphene/Al2O3 multilayer stack
NASA Astrophysics Data System (ADS)
Liu, Hanqing; Zha, Song; Liu, Peiguo; Zhou, Xiaotian; Bian, Li-an
2018-05-01
We extend the concept of critical coupling to graphene-based chip-integrated applications and numerically demonstrate that a perfect absorption (PA) absorber in the near-infrared can be obtained by a graphene/Al2O3 multilayer stack (GAMS) critically coupled with a resonant cavity in a 1D photonic crystal nanobeam (PCN). The key point is dynamically matching the coupling rate of the incident light wave to the cavity with the absorbing rate of the GAMS via electrically modulating the chemical potential of graphene. Simulation results show that the radius of the GAMS as well as the thickness of the Al2O3 layer are closely connected with the performance of perfect absorption. These results may provide potential applications in high-density integrated optical devices, photoelectric transducers, and laser pulse limiters.
QUASI-PML FOR WAVES IN CYLINDRICAL COORDINATES. (R825225)
We prove that the straightforward extension of Berenger's original perfectly matched layer (PML) is not reflectionless at a cylindrical interface in the continuum limit. A quasi-PML is developed as an absorbing boundary condition (ABC) for the finite-difference time-domain method...
SU-F-E-20: A Mathematical Model of Linac Jaw Calibration Integrated with Collimator Walkout
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Y; Corns, R; Huang, V
2016-06-15
Purpose: Accurate jaw calibration is possible, but it does not necessarily achieve good junctions because of collimator rotation walkout. We developed a mathematical model seeking to pick an origin for calibration that minimizes the collimator walkout effect. Methods: We use radiopaque markers aligned with the crosshair on the EPID to determine the collimator walkout at collimator angles 0°, 90° and 270°. We can accurately calibrate jaws to any arbitrary origin near the radiation field centre. While the absolute position of an origin moves with the collimator walkout, its relative location to the crosshair is an invariant. We studied two approaches to select an optimal origin. One approach seeks to bring all three origin locations (0°–90°–270°) as close as possible by minimizing the perimeter of the triangle formed by these points. The other approach focuses on the gap for 0°–90° junctions. Results: Our perimeter cost function has two variables and non-linear behaviour. Generally, it does not have a zero-perimeter-length solution, which would lead to perfect jaw matches. The zero solution can only be achieved if the collimator rotates about a single fixed axis. In the second approach, we can always get perfect 0°–0° and 0°–90° junctions, because we ignore the 0°–270° situation. For our TrueBeams, both techniques for selecting an origin improved junction dose inhomogeneities to less than ±6%. Conclusion: Our model considers the general jaw matching with collimator rotations and proposes two potential solutions. One solution optimizes the junction gaps by considering all three collimator angles while the other only considers 0°–90°. The first solution will not give perfect matching, but can be clinically acceptable with minimized collimator walkout effect, while the second can have perfect junctions at the expense of the 0°–270° junctions. Different clinics might choose between these two methods based on their clinical practices.
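The first approach, minimizing the perimeter of the triangle formed by the origin's three absolute positions, can be sketched with a toy walkout model. The rotation-plus-offset form and the millimetre offsets below are hypothetical stand-ins for measured walkout data, and the grid search stands in for whatever optimizer the authors used:

```python
import numpy as np

# Toy walkout model: at collimator angle theta, a point p defined relative
# to the crosshair lands at R(theta) @ p + d(theta), where d(theta) is the
# measured walkout of the rotation axis at that angle.
def rot(theta_deg):
    t = np.deg2rad(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

Rs = {a: rot(a) for a in (0, 90, 270)}
walkout = {0: np.array([0.0, 0.0]),        # mm, hypothetical measurements
           90: np.array([0.4, -0.2]),
           270: np.array([-0.3, 0.5])}

def perimeter(p):
    """Perimeter of the triangle of the origin's three absolute positions."""
    pts = [Rs[a] @ p + walkout[a] for a in (0, 90, 270)]
    return sum(np.linalg.norm(pts[i] - pts[(i + 1) % 3]) for i in range(3))

# Approach 1: pick the calibration origin p (crosshair-relative, invariant)
# that brings the three positions as close together as possible.
xs = np.linspace(-1.0, 1.0, 201)
best, bx, by = min((perimeter(np.array([x, y])), x, y) for x in xs for y in xs)
print(round(best, 3), round(bx, 2), round(by, 2))
```

As the abstract notes, the minimum is generally nonzero: the perimeter only vanishes if the collimator rotates about a single fixed axis, i.e. if all the offsets are consistent with one rotation centre.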
THE PERFECTLY MATCHED LAYER (PML) FOR ACOUSTIC WAVES IN ABSORPTIVE MEDIA. (R825225)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
PERFECTLY MATCHED LAYERS FOR ELASTIC WAVES IN CYLINDRICAL AND SPHERICAL COORDINATES. (R825225)
A novel unsplit perfectly matched layer for the second-order acoustic wave equation.
Ma, Youneng; Yu, Jinhua; Wang, Yuanyuan
2014-08-01
When solving acoustic field equations by using numerical approximation technique, absorbing boundary conditions (ABCs) are widely used to truncate the simulation to a finite space. The perfectly matched layer (PML) technique has exhibited excellent absorbing efficiency as an ABC for the acoustic wave equation formulated as a first-order system. However, as the PML was originally designed for the first-order equation system, it cannot be applied to the second-order equation system directly. In this article, we aim to extend the unsplit PML to the second-order equation system. We developed an efficient unsplit implementation of PML for the second-order acoustic wave equation based on an auxiliary-differential-equation (ADE) scheme. The proposed method can benefit the use of PML in simulations based on second-order equations. Compared with the existing PMLs, it has simpler implementation and requires less extra storage. Numerical results from finite-difference time-domain models are provided to illustrate the validity of the approach. Copyright © 2014 Elsevier B.V. All rights reserved.
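What an absorbing layer must accomplish for a second-order formulation can be demonstrated with a minimal 1D experiment. The damping term below is a simplified "sponge" layer added directly to the second-order wave equation, not the ADE-based unsplit PML of the paper; all grid parameters are illustrative:

```python
import numpy as np

# 1D second-order wave equation with an absorbing layer at both ends:
#     u_tt + 2*sigma(x)*u_t = c**2 * u_xx.
nx, c, dx, dt = 600, 1.0, 1.0, 0.5   # Courant number c*dt/dx = 0.5
x = np.arange(nx) * dx

# Quadratic damping ramp over the outer 100 cells on each side.
L, smax = 100, 0.5
sigma = np.zeros(nx)
sigma[-L:] = smax * (np.arange(L) / L) ** 2
sigma[:L] = smax * ((L - np.arange(L)) / L) ** 2

def max_amplitude(sigma, steps=1600):
    """Propagate a rightward Gaussian pulse; return max |u| at the end."""
    g = lambda s: np.exp(-((s - 200.0) / 10.0) ** 2)
    u_prev, u = g(x), g(x - c * dt)   # two-level travelling-wave start
    for _ in range(steps):
        lap = np.zeros(nx)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        # Implicit treatment of the damping term keeps the update stable.
        u_next = (2.0 * u - (1.0 - sigma * dt) * u_prev
                  + (c * dt / dx) ** 2 * lap) / (1.0 + sigma * dt)
        u_next[0] = u_next[-1] = 0.0  # rigid (fully reflecting) outer walls
        u_prev, u = u, u_next
    return float(np.abs(u).max())

residual = max_amplitude(sigma)          # pulse absorbed in the layers
reflected = max_amplitude(np.zeros(nx))  # no layer: pulse bounces back
print(residual, reflected)
```

A plain sponge like this needs a thick, gently ramped layer to keep ramp reflections small; the appeal of a true unsplit CFS-PML is achieving far lower reflection with far fewer cells.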
Perfectly matched layers in a divergence preserving ADI scheme for electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraus, C.; ETH Zurich, Chair of Computational Science, 8092 Zuerich; Adelmann, A., E-mail: andreas.adelmann@psi.ch
For numerical simulations of highly relativistic and transversely accelerated charged particles including radiation, fast algorithms are needed. While the radiation in particle accelerators has wavelengths on the order of 100 μm, the computational domain has dimensions roughly five orders of magnitude larger, resulting in very large mesh sizes. The particles are confined to a small area of this domain only. To resolve the smallest scales close to the particles, subgrids are envisioned. For reasons of stability, the alternating direction implicit (ADI) scheme by Smithe et al. [D.N. Smithe, J.R. Cary, J.A. Carlsson, Divergence preservation in the ADI algorithms for electromagnetics, J. Comput. Phys. 228 (2009) 7289-7299] for the Maxwell equations has been adopted. At the boundary of the domain, absorbing boundary conditions have to be employed to prevent reflection of the radiation. In this paper we show how the divergence preserving ADI scheme has to be formulated in perfectly matched layers (PML) and compare the performance in several scenarios.
Quantum Experiments and Graphs: Multiparty States as Coherent Superpositions of Perfect Matchings.
Krenn, Mario; Gu, Xuemei; Zeilinger, Anton
2017-12-15
We show a surprising link between experimental setups to realize high-dimensional multipartite quantum states and graph theory. In these setups, the paths of photons are identified such that the photon-source information is never created. We find that each of these setups corresponds to an undirected graph, and every undirected graph corresponds to an experimental setup. Every term in the emerging quantum superposition corresponds to a perfect matching in the graph. Calculating the final quantum state is in the #P-complete complexity class, thus it cannot be done efficiently. To strengthen the link further, theorems from graph theory-such as Hall's marriage problem-are rephrased in the language of pair creation in quantum experiments. We show explicitly how this link allows one to answer questions about quantum experiments (such as which classes of entangled states can be created) with graph theoretical methods, and how to potentially simulate properties of graphs and networks with quantum experiments (such as critical exponents and phase transitions).
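The graph-theoretic core of the result, that each term of the superposition corresponds to a perfect matching, can be illustrated with a brute-force perfect-matching counter (a hypothetical helper written for this note; the exhaustive recursion mirrors why general counting is #P-complete):

```python
from itertools import combinations

def count_perfect_matchings(n, edges):
    """Count perfect matchings of an undirected graph on vertices 0..n-1.

    Exhaustive recursion: match the smallest unmatched vertex with each
    of its unmatched neighbours.  Exponential time in general, which
    reflects that counting perfect matchings is #P-complete.
    """
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def rec(unmatched):
        if not unmatched:
            return 1
        v = min(unmatched)
        rest = unmatched - {v}
        return sum(rec(rest - {w}) for w in adj[v] & rest)

    return rec(frozenset(range(n)))

# K4 (complete graph on 4 vertices) has 3 perfect matchings,
# so a corresponding 4-path setup would produce 3 superposition terms.
k4_edges = list(combinations(range(4), 2))
n_terms = count_perfect_matchings(4, k4_edges)
```

In the paper's dictionary, each vertex is a photon path and each edge a pair-creation crystal; the count above is the number of terms in the emerging quantum state.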
Optimization of single-base-pair mismatch discrimination in oligonucleotide microarrays
NASA Technical Reports Server (NTRS)
Urakawa, Hidetoshi; El Fantroussi, Said; Smidt, Hauke; Smoot, James C.; Tribou, Erik H.; Kelly, John J.; Noble, Peter A.; Stahl, David A.
2003-01-01
The discrimination between perfect-match and single-base-pair-mismatched nucleic acid duplexes was investigated by using oligonucleotide DNA microarrays and nonequilibrium dissociation rates (melting profiles). DNA and RNA versions of two synthetic targets corresponding to the 16S rRNA sequences of Staphylococcus epidermidis (38 nucleotides) and Nitrosomonas eutropha (39 nucleotides) were hybridized to perfect-match probes (18-mer and 19-mer) and to a set of probes having all possible single-base-pair mismatches. The melting profiles of all probe-target duplexes were determined in parallel by using an imposed temperature step gradient. We derived an optimum wash temperature for each probe and target by using a simple formula to calculate a discrimination index for each temperature of the step gradient. This optimum corresponded to the output of an independent analysis using a customized neural network program. These results together provide an experimental and analytical framework for optimizing mismatch discrimination among all probes on a DNA microarray.
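The optimal-wash-temperature selection can be sketched with a toy version of such a discrimination index. The sigmoid melting profiles and the index formula used here (retained PM signal minus retained MM signal) are assumed stand-ins, not the paper's exact data or formula:

```python
import numpy as np

# Hypothetical dissociation (melting) profiles: fraction of signal still
# bound at each wash temperature for a perfect-match (PM) and a
# single-mismatch (MM) duplex.  MM duplexes melt at lower temperature.
temps = np.arange(20, 75, 5)                     # wash temperatures, deg C
pm = 1.0 / (1.0 + np.exp((temps - 60) / 4.0))    # PM melts near 60 C
mm = 1.0 / (1.0 + np.exp((temps - 48) / 4.0))    # MM melts near 48 C

# Discrimination index: difference between retained PM and MM signal.
# The wash temperature maximizing it best separates PM from MM duplexes.
d = pm - mm
optimal_temp = int(temps[np.argmax(d)])
```

The optimum lands between the two melting midpoints, which is the qualitative behavior the step-gradient analysis exploits.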
Alloyed surfaces: New substrates for graphene growth
NASA Astrophysics Data System (ADS)
Tresca, C.; Verbitskiy, N. I.; Fedorov, A.; Grüneis, A.; Profeta, G.
2017-11-01
We report a systematic ab-initio density functional theory investigation of the Ni(111) surface alloyed with elements of group IV (Si, Ge and Sn), demonstrating the possibility of using it to grow high-quality graphene. The Ni(111) surface represents an ideal substrate for graphene, due to its catalytic properties and perfect matching with the graphene lattice constant. However, the Dirac bands of graphene grown on Ni(111) are completely destroyed by the strong hybridization between carbon pz and Ni d orbitals. Group IV atoms, namely Si, Ge and Sn, once deposited on the Ni(111) surface, form an ordered alloyed surface with √3 × √3 R30° reconstruction. We demonstrate that, at variance with the pure Ni(111) surface, alloyed surfaces effectively decouple graphene from the substrate: it remains unstrained due to the nearly perfect lattice matching and preserves linear Dirac bands without strong hybridization with the Ni d states. The proposed surfaces can be prepared before graphene growth, without resorting to post-growth processes that necessarily alter the electronic and structural properties of graphene.
Perfectly Matched Layer for Linearized Euler Equations in Open and Ducted Domains
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Auriault, Laurent; Cambuli, Francesco
1998-01-01
Recently, the perfectly matched layer (PML) absorbing boundary condition has found widespread application. The idea was first introduced by Berenger for electromagnetic wave computations. In this paper, it is shown that the PML equations for the linearized Euler equations support unstable solutions when the mean flow has a component normal to the layer. To suppress such unstable solutions, so as to render the PML concept useful for this class of problems, it is proposed that artificial selective damping terms be added to the discretized PML equations. It is demonstrated that with a proper choice of artificial mesh Reynolds number, the PML equations can be made stable. Numerical examples are provided to illustrate that the stabilized PML performs well as an absorbing boundary condition. In a ducted environment, the wave modes are dispersive. It will be shown that the group velocity and phase velocity of these modes can have opposite signs. As a result, in a confined environment, PML may not be suitable as an absorbing boundary condition.
Three-dimensional templating arthroplasty of the humeral head.
Cho, Sung Won; Jharia, Trambak K; Moon, Young Lae; Sim, Sung Woo; Shin, Dong Sun; Bigliani, Louis U
2013-10-01
No anatomical study had been conducted on an Asian population to design a humeral head prosthesis for that population. This study was done to evaluate the accuracy of commercially available humeral head prosthetic designs in replicating the humeral head anatomy. CT scan data of 48 patients were taken and their 3D CAD models were generated. Then, the humeral head prosthetic design of a commercially available BF shoulder system (Zimmer) was used for templating shoulder arthroplasty, and the humeral head size giving the best fit was assessed. These data were compared with the available data in the literature. All the humeral heads were perfectly matched by one of the available sizes. The average head size was 48.5 mm and the average head thickness was 23.5 mm. The results matched reasonably well with the available data in the literature. The humeral head anatomy can be recreated reasonably well by the commercially available humeral head prosthetic designs and sizes. Their dimensions are similar to those in the published literature.
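The size-selection step of templating reduces to picking the closest available head diameter to the measured one. A minimal sketch, with an illustrative size list that is not the actual implant catalogue:

```python
# Hypothetical list of available prosthetic head diameters, in mm.
AVAILABLE_SIZES = [40, 42, 44, 46, 48, 50, 52, 54]

def best_fit(measured_mm, sizes=AVAILABLE_SIZES):
    """Return the catalogue size closest to the measured head diameter."""
    return min(sizes, key=lambda s: abs(s - measured_mm))

# The reported average head size of 48.5 mm would map to a 48 mm head
# under this toy catalogue.
chosen = best_fit(48.5)
```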
Automatic Organ Segmentation for CT Scans Based on Super-Pixel and Convolutional Neural Networks.
Liu, Xiaoming; Guo, Shuxu; Yang, Bingtao; Ma, Shuzhi; Zhang, Huimao; Li, Jing; Sun, Changjian; Jin, Lanyi; Li, Xueyan; Yang, Qi; Fu, Yu
2018-04-20
Accurate segmentation of a specific organ from computed tomography (CT) scans is a basic and crucial task for accurate diagnosis and treatment. To avoid time-consuming manual optimization and to help physicians distinguish diseases, an automatic organ segmentation framework is presented. The framework utilizes convolutional neural networks (CNN) to classify pixels. To reduce the redundant inputs, simple linear iterative clustering (SLIC) super-pixels and a support vector machine (SVM) classifier are introduced. To establish a precise organ boundary at the one-pixel level, the pixels are classified step by step. First, SLIC is used to cut an image into grids and extract their digital signatures. Next, each signature is classified by the SVM, and rough edges are acquired. Finally, a precise boundary is obtained by the CNN, which is based on patches around each pixel point. The framework is applied to abdominal CT scans of livers and high-resolution computed tomography (HRCT) scans of lungs. The experimental CT scans are derived from two public datasets (Sliver07 and a Chinese local dataset). Experimental results show that the proposed method can precisely and efficiently detect the organs. The method consumes 38 s/slice for liver segmentation. The Dice coefficient of the liver segmentation results reaches 97.43%; for lung segmentation, the Dice coefficient is 97.93%. This finding demonstrates that the proposed framework is a favorable method for lung segmentation of HRCT scans.
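The Dice coefficient used to score these segmentations is straightforward to compute; a minimal sketch with toy masks (the masks are invented here, not the paper's data):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

# Toy example: a 6x6 square "organ", with the prediction shifted by
# one pixel, so the overlap is imperfect.
truth = np.zeros((10, 10), dtype=bool)
truth[2:8, 2:8] = True
pred = np.zeros((10, 10), dtype=bool)
pred[2:8, 3:9] = True
score = dice_coefficient(pred, truth)
```

A score of 1.0 means perfect agreement; the paper's 97.43% liver result corresponds to a near-total overlap.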
Lakhani, Paras
2017-08-01
The goal of this study is to evaluate the efficacy of deep convolutional neural networks (DCNNs) in differentiating subtle, intermediate, and more obvious image differences in radiography. Three datasets were created, covering presence/absence of the endotracheal (ET) tube (n = 300), low/normal position of the ET tube (n = 300), and chest/abdominal radiographs (n = 120). The datasets were split into training, validation, and test sets. Both untrained and pre-trained deep neural networks were employed, including AlexNet and GoogLeNet classifiers, using the Caffe framework. Data augmentation was performed for the presence/absence and low/normal ET tube datasets. Receiver operating characteristic (ROC) curves, areas under the curve (AUC), and 95% confidence intervals were calculated. Statistical differences between the AUCs were determined using a non-parametric approach. The pre-trained AlexNet and GoogLeNet classifiers had perfect accuracy (AUC 1.00) in differentiating chest vs. abdominal radiographs, using only 45 training cases. For the more difficult datasets, including presence/absence and low/normal position of the endotracheal tube, more training cases, pre-trained networks, and data-augmentation approaches helped to increase accuracy. The best-performing network for classifying presence vs. absence of an ET tube was still very accurate, with an AUC of 0.99. However, for the most difficult dataset, low vs. normal position of the endotracheal tube, DCNNs did not perform as well, but still achieved a reasonable AUC of 0.81.
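The AUC figures quoted above can be computed directly from classifier scores via the rank-sum identity; a minimal sketch with invented scores (not the study's data):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative one."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# A perfectly separating classifier (as for chest vs. abdomen) gives AUC 1.00.
perfect = roc_auc([0.9, 0.8, 0.7, 0.2, 0.1], [1, 1, 1, 0, 0])
```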
Who discovered the sylvian fissure?
Collice, Massimo; Collice, Rosa; Riva, Alessandro
2008-10-01
Cerebral convolutions were unknown until the 17th century. A constant sulcus was not recognized until the mid-1600s; it was named "the fissure of Sylvius," after the person who had always been considered its discoverer. It is commonly asserted that the first description of the lateral fissure was made by Caspar Bartholin, who attributed its discovery to Sylvius. However, this was not actually the case, as Caspar Bartholin died in 1629, whereas Sylvius started studying medicine in 1632. The description could have been made either by Caspar Bartholin's son Thomas or by Sylvius himself. Irrespective of the description's author, the key to the history of the lateral fissure is that it was first identified by Fabrici d'Acquapendente in 1600, 40 years before Sylvius' description. In one of the 300 colored plates (Tabulae Pictae) by Fabrici, the lateral fissure is perfectly depicted, as are the temporal convolutions. Therefore, even if it was an accidental discovery, Fabrici should be the one noted as having discovered the fissure. This article ends with a short history of the plates. They were painted in oil on paper and were intended to further a great work, the Theatrum Totius Animalis Fabricae, which was begun in 1591 and never completed or published. Only the colored illustrations of this project remain. These plates were forgotten for more than 200 years, until they were rediscovered by Giuseppe Sterzi in 1909. They are among the best examples of anatomic iconography in terms of innovation, accuracy, and artistic accomplishment.
Evaluation of Deep Learning Based Stereo Matching Methods: from Ground to Aerial Images
NASA Astrophysics Data System (ADS)
Liu, J.; Ji, S.; Zhang, C.; Qin, Z.
2018-05-01
Dense stereo matching has been extensively studied in photogrammetry and computer vision. In this paper we evaluate the application of deep learning based stereo methods, which emerged in 2016 and spread rapidly, to aerial stereo pairs rather than the ground images commonly used in the computer vision community. Two popular methods are evaluated. One learns the matching cost with a convolutional neural network (known as MC-CNN); the other produces a disparity map in an end-to-end manner by utilizing both geometry and context (known as GC-Net). First, we evaluate the performance of the deep learning based methods on aerial stereo images by direct model reuse. The models pre-trained on the KITTI 2012, KITTI 2015 and Driving datasets are directly applied to three aerial datasets. We also give the results of direct training on the target aerial datasets. Second, the deep learning based methods are compared to the classic stereo matching method, Semi-Global Matching (SGM), and a photogrammetric software package, SURE, on the same aerial datasets. Third, a transfer learning strategy is introduced to aerial image matching, based on the assumption that a few target samples are available for model fine-tuning. The experiments showed that the conventional methods and the deep learning based methods performed similarly, and that the latter have greater potential to be explored.
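For orientation, the matching-cost step that MC-CNN learns can be contrasted with the hand-crafted baseline it replaces. Here is a naive sum-of-squared-differences (SSD) block matcher on a synthetic pair (a toy stand-in for SGM's cost computation, with image sizes and the disparity chosen arbitrarily):

```python
import numpy as np

def ssd_disparity(left, right, max_disp, radius=2):
    """Naive block matching: for each pixel, pick the disparity whose
    right-image patch minimizes the sum of squared differences."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(radius, h - radius):
        for x in range(radius + max_disp, w - radius):
            patch = left[y - radius:y + radius + 1, x - radius:x + radius + 1]
            costs = [np.sum((patch - right[y - radius:y + radius + 1,
                                           x - d - radius:x - d + radius + 1]) ** 2)
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic pair: the right image is the left image shifted 3 px left,
# so the true disparity is 3 wherever it is defined.
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = np.roll(left, -3, axis=1)
disp = ssd_disparity(left, right, max_disp=5)
```

Real aerial imagery breaks this baseline through occlusions, radiometric differences and weak texture, which is exactly where the learned costs are claimed to help.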
Convolutional neural network architectures for predicting DNA–protein binding
Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.
2016-01-01
Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
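The basic operation these architectures share, scanning a one-hot-encoded sequence with a convolutional kernel, is easy to show in miniature. The motif TATA below is a hypothetical example, not one of the study's learned filters:

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (length x 4) one-hot matrix."""
    x = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        x[i, BASES.index(b)] = 1.0
    return x

def motif_scan(seq, motif):
    """Valid cross-correlation of the sequence with a motif kernel --
    the core operation of a single CNN convolutional filter on DNA input."""
    x, k = one_hot(seq), one_hot(motif)
    n = len(seq) - len(motif) + 1
    return np.array([np.sum(x[i:i + len(motif)] * k) for i in range(n)])

scores = motif_scan("GGTATACC", "TATA")
best = int(np.argmax(scores))   # position where the motif matches exactly
```

A network stacks many such filters (the "width" varied in the study) and learns their weights instead of fixing them to a known motif.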
Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.
Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus
2017-01-01
Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
Mexican Drug Trafficking Organizations: Matching Strategy to Threat
2013-03-01
raids, family ties, and heritage. Issues ranging from national security to the environment and migration are complex issues that demand involvement...the perfect network for the Sinaloa Cartel to move heroin and methamphetamine, and this is just one of the aforementioned 1000 cities where DTOs
Entanglement-assisted quantum convolutional coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
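Since the construction starts from two arbitrary classical binary convolutional codes, it is worth seeing what such a code looks like. Below is the standard rate-1/2, constraint-length-3 encoder with octal generators (7, 5); this is a classical ingredient, not the quantum construction itself:

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 binary convolutional encoder with constraint length k.
    Generators 7,5 (octal): each output bit is the parity (XOR) of the
    tapped positions of the sliding window of recent input bits."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)  # shift in the new bit
        out.append(bin(state & g1).count("1") % 2)   # parity of taps g1
        out.append(bin(state & g2).count("1") % 2)   # parity of taps g2
    return out

codeword = conv_encode([1, 0, 1, 1])
```

The rate and distance properties of two such codes carry over, per the abstract, to the resulting CSS entanglement-assisted quantum convolutional code.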
Jiménez, Noé; Romero-García, Vicent; Pagneux, Vincent; Groby, Jean-Philippe
2017-10-19
Perfect, broadband and asymmetric sound absorption is theoretically, numerically and experimentally reported using subwavelength-thickness panels in a transmission problem. The panels are composed of a periodic array of varying cross-section waveguides, each of them loaded by Helmholtz resonators (HRs) with graded dimensions. The low cut-off frequency of the absorption band is fixed by the resonance frequency of the deepest HR, which drastically reduces the transmission. The preceding HR is designed with a slightly higher resonance frequency and a geometry that allows impedance matching to the surrounding medium. Therefore, reflection vanishes and the structure is critically coupled. This results in perfect sound absorption at a single frequency. Moreover, this process is repeated by adding HRs to the waveguide, each of them with a higher resonance frequency than the preceding one. We report perfect absorption at 300 Hz for a structure whose thickness is 40 times smaller than the wavelength. Using this frequency cascade effect, we report quasi-perfect sound absorption over almost two frequency octaves, ranging from 300 to 1000 Hz, for a panel composed of 9 resonators with a total thickness of 11 cm, i.e., 10 times smaller than the wavelength at 300 Hz.
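The grading of resonance frequencies can be illustrated with the standard lumped-element formula for a Helmholtz resonator, f0 = (c/2π)·√(S/(V·L)). The neck and cavity dimensions below are hypothetical and chosen only so the frequencies land near the paper's 300-1000 Hz band; end corrections to the neck length are omitted:

```python
import math

def helmholtz_f0(neck_area, neck_length, cavity_volume, c=343.0):
    """Lumped-element resonance frequency of a Helmholtz resonator:
    f0 = (c / (2*pi)) * sqrt(S / (V * L)).  SI units throughout."""
    return c / (2 * math.pi) * math.sqrt(
        neck_area / (cavity_volume * neck_length))

# Hypothetical graded resonators: deeper (larger) cavities give lower f0,
# so the deepest HR fixes the low cut-off of the absorption band.
volumes = [8e-5, 4e-5, 2e-5, 1e-5]   # m^3, deepest first
freqs = [helmholtz_f0(5e-5, 0.02, V) for V in volumes]
```

Halving the cavity volume raises f0 by √2, which is one simple way to build the "frequency cascade" of successively higher resonances.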
Putzulu, Rossana; Piccirillo, Nicola; Orlando, Nicoletta; Massini, Giuseppina; Maresca, Maddalena; Scavone, Fernando; Ricerca, Bianca Maria; Zini, Gina
2017-04-01
Chronic red blood cell transfusions remain an essential part of supportive treatment in patients with thalassaemia and sickle cell disease (SCD). Red blood cell (RBC) transfusions expose patients to the risk of developing antibodies: RBC alloimmunization occurs when the immune system meets foreign antigens. We created a register of extensively genotyped donors to achieve better matched transfusions and thus reduce transfusion alloimmunization. Extended RBC antigen typing was determined and confirmed by molecular biology techniques using the Human Erythrocyte Antigen (HEA) BeadChip (BioArray Solutions Ltd., Warren, NJ) in periodic blood donors and in patients with thalassaemia and SCD. Over 3 years, we extensively typed 1220 periodic blood donors, 898 male and 322 female. We also studied 10 haematologic patients affected by thalassaemia and sickle cell disease referred to our institution as candidates for periodic transfusion. Our patients (8 females and 2 males, with a median age of 48 years, range 24-76 years), extensively typed using molecular techniques and screened for RBC alloantibodies, were transfused with a median of 33.5 RBC units. After three years of molecular typing, the "perfect match" transfusion strategy avoided the development of new alloantibodies in all studied patients.
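The selection logic behind such a register can be sketched as a set comparison: a unit is a candidate when its antigen profile exposes the patient to nothing foreign. This is a deliberate simplification of real crossmatching practice, and the donor profiles below are invented:

```python
# Hypothetical extended (HEA-style) antigen profiles: the set of antigens
# each donor's red cells express, restricted to a few systems for brevity.
donors = {
    "D001": {"C", "e", "K", "Fya", "Jkb"},
    "D002": {"c", "e", "Jka"},
    "D003": {"c", "E", "K", "Jka"},
}
patient_antigens = {"c", "e", "Jka"}

def compatible_units(donors, patient_antigens):
    """A unit is an extended match when every antigen it carries is one
    the patient also expresses (so none can trigger alloimmunization)."""
    return sorted(d for d, ag in donors.items() if ag <= patient_antigens)

units = compatible_units(donors, patient_antigens)
```

With 1220 extensively typed donors, the registry makes it likely that at least one such subset-compatible unit exists for most recipients.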
[The choice of color in fixed prosthetics: what steps should be followed for a reliable outcome?].
Vanheusden, Alain; Mainjot, Amélie
2004-01-01
The creation of a perfectly matched esthetic fixed restoration is undeniably one of the most difficult challenges in modern dentistry. The final outcome depends on several essential steps: the use of an appropriate light source; the accurate analysis and correct evaluation of the patient's tooth parameters (morphology, colour, surface texture, etc.); the clear and precise transmission of these data to the laboratory; and their sound interpretation by a dental technician who has perfectly mastered esthetic prosthetic techniques. The purpose of this paper is to give the practitioner a reproducible clinical method for achieving a reliable dental colorimetric analysis.
Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction
NASA Astrophysics Data System (ADS)
Su, X.
2017-12-01
A satellite cloud image contains much weather information, such as precipitation information. Short-term cloud movement forecasting is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, which means that nonstationary processes during cloud movement, such as inversion and deformation, are essentially not considered. It is still a hard task to predict cloud movement timely and correctly. As deep learning models perform well in learning spatiotemporal features, to meet this challenge we can regard cloud image prediction as a spatiotemporal sequence forecasting problem and introduce a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) that has convolutional structures to deal with spatiotemporal features, and build an end-to-end model to solve this forecast problem. In this model, both the input and output are spatiotemporal sequences. Compared to the Convolutional LSTM (ConvLSTM) model, this model has a lower number of parameters. We apply this model to GOES satellite data and the model performs well.
Soil Respiration and Student Inquiry: A Perfect Match
ERIC Educational Resources Information Center
Hoyt, Catherine Marie; Wallenstein, Matthew David
2011-01-01
This activity explores the cycling of carbon between the atmosphere (primarily as CO[subscript 2]) and biomass in plants, animals, and microscopic organisms. Students design soil respiration experiments using a protocol that resembles current practice in soil ecology. Three methods for measuring soil respiration are presented. Student-derived…
Artful Teaching and Science Investigations: A Perfect Match
ERIC Educational Resources Information Center
McGee, Christy
2018-01-01
Tomlinson's explanation of Artful Teaching and her 2017 expansion of this concept The Five Key Elements of Differentiation provide the theoretical framework of this examination of the need for science investigations in elementary schools. The Artful Teaching framework uses an equilateral triangle with vertices labeled The Teacher, The Student, and…
Children's Early Productivity with Verbal Morphology
ERIC Educational Resources Information Center
Wagner, Laura; Swensen, Lauren D.; Naigles, Letitia R.
2009-01-01
Three studies using the intermodal preferential looking paradigm examined onset of productive comprehension of tense/aspect morphology in English. When can toddlers understand these forms with novel verbs and novel events? The first study used familiar verbs and showed that 26-36-month olds correctly matched a past/perfective form ("-ed" or…
A Perfectly Matched Layer for Peridynamics in Two Dimensions
2013-04-01
A novel solid solution LiGa(S1-xSex)2 for generating coherent ultrafast mid-IR sources
NASA Astrophysics Data System (ADS)
Jer Huang, Jin; Zhang, Xin Lu; Feng, Qian; Dai, Jun Feng; Andreev, Yury M.; Lanskii, Grigory V.; Grechin, Sergei G.
2018-06-01
With renewed refractive indices, the potential of the solid solution LiGa(S1-xSex)2 in optical frequency conversion, especially in phase matching and group velocity matching, is theoretically investigated, together with the limitation on the composition ratio. It is found that the solution has excellent features for generating coherent ultrafast mid-IR sources covering 8-11 μm, which can be realized by type II down-conversion in the ba-plane with perfect group velocity matching, or type I in the bc-plane with partial group velocity matching. This will have broad applications in LiDAR monitoring and precision spectroscopy, as well as the life and environmental sciences.
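Group velocity matching means the interacting pulses share the same group index, n_g = n − λ·dn/dλ. A minimal numerical sketch of checking that condition, using a toy single-term Sellmeier-like dispersion with invented coefficients (not the actual LiGa(S,Se)2 data):

```python
import numpy as np

def group_index(n_func, lam, dlam=1e-4):
    """Group index n_g = n - lam * dn/dlam, via a central difference.
    Two waves are group-velocity matched when their group indices agree."""
    dn = (n_func(lam + dlam) - n_func(lam - dlam)) / (2 * dlam)
    return n_func(lam) - lam * dn

def n_toy(lam, A=4.0, B=1.5, C=0.09):
    """Toy dispersion n^2 = A + B*lam^2/(lam^2 - C), lam in micrometres.
    Illustrative coefficients only."""
    return np.sqrt(A + B * lam ** 2 / (lam ** 2 - C))

ng_pump = group_index(n_toy, 1.06)    # near-IR pump wavelength
ng_idler = group_index(n_toy, 9.0)    # mid-IR idler wavelength
mismatch = ng_pump - ng_idler         # zero would mean perfect GVM
```

In a birefringent crystal like this one, the composition ratio x and the propagation plane give extra degrees of freedom to drive such a mismatch to zero, which is what the paper exploits.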
TDAAPS 2: Acoustic Wave Propagation in Attenuative Moving Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph A.
This report outlines recent enhancements to the TDAAPS algorithm first described by Symons et al., 2005. One of the primary additions to the code is the ability to specify an attenuative media using standard linear fluid mechanisms to match reasonably general frequency versus loss curves, including common frequency versus loss curves for the atmosphere and seawater. Other improvements that will be described are the addition of improved numerical boundary conditions via various forms of Perfectly Matched Layers, enhanced accuracy near high contrast media interfaces, and improved physics options.
A matching approach to communicate through the plasma sheath surrounding a hypersonic vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Xiaotian; Jiang, Binhao, E-mail: jiangbh@hit.edu.cn
2015-06-21
In order to overcome the communication blackout problem suffered by hypersonic vehicles, a matching approach has been proposed for the first time in this paper. It utilizes a double-positive (DPS) material layer surrounding a hypersonic vehicle antenna to match with the plasma sheath enclosing the vehicle. Analytical analysis and numerical results indicate that a resonance between the matched layer and the plasma sheath will be formed that mitigates the blackout problem in some conditions. The calculated results show a perfect radiated performance of the antenna when the match is exactly built between these two layers. The effects of the parameters of the plasma sheath have been researched by numerical methods. Based on these results, the proposed approach is easier to realize and more flexible to the varying radiation conditions in hypersonic flight compared with other methods.
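The physical origin of the blackout can be seen from the cold-plasma (Drude, collisionless) permittivity, εp(ω) = 1 − ωp²/ω²: below the plasma frequency the permittivity is negative and the sheath is opaque. The frequencies below are hypothetical, and the final line is a simple matching heuristic assumed for illustration, not the paper's full layer design:

```python
import numpy as np

# Cold-plasma relative permittivity: eps_p(w) = 1 - (wp / w)^2.
wp = 2 * np.pi * 10e9      # plasma frequency 10 GHz (hypothetical sheath)
w = 2 * np.pi * 3e9        # 3 GHz communication signal, below cutoff

eps_plasma = 1 - (wp / w) ** 2     # negative: the signal is evanescent
# Heuristic for the double-positive (DPS) matching layer: a positive
# permittivity of equal magnitude, so the layer/sheath pair can resonate.
eps_dps = abs(eps_plasma)
```

The point of the DPS layer is that the evanescent decay inside the negative-ε sheath can be compensated by growth in a suitably paired positive-ε layer, restoring transmission.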
Tweaked residual convolutional network for face alignment
NASA Astrophysics Data System (ADS)
Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu
2017-08-01
We propose a novel Tweaked Residual Convolutional Network approach for face alignment, with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module predicts the landmarks quickly, but accurately enough as a preliminary estimate, by taking a low-resolution version of the detected face holistically as input. The following Residual Convolutional Network (RCN) module progressively refines each landmark by taking as input the local patch extracted around the predicted landmark; in particular, this allows the Convolutional Neural Network (CNN) to extract local shape-indexed features to fine-tune the landmark position. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.
Deep Learning for Low-Textured Image Matching
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Fedorenko, V. V.; Fomin, N. A.
2018-05-01
Low-textured objects pose challenges for automatic 3D model reconstruction. Such objects are common in archeological applications of photogrammetry. Most common feature point descriptors fail to match local patches in featureless regions of an object. Hence, automatic documentation of the archeological process using Structure from Motion (SfM) methods is challenging. Nevertheless, such documentation is possible with the aid of a human operator. Deep learning-based descriptors have recently outperformed most common feature point descriptors. This paper is focused on the development of a new Wide Image Zone Adaptive Robust feature Descriptor (WIZARD) based on deep learning. We use a convolutional auto-encoder to compress the discriminative features of a local patch into a descriptor code. We build a codebook to perform point matching on multiple images. The matching is performed using nearest neighbor search and a modified voting algorithm. We present a new "Multi-view Amphora" (Amphora) dataset for the evaluation of point matching algorithms. The dataset includes images of an Ancient Greek vase found on the Taman Peninsula in Southern Russia. The dataset provides color images, a ground truth 3D model, and a ground truth optical flow. We evaluated the WIZARD descriptor on the "Amphora" dataset to show that it outperforms the SIFT and SURF descriptors on complex patch pairs.
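The nearest-neighbor matching step common to all such descriptors (SIFT, SURF, or learned codes) can be sketched as follows. Note the ratio test used here is Lowe's heuristic, added for robustness in this sketch; the codebook and modified voting stages of WIZARD are not reproduced:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with a ratio test: accept a match only
    when the best distance is clearly smaller than the second best."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

# Toy descriptor codes: desc_b repeats desc_a with small noise (the true
# correspondences) plus one unrelated distractor row.
rng = np.random.default_rng(1)
desc_a = rng.random((5, 16))
desc_b = np.vstack([desc_a + 0.01 * rng.standard_normal((5, 16)),
                    rng.random((1, 16))])
matches = match_descriptors(desc_a, desc_b)
```

On low-textured surfaces the failure mode is that many patches produce near-identical codes, so the best and second-best distances converge and the ratio test (rightly) rejects the match.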
NASA Astrophysics Data System (ADS)
Sapia, Mark Angelo
2000-11-01
Three-dimensional microscope images typically suffer from reduced resolution due to the effects of convolution, optical aberrations and out-of-focus blurring. Two- dimensional ultrasound images are also degraded by convolutional bluffing and various sources of noise. Speckle noise is a major problem in ultrasound images. In microscopy and ultrasound, various methods of digital filtering have been used to improve image quality. Several methods of deconvolution filtering have been used to improve resolution by reversing the convolutional effects, many of which are based on regularization techniques and non-linear constraints. The technique discussed here is a unique linear filter for deconvolving 3D fluorescence microscopy or 2D ultrasound images. The process is to solve for the filter completely in the spatial-domain using an adaptive algorithm to converge to an optimum solution for de-blurring and resolution improvement. There are two key advantages of using an adaptive solution: (1)it efficiently solves for the filter coefficients by taking into account all sources of noise and degraded resolution at the same time, and (2)achieves near-perfect convergence to the ideal linear deconvolution filter. This linear adaptive technique has other advantages such as avoiding artifacts of frequency-domain transformations and concurrent adaptation to suppress noise. Ultimately, this approach results in better signal-to-noise characteristics with virtually no edge-ringing. Many researchers have not adopted linear techniques because of poor convergence, noise instability and negative valued data in the results. The methods presented here overcome many of these well-documented disadvantages and provide results that clearly out-perform other linear methods and may also out-perform regularization and constrained algorithms. In particular, the adaptive solution is most responsible for overcoming the poor performance associated with linear techniques. 
This linear adaptive approach to deconvolution is demonstrated with results of restoring blurred phantoms for both microscopy and ultrasound and restoring 3D microscope images of biological cells and 2D ultrasound images of human subjects (courtesy of General Electric and Diasonics, Inc.).
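The adaptive, fully spatial-domain solution described above can be conveyed with a one-dimensional LMS sketch. The blur kernel, tap count and step size below are illustrative assumptions, not the dissertation's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-dimensional analogue of adaptive spatial-domain deconvolution:
# learn a filter w with LMS so that w applied to the blurred signal y
# recovers the original x.
h = np.array([0.2, 0.6, 0.2])            # assumed (invertible) blur kernel
x = rng.standard_normal(5000)            # training signal
y = np.convolve(x, h, mode="same")       # blurred observation

taps, mu = 21, 0.01                      # filter length and LMS step size
half = taps // 2
w = np.zeros(taps)
for n in range(half, len(y) - half):
    window = y[n - half:n + half + 1]    # current input slice
    e = x[n] - w @ window                # deconvolution error
    w += mu * e * window                 # LMS coefficient update

x_hat = np.convolve(y, w[::-1], mode="same")   # apply the learned filter
sl = slice(half, -half)
mse_blurred = float(np.mean((y[sl] - x[sl]) ** 2))
mse_deconv = float(np.mean((x_hat[sl] - x[sl]) ** 2))
```

After convergence, the filtered output tracks the unblurred signal far more closely than the raw blurred observation does, which is the essence of the claimed near-ideal convergence.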
Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A
2017-03-01
Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous diseases identified from those images. Traditional image classification methods, which combine hand-crafted image feature descriptors with various classifiers, are not able to effectively improve the accuracy rate and meet the high requirements of biomedical image classification. The same also holds true for artificial neural network models trained directly with limited biomedical images as training data, or used directly as a black box to extract deep features based on another distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture based on the raw pixels of the original biomedical images using supervised training. With our model, there is no need to manually design the feature space, seek an effective feature-vector classifier, or segment specific detection objects and image patches, which are the main technical difficulties in adopting traditional image classification methods. Moreover, we need not be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long training times to obtain a perfect deep model, which are the main obstacles to training deep neural networks for biomedical image classification observed in recent works. With the use of a simple data augmentation method and a fast convergence speed, our algorithm achieves the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches.
We propose a robust automated end-to-end classifier for biomedical images based on a domain transferred deep convolutional neural network model that shows a highly reliable and accurate performance which has been confirmed on several public biomedical image datasets. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
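The "simple data augmentation method" is not specified in the abstract; one common minimal choice for biomedical images is flips and 90° rotations, sketched here as an assumption:

```python
import numpy as np

def augment(image):
    """Hypothetical simple augmentation: horizontal/vertical flips plus
    90-degree rotations -- one plausible reading of 'simple data
    augmentation', not the paper's documented method."""
    out = [image]
    out.append(np.fliplr(image))     # mirror left-right
    out.append(np.flipud(image))     # mirror top-bottom
    for k in (1, 2, 3):
        out.append(np.rot90(image, k))
    return out

img = np.arange(9).reshape(3, 3)
batch = augment(img)                 # six label-preserving views of one image
```

Each view keeps the class label, so the effective training set grows severalfold at negligible cost.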
NASA Astrophysics Data System (ADS)
Acton, Scott T.; Gilliam, Andrew D.; Li, Bing; Rossi, Adam
2008-02-01
Improvised explosive devices (IEDs) are common and lethal instruments of terrorism, and linking a terrorist entity to a specific device remains a difficult task. In the effort to identify persons associated with a given IED, we have implemented a specialized content based image retrieval system to search and classify IED imagery. The system makes two contributions to the art. First, we introduce a shape-based matching technique exploiting shape, color, and texture (wavelet) information, based on novel vector field convolution active contours and a novel active contour initialization method which treats coarse segmentation as an inverse problem. Second, we introduce a unique graph theoretic approach to match annotated printed circuit board images for which no schematic or connectivity information is available. The shape-based image retrieval method, in conjunction with the graph theoretic tool, provides an efficacious system for matching IED images. For circuit imagery, the basic retrieval mechanism has a precision of 82.1% and the graph based method has a precision of 98.1%. As of the fall of 2007, the working system has processed over 400,000 case images.
ERIC Educational Resources Information Center
Barker, Randolph T.; Gower, Kim
2009-01-01
Teaching business communication while performing professional business consulting is the perfect learning match. The bizarre but true stories from the consulting world provide excellent analogies for classroom learning, and feedback from students about the consulting experiences reaffirms the power of using stories for teaching. When discussing…
School Social Work Consultation Models and Response to Intervention: A Perfect Match
ERIC Educational Resources Information Center
Sabatino, Christine Anlauf
2009-01-01
The 2004 amendments to the Individuals with Disabilities Education Act introduced the concept of Response to Intervention (RTI). In part, this is an educational prevention approach to maximize student academic achievement and minimize behaviors that interfere with school success. It consists of assessment and intervention practices on multiple…
Culture through Comparison: Creating Audio-Visual Listening Materials for a CLIL Course
ERIC Educational Resources Information Center
Zhyrun, Iryna
2016-01-01
Authentic listening has become a part of CLIL materials, but it can be difficult to find listening materials that perfectly match the language level, length requirements, content, and cultural context of a course. The difficulty of finding appropriate materials online, financial limitations posed by copyright fees, and necessity to produce…
Multiscale Modeling of Non-crystalline Ceramics (Glass)
2013-03-01
of infinite regions using a perfectly matched layer, SEM XII Congress & Exposition on Experimental and Applied Mechanics, May 2012, Costa Mesa, CA.
Ultrathin Limit of Exchange Bias Coupling at Oxide Multiferroic/Ferromagnetic Interfaces
2013-07-12
perfect, lattice-matched heterostructures of complex perovskite oxides using state-of-the-art thin film growth techniques has generated new physical…investigated for several BFO/LSMO heterostructures by X-ray absorption spectroscopy (XAS) measurements at 17 K of the Fe L2,3 edge at the Advanced Light
Text-Based Synchronous E-Learning and Dyslexia: Not Necessarily the Perfect Match!
ERIC Educational Resources Information Center
Woodfine, B. P.; Nunes, M. Baptista; Wright, D. J.
2008-01-01
The introduction, in the United Kingdom, of the Special Education Needs and Disabilities Act (SENDA) published and approved in 2001, has removed the exemptions given to educational institutions by the Disabilities Discrimination Act (DDA) of 1995. This applies to learning web sites and materials that must now undergo "reasonable…
NASA Astrophysics Data System (ADS)
Rahimi Dalkhani, Amin; Javaherian, Abdolrahim; Mahdavi Basir, Hadi
2018-04-01
Wave propagation modeling, a vital tool in seismology, can be carried out with several different numerical methods, among them the finite-difference, finite-element, and spectral-element methods (FDM, FEM, and SEM). Some advanced applications in seismic exploration benefit from frequency-domain modeling. Considering the flexibility in handling complex geological models and the free-surface boundary condition, we studied the frequency-domain acoustic wave equation using FEM and SEM. The results demonstrated that frequency-domain FEM and SEM achieve good accuracy and numerical efficiency with second-order interpolation polynomials. Furthermore, we developed the second-order Clayton and Engquist absorbing boundary condition (CE-ABC2) and compared it with the perfectly matched layer (PML) for frequency-domain FEM and SEM. Unlike the PML method, CE-ABC2 adds no computational cost to the modeling beyond assembling the boundary matrices. As a result, CE-ABC2 is more efficient than PML for frequency-domain acoustic wave propagation modeling, especially when the computational cost is high and a high-level absorbing performance is unnecessary.
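A one-dimensional time-domain toy model conveys why absorbing boundaries matter. Below, a first-order Clayton-Engquist/Mur-type condition is compared against rigid ends; the paper develops the second-order CE-ABC2 in the frequency domain, so this is only an illustrative analogue:

```python
import numpy as np

nx, nt = 200, 500
c, dx = 1.0, 1.0
dt = dx / c                      # CFL = 1: the 1-D leapfrog scheme is exact

def run(absorbing):
    xs = np.arange(nx)
    u = np.exp(-0.05 * (xs - nx // 2) ** 2)   # Gaussian pulse, initially at rest
    u_prev = u.copy()
    for _ in range(nt):
        u_next = np.zeros(nx)
        u_next[1:-1] = u[2:] + u[:-2] - u_prev[1:-1]   # interior update
        if absorbing:
            # first-order one-way condition u_t +/- c u_x = 0 at each edge;
            # at CFL = 1 it absorbs normally incident waves exactly
            u_next[0] = u[1]
            u_next[-1] = u[-2]
        # otherwise u_next[0] = u_next[-1] = 0: rigid, fully reflecting ends
        u_prev, u = u, u_next
    return float(np.sum(u ** 2))   # energy proxy left inside the domain

energy_rigid = run(False)
energy_absorbing = run(True)
```

With reflecting ends the pulse energy stays trapped in the domain; with the one-way boundary it leaves almost completely, which is the behaviour an ABC (or a PML) must approximate in higher dimensions.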
Complete wavelength mismatching effect in a Doppler broadened Y-type six-level EIT atomic medium
NASA Astrophysics Data System (ADS)
Bharti, Vineet; Wasan, Ajay
We present a theoretical study of the Doppler broadened Y-type six-level atomic system, using a density matrix approach, to investigate the effect of varying control field wavelengths and closely spaced hyperfine levels in the 5P state of 87Rb. The closely spaced hyperfine levels in our six-level system affect the optical properties of the Y-type system and cause asymmetry in the absorption profiles. Depending upon the choice of π-probe, σ+-control and σ--control field transitions, we consider three regimes: (i) the perfect wavelength matching regime (λp = λc1 = λc2), (ii) the partial wavelength mismatching regime (λp ≠ λc1 = λc2), and (iii) the complete wavelength mismatching regime (λp ≠ λc1 ≠ λc2), where λc1 and λc2 denote the σ+-control and σ--control wavelengths. The complete wavelength mismatching regime is further distinguished into two situations, i.e., λc1 < λc2 and λc1 > λc2. We have shown that in a room temperature atomic vapor, the asymmetric transparency window is broadened in the partial wavelength mismatching regime as compared to the perfect wavelength matching regime. This broad transparency window also splits at the line center in the complete wavelength mismatching regime.
Processing Elided Verb Phrases with Flawed Antecedents: the Recycling Hypothesis
Arregui, Ana; Clifton, Charles; Frazier, Lyn; Moulton, Keir
2006-01-01
Traditional syntactic accounts of verb phrase ellipsis (e.g. “Jason laughed. Sam did [ ] too.”) categorize as ungrammatical many sentences that language users find acceptable (they “undergenerate”); semantic accounts overgenerate. We propose that a processing theory, together with a syntactic account, does a better job of describing and explaining the data on verb phrase ellipsis. Five acceptability judgment experiments supported a “VP recycling hypothesis,” which claims that when a syntactically matching antecedent is not available, the listener/reader creates one using the materials at hand. Experiments 1 and 2 used verb phrase ellipsis sentences with antecedents ranging from perfect (a verb phrase in matrix verb phrase position) to impossible (a verb phrase containing only a deverbal word). Experiments 3 and 4 contrasted antecedents in verbal versus nominal gerund subjects. Experiment 5 explored the possibility that speakers are particularly likely to go beyond the grammar and produce elided constituents without perfect matching antecedents when the antecedent needed is less marked than the antecedent actually produced. This experiment contrasted active (unmarked) and passive antecedents to show that readers seem to honor such a tendency. PMID:17710192
A comparison study between MLP and convolutional neural network models for character recognition
NASA Astrophysics Data System (ADS)
Ben Driss, S.; Soua, M.; Kachouri, R.; Akil, M.
2017-05-01
Optical Character Recognition (OCR) systems have been designed to operate on text contained in scanned documents and images. They include text detection and character recognition, in which characters are described and then classified. In the classification step, characters are identified according to their features or template descriptions, and a given classifier is employed to identify them. In this context, we have proposed the unified character descriptor (UCD) to represent characters based on their features; matching was then employed to perform the classification. This recognition scheme achieves good OCR accuracy on homogeneous scanned documents; however, it cannot discriminate characters with high font variation and distortion. To improve recognition, classifiers based on neural networks can be used. The multilayer perceptron (MLP) ensures high recognition accuracy when robustly trained. Moreover, the convolutional neural network (CNN) is nowadays gaining popularity for its high performance. However, both CNN and MLP may suffer from the large amount of computation in the training phase. In this paper, we establish a comparison between MLP and CNN. We provide the MLP with the UCD descriptor and an appropriate network configuration. For the CNN, we employ the convolutional network designed for handwritten and machine-printed character recognition (LeNet-5) and adapt it to support 62 classes, covering both digits and letters. In addition, GPU parallelization is studied to speed up both the MLP and CNN classifiers. Based on our experiments, we demonstrate that the real-time CNN performs twice as well as the MLP when classifying characters.
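Both classifiers hinge on two-dimensional convolution, the operation the CNN's layers apply repeatedly; a direct "valid" implementation is short enough to sketch (illustrative code, not from the paper):

```python
import numpy as np

def conv2d(image, kernel):
    """Direct 'valid' 2-D convolution: slide the 180-degree-flipped kernel
    over the image and accumulate elementwise products."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]          # convolution flips the kernel
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
k = np.array([[0.0, 1.0], [1.0, 0.0]])
result = conv2d(img, k)                   # 3x3 'valid' output
```

The nested loops make clear why training cost is dominated by convolution, and why GPU parallelization of exactly this loop nest pays off.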
Convolution-based estimation of organ dose in tube current modulated CT
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan
2016-05-01
Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (h_Organ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)_organ,convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)_organ,convolution with the organ dose coefficients (h_Organ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using the Monte Carlo program with TCM profiles explicitly modeled.
The discrepancy between the estimated organ dose and the dose simulated using the TCM Monte Carlo program was quantified. We further compared the convolution-based organ dose estimation method with two other strategies that use different approaches to quantifying the irradiation field. The proposed convolution-based estimation method showed good agreement with the organ dose simulated using the TCM Monte Carlo simulation. The average percentage error (normalized by CTDIvol) was generally within 10% across all organs and modulation profiles, except for organs located in the pelvic and shoulder regions. This study developed an improved method that accurately quantifies the irradiation field under TCM scans. The results suggested that organ dose could be estimated in real time both prospectively (with the localizer information only) and retrospectively (with acquired CT data).
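The core estimate reduces to weighting the regional dose field by the organ's spatial distribution and scaling by the size-dependent coefficient. A one-dimensional sketch follows; all names and numbers (organ_mask, h_organ, the TCM profile) are illustrative assumptions, not values from the study:

```python
import numpy as np

z = np.arange(100)                        # slice positions along the patient axis
organ_mask = ((z >= 40) & (z < 55)).astype(float)
organ_mask /= organ_mask.sum()            # normalised organ distribution

# Assumed TCM output profile, smeared by a scatter kernel to give a dose field.
tube_current = 1.0 + 0.5 * np.sin(2 * np.pi * z / 50)
scatter = np.exp(-0.5 * (np.arange(-10, 11) / 4.0) ** 2)
scatter /= scatter.sum()
dose_profile = np.convolve(tube_current, scatter, mode="same")

# Regional CTDIvol seen by the organ: organ distribution weighted dose field.
ctdi_organ = float(np.sum(organ_mask * dose_profile))
h_organ = 1.2                             # assumed CTDIvol-normalised coefficient
organ_dose = h_organ * ctdi_organ
```

Because the organ weights sum to one, the regional value always lies between the minimum and maximum of the dose field, i.e. it is a physically sensible average.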
Segmentation of corneal endothelium images using a U-Net-based convolutional neural network.
Fabijańska, Anna
2018-04-18
Diagnostic information regarding the health status of the corneal endothelium may be obtained by analyzing the size and the shape of the endothelial cells in specular microscopy images. Prior to the analysis, the endothelial cells need to be extracted from the image. To date, this has been performed manually or semi-automatically. Several approaches to automatic segmentation of endothelial cells exist; however, none of them is perfect. Therefore this paper proposes to perform cell segmentation using a U-Net-based convolutional neural network. In particular, the network is trained to discriminate pixels located at the borders between cells. The edge probability map output by the network is then binarized and skeletonized in order to obtain one-pixel-wide edges. The proposed solution was tested on a dataset consisting of 30 corneal endothelial images presenting cells of different sizes, achieving an AUROC of 0.92. The resulting DICE is on average equal to 0.86, which is a good result given the thickness of the compared edges. The corresponding mean absolute percentage error of cell number is at the level of 4.5%, which confirms the high accuracy of the proposed approach. The resulting cell edges are well aligned to the ground truths and require a limited number of manual corrections. This also results in accurate values of the cell morphometric parameters, with errors ranging from 5.2% for endothelial cell density, through 6.2% for cell hexagonality, to 11.93% for the coefficient of variation of the cell size. Copyright © 2018 Elsevier B.V. All rights reserved.
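The reported DICE score on one-pixel-wide edges is the standard overlap measure between binary maps; a minimal sketch of the metric (our evaluation code, mirroring the reported quantity):

```python
import numpy as np

def dice(pred, truth):
    """DICE overlap between two binary edge maps:
    2|A intersect B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

truth = np.zeros((8, 8), dtype=bool)
truth[3, :] = True                  # a one-pixel-wide horizontal edge
pred = np.zeros_like(truth)
pred[3, :6] = True                  # prediction misses two edge pixels

score = dice(pred, truth)           # 2*6 / (6+8)
```

For thin structures like skeletonized edges, even a one-pixel misalignment removes pixels from the intersection, which is why a DICE of 0.86 on such maps is a strong result.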
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
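The fundamentals discussed in the report can be illustrated with the classic rate-1/2, constraint-length-3 encoder with octal generators (7, 5) -- a standard textbook example, not one taken from the report itself:

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder: each input bit is shifted into a
    k-bit register and two parity bits are emitted, one per generator."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)  # shift register update
        out.append(bin(state & g1).count("1") % 2)   # parity of taps in g1
        out.append(bin(state & g2).count("1") % 2)   # parity of taps in g2
    return out

encoded = conv_encode([1, 0, 1, 1])   # 4 input bits -> 8 coded bits
```

The doubled output length is the redundancy a Viterbi decoder later exploits for error correction.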
6d, Coulomb branch anomaly matching
NASA Astrophysics Data System (ADS)
Intriligator, Kenneth
2014-10-01
6d QFTs are constrained by the analog of 't Hooft anomaly matching: all anomalies for global symmetries and metric backgrounds are constants of RG flows, and for all vacua in moduli spaces. We discuss an anomaly matching mechanism for 6d theories on their Coulomb branch. It is a global symmetry analog of Green-Schwarz-West-Sagnotti anomaly cancellation, and requires the apparent anomaly mismatch to be a perfect square, . Then Δ I 8 is cancelled by making X 4 an electric/magnetic source for the tensor multiplet, so background gauge field instantons yield charged strings. This requires the coefficients in X 4 to be integrally quantized. We illustrate this for theories. We also consider the SCFTs from N small E8 instantons, verifying that the recent result for its anomaly polynomial fits with the anomaly matching mechanism.
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
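Direct space-varying convolution, the expensive baseline the matrix source coding method accelerates, can be sketched in one dimension; the `kernel_at` API and the growing-blur model are our illustrative assumptions:

```python
import numpy as np

def space_varying_conv(signal, kernel_at):
    """Direct space-varying convolution: the impulse response returned by
    kernel_at(n) may differ at every output position, so no FFT shortcut
    applies and the cost is O(N * taps)."""
    n_out = len(signal)
    out = np.zeros(n_out)
    for n in range(n_out):
        h = kernel_at(n)                 # slowly varying impulse response
        half = len(h) // 2
        for k, hk in enumerate(h):
            m = n + half - k             # convolution index
            if 0 <= m < n_out:
                out[n] += hk * signal[m]
    return out

def kernel_at(n):
    # Blur width grows slowly with position, as in stray light models.
    w = 1.0 + n / 100.0
    taps = np.exp(-0.5 * (np.arange(-3, 4) / w) ** 2)
    return taps / taps.sum()

y = space_varying_conv(np.ones(50), kernel_at)
```

Because each kernel is normalised, a constant input passes through unchanged away from the borders -- a quick sanity check that the slow spatial variation preserves local energy.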
Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting
NASA Astrophysics Data System (ADS)
Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Lu
2017-09-01
In computer vision systems, it is a challenging task to robustly reconstruct the complex 3D geometries of automobile castings. 3D scanning data are usually corrupted by noise and the scanning resolution is low; these effects normally lead to incomplete matching and drift. In order to solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. To relieve the interference of sensor noise and to be compatible with incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of automobile castings. The proposed neural network combines the geometric feature representation with a correlation metric function to robustly match local correspondences. We use the truncated distance field (TDF) around each key point to represent the 3D surface of the casting geometry, so that the model can be directly embedded into 3D space to learn the geometric feature representation. Finally, the training labels are automatically generated for deep learning based on an existing RGB-D reconstruction algorithm, which yields the same global key-matching descriptor. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well, retaining 81.6% matching accuracy at 95% closed loop. For sparse geometric castings where initial matching fails, the 3D matching object can be reconstructed robustly by training the key descriptors. Our method performs 3D reconstruction robustly for complex automobile castings.
Superpixel-based graph cuts for accurate stereo matching
NASA Astrophysics Data System (ADS)
Feng, Liting; Qin, Kaihuai
2017-06-01
Estimating the surface normal vector and disparity of a pixel simultaneously, also known as the three-dimensional label method, has been widely used in recent continuous stereo matching to achieve sub-pixel accuracy. However, due to the infinite label space, it is extremely hard to assign each pixel an appropriate label. In this paper, we present an accurate and efficient algorithm, integrating PatchMatch with graph cuts, to approach this critical computational problem. In addition, to obtain robust and precise matching costs, we use a convolutional neural network to learn a similarity measure on small image patches. Compared with other MRF-related methods, our method has several advantages: its sub-modular property ensures a sub-problem optimality that is easy to exploit in parallel; graph cuts can simultaneously update multiple pixels, avoiding the local minima caused by sequential optimizers like belief propagation; it uses segmentation results for better local expansion moves; and local propagation and randomization can easily generate the initial solution without external methods. Middlebury experiments show that our method achieves higher accuracy than other MRF-based algorithms.
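The paper learns its patch similarity with a CNN; normalised cross-correlation (NCC) is the classical hand-crafted stand-in, shown here to make the role of the matching cost concrete (illustrative code, not the authors'):

```python
import numpy as np

def ncc(p, q):
    """Normalised cross-correlation between two patches: invariant to
    affine photometric changes (gain and bias), in [-1, 1]."""
    p = p - p.mean()
    q = q - q.mean()
    denom = np.sqrt((p ** 2).sum() * (q ** 2).sum())
    return float((p * q).sum() / denom) if denom > 0 else 0.0

left = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
right_good = left * 2.0 + 1.0            # same structure, different gain/bias
right_bad = left[::-1, ::-1].copy()      # structurally mismatched patch

s_good = ncc(left, right_good)
s_bad = ncc(left, right_bad)
```

A learned CNN measure replaces this fixed formula with one trained to separate matching from non-matching patches, which is where the accuracy gain over hand-crafted costs comes from.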
The trellis complexity of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Lin, W.
1995-01-01
It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
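The complexity measure used above -- trellis edges per encoded bit -- is easy to compute for the conventional trellis; the formula below is the standard counting argument, not a result specific to this article's minimal trellises:

```python
def conventional_trellis_edges_per_bit(k, n, memory):
    """Edges per encoded bit in the conventional trellis of a rate k/n
    convolutional encoder with total memory `memory`: 2^memory states,
    each with 2^k outgoing edges, and n coded bits per trellis section."""
    states = 2 ** memory
    edges_per_section = states * (2 ** k)
    return edges_per_section / n

# Rate-1/2, memory-2 encoder (e.g. the (7,5) code): 4 states, 8 edges/section.
e = conventional_trellis_edges_per_bit(k=1, n=2, memory=2)
```

A trellis-minimal generator matrix, or a punctured representation, can push the edge count below this conventional figure, which is precisely the gap the article's theory quantifies.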
Inter-Sentential Anaphora and Coherence Relations in Discourse: A Perfect Match
ERIC Educational Resources Information Center
Cornish, Francis
2009-01-01
Hobbs [Hobbs, J.R., 1979. "Coherence and coreference." "Cognitive Science" 3, 67-90] claims that the interpretation of inter-sentential anaphors "falls out" as a "by-product" of using a particular coherence relation to integrate two discourse units. The article argues that this is only partly true. Taking the reader's perspective, I suggest that…
ERIC Educational Resources Information Center
Merwin, Rhonda M.; Wilson, Kelly G.
2005-01-01
Thirty-two subjects completed 2 stimulus equivalence tasks using a matching-to-sample paradigm. One task involved direct reinforcement of conditional discriminations designed to produce derived relations between self-referring stimuli (e.g., me, myself, I) and positive evaluation words (e.g., whole, desirable, perfect). The other task was designed…
High band gap 2-6 and 3-5 tunneling junctions for silicon multijunction solar cells
NASA Technical Reports Server (NTRS)
Daud, Taher (Inventor); Kachare, Akaram H. (Inventor)
1986-01-01
A multijunction silicon solar cell of high efficiency is provided by placing a tunnel junction between the solar cell junctions to connect them in series. The tunnel junction is comprised of p+ and n+ layers of high band gap 3-5 or 2-6 semiconductor materials that match the lattice structure of silicon, such as GaP (band gap 2.24 eV) or ZnS (band gap 3.6 eV), each of which has a perfect lattice match with silicon, avoiding the defects normally associated with lattice mismatch.
NASA Astrophysics Data System (ADS)
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
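The central idea above -- blur the calculated profile with the same detector response that blurred the measurement, so both sides of the comparison carry identical volume averaging -- can be sketched in one dimension. The logistic penumbra and Gaussian chamber response below are modelling assumptions, not data from the study:

```python
import numpy as np

x = np.arange(-30.0, 30.0, 0.1)                        # off-axis position (mm)
real = 1.0 / (1.0 + np.exp((np.abs(x) - 20.0) / 1.5))  # assumed "true" field edge

sigma = 3.0                          # assumed chamber response width (mm)
t = np.arange(-6.0, 6.0 + 0.1, 0.1)
resp = np.exp(-0.5 * (t / sigma) ** 2)
resp /= resp.sum()

measured = np.convolve(real, resp, mode="same")        # what the chamber reports

def penumbra_80_20(profile, x):
    """80%-20% width of the right-hand field edge."""
    right = slice(len(x) // 2, None)
    p = profile[right] / profile.max()
    xr = x[right]
    x80 = xr[np.argmin(np.abs(p - 0.8))]
    x20 = xr[np.argmin(np.abs(p - 0.2))]
    return x20 - x80

w_real = penumbra_80_20(real, x)
w_measured = penumbra_80_20(measured, x)   # broadened by volume averaging
```

Comparing a chamber-blurred measurement against a TPS profile convolved with the same response makes the penumbra broadening cancel, so the optimized beam model recovers the true penumbra rather than the blurred one.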
Improving energy efficiency in handheld biometric applications
NASA Astrophysics Data System (ADS)
Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.
2012-06-01
With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating point operations. If a given algorithm implemented integer convolution instead of floating point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared span 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable-size looped convolution, static-size looped convolution, and unrolled looped convolution. All testing was performed on the HTC Thunderbolt, with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time that convolution is responsible for in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.
Software designs of image processing tasks with incremental refinement of computation.
Anastasia, Davide; Andreopoulos, Yiannis
2010-08-01
Software realizations of computationally demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent non-incremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest-based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
Convolution kernels for multi-wavelength imaging
NASA Astrophysics Data System (ADS)
Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.
2016-12-01
Astrophysical images produced by different instruments and/or spectral bands often need to be processed together, either for fitting or for comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains of up to two orders of magnitude are obtained with respect to kernels computed assuming Gaussian or circularised PSFs. A software to compute these kernels is available at
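A minimal sketch of a Wiener-filter PSF-matching kernel of the kind described; the regularisation form and the parameter `mu` are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def psf_matching_kernel(psf_source, psf_target, mu=1e-4):
    """Compute a kernel K such that psf_source convolved with K
    approximates psf_target, using a Wiener filter in Fourier space
    with a tunable regularisation parameter mu (assumed form)."""
    S = np.fft.fft2(np.fft.ifftshift(psf_source))
    T = np.fft.fft2(np.fft.ifftshift(psf_target))
    K = T * np.conj(S) / (np.abs(S) ** 2 + mu)       # regularised deconvolution
    kernel = np.real(np.fft.fftshift(np.fft.ifft2(K)))
    return kernel / kernel.sum()                     # preserve total flux
```

Because the filter operates on the full 2-D Fourier transforms, anisotropic PSF structure is carried into the kernel automatically, which is the property the abstract highlights over Gaussian or circularised approximations.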
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PReLU activation function is studied in order to improve image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which are very suitable for processing images. Using a deep convolutional neural network is better than directly extracting visual features for image retrieval. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
Match graph generation for symbolic indirect correlation
NASA Astrophysics Data System (ADS)
Lopresti, Daniel; Nagy, George; Joshi, Ashutosh
2006-01-01
Symbolic indirect correlation (SIC) is a new approach for bringing lexical context into the recognition of unsegmented signals that represent words or phrases in printed or spoken form. One way of viewing the SIC problem is to find the correspondence, if one exists, between two bipartite graphs, one representing the matching of the two lexical strings and the other representing the matching of the two signal strings. While perfect matching cannot be expected with real-world signals and while some degree of mismatch is allowed for in the second stage of SIC, such errors, if they are too numerous, can present a serious impediment to a successful implementation of the concept. In this paper, we describe a framework for evaluating the effectiveness of SIC match graph generation and examine the relatively simple, controlled cases of synthetic images of text strings typeset both normally and in a highly condensed fashion. We quantify and categorize the errors that arise, as well as present a variety of techniques we have developed to visualize the intermediate results of the SIC process.
Deep multi-scale convolutional neural network for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Zhang, Feng-zhe; Yang, Xia
2018-04-01
In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and contributes a modest improvement in classification accuracy. In addition, recent techniques such as ReLU are also utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets, and obtain better classification accuracy compared with other methods.
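The multi-scale idea can be illustrated on a single spectrum: filter it with kernels of several widths and stack the responses as parallel feature channels. The box kernels and the sizes (3, 5, 7) here are illustrative assumptions; the paper's network learns its kernels:

```python
import numpy as np

def multiscale_features(spectrum, sizes=(3, 5, 7)):
    """Extract features at several receptive-field sizes by filtering the
    spectrum with averaging kernels of different widths and stacking
    the results channel-wise."""
    feats = []
    for s in sizes:
        kernel = np.ones(s) / s                      # simple box kernel
        feats.append(np.convolve(spectrum, kernel, mode='same'))
    return np.stack(feats)                           # (len(sizes), n_bands)
```

Wider kernels aggregate information over more spectral bands, which is the "larger receptive field" effect the abstract refers to.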
Impact of jammer side information on the performance of anti-jam systems
NASA Astrophysics Data System (ADS)
Lim, Samuel
1992-03-01
The Chernoff bound parameter, D, provides a performance measure for all coded communication systems. D can be used to determine upper-bounds on bit error probabilities (BEPs) of Viterbi decoded convolutional codes. The impact on BEP bounds of channel measurements that provide additional side information can also be evaluated with D. This memo documents the results of a Chernoff bound parameter evaluation in optimum partial-band noise jamming (OPBNJ) for both BPSK and DPSK modulation schemes. Hard and soft quantized receivers, with and without jammer side information (JSI), were examined. The results of this analysis indicate that JSI does improve decoding performance. However, a knowledge of jammer presence alone achieves a performance level comparable to soft decision decoding with perfect JSI. Furthermore, performance degradation due to the lack of JSI can be compensated for by increasing the number of levels of quantization. Therefore, an anti-jam system without JSI can be made to perform almost as well as a system with JSI.
Solar granulation and statistical crystallography: A modeling approach using size-shape relations
NASA Technical Reports Server (NTRS)
Noever, D. A.
1994-01-01
The irregular polygonal pattern of solar granulation is analyzed for size-shape relations using statistical crystallography. In contrast to previous work which has assumed perfectly hexagonal patterns for granulation, more realistic accounting of cell (granule) shapes reveals a broader basis for quantitative analysis. Several features emerge as noteworthy: (1) a linear correlation between number of cell-sides and neighboring shapes (called Aboav-Weaire's law); (2) a linear correlation between both average cell area and perimeter and the number of cell-sides (called Lewis's law and a perimeter law, respectively) and (3) a linear correlation between cell area and squared perimeter (called convolution index). This statistical picture of granulation is consistent with a finding of no correlation in cell shapes beyond nearest neighbors. A comparative calculation between existing model predictions taken from luminosity data and the present analysis shows substantial agreement for cell-size distributions. A model for understanding grain lifetimes is proposed which links convective times to cell shape using crystallographic results.
NASA Astrophysics Data System (ADS)
Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.
2017-05-01
Semantic classification is a core remote sensing task as it provides the fundamental input for land-cover map generation. The very recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most of the recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Therefore, current architectures are perfectly tailored for urban areas over restricted extents but are not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed, efficiently discriminating the main classes of interest (namely buildings, roads, water, crops, vegetated areas) by exploiting existing VHR land-cover maps for training.
Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.
Harikumar, G; Bresler, Y
1999-01-01
We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
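Subspace methods for this multichannel problem rest on the cross-relation: since both observations share the same source, y1 convolved with h2 equals y2 convolved with h1 (both equal x * h1 * h2). A quick numerical check, in 1-D for brevity with arbitrary signal and filter lengths:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)          # unknown source signal
h1 = rng.standard_normal(5)          # two unknown FIR blur filters
h2 = rng.standard_normal(7)

y1 = np.convolve(x, h1)              # the two observed channels (noise-free)
y2 = np.convolve(x, h2)

# Cross-relation: y1*h2 and y2*h1 both equal x*h1*h2, so they must agree.
lhs = np.convolve(y1, h2)
rhs = np.convolve(y2, h1)
assert np.allclose(lhs, rhs)
```

In practice the filters are unknown, and this identity is turned around: the subspace algorithms estimate h1 and h2 as the pair that makes the cross-relation hold, which is exactly where the coprimeness condition on the channels enters.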
Multichannel blind iterative image restoration.
Sroubek, Filip; Flusser, Jan
2003-01-01
Blind image deconvolution is required in many applications of microscopy imaging, remote sensing, and astronomical imaging. Unfortunately, in a single-channel framework, serious conceptual and numerical problems are often encountered. Very recently, an eigenvector-based method (EVAM) was proposed for a multichannel framework which perfectly determines convolution masks in a noise-free environment if a channel disparity condition, called co-primeness, is satisfied. We propose a novel iterative algorithm based on recent anisotropic denoising techniques of total variation and a Mumford-Shah functional with the EVAM restoration condition included. A linearization scheme of half-quadratic regularization together with a cell-centered finite difference discretization scheme is used in the algorithm and provides a unified approach to the solution of total variation or Mumford-Shah. The algorithm performs well even on very noisy images and does not require an exact estimation of mask orders. We demonstrate the capabilities of the algorithm on synthetic data. Finally, the algorithm is applied to defocused images taken with a digital camera and to data from astronomical ground-based observations of the Sun.
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
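For concreteness, a minimal rate-1/2 (i.e., (2,1)) feed-forward convolutional encoder of the restricted class mentioned above; the generator taps (octal 7 and 5, constraint length 3) are a common textbook choice, not one of the codes analysed in the article:

```python
def conv_encode(bits, taps1=(1, 1, 1), taps2=(1, 0, 1)):
    """Rate-1/2 feed-forward convolutional encoder, constraint length 3.
    Each input bit produces two output bits, formed as mod-2 sums of the
    current bit and the shift-register contents."""
    state = [0, 0]                       # two-stage shift register
    out = []
    for b in bits:
        window = [b] + state             # current bit plus past two bits
        out.append(sum(w * t for w, t in zip(window, taps1)) % 2)
        out.append(sum(w * t for w, t in zip(window, taps2)) % 2)
        state = [b, state[0]]            # shift in the new bit
    return out
```

The algebraic quantities the article computes (degree, Forney indices, minimal generator matrices) describe exactly such encoders, but for the harder general (n,k) case where the generator is a k x n polynomial matrix rather than a single row.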
Comparison of artificial absorbing boundaries for acoustic wave equation modelling
NASA Astrophysics Data System (ADS)
Gao, Yingjie; Song, Hanjie; Zhang, Jinhai; Yao, Zhenxing
2017-12-01
Absorbing boundary conditions are necessary in numerical simulation for reducing the artificial reflections from model boundaries. In this paper, we overview the most important and typical absorbing boundary conditions developed throughout history. We first derive the wave equations of similar methods in unified forms; then, we compare their absorbing performance via theoretical analyses and numerical experiments. The Higdon boundary condition is shown to be the best one among the three main absorbing boundary conditions that are based on a one-way wave equation. The Clayton and Engquist boundary is a special case of the Higdon boundary but has difficulty in dealing with the corner points in implementation. The Reynolds boundary does not have this problem but its absorbing performance is the poorest among these three methods. The sponge boundary has difficulties in determining the optimal parameters in advance, and too many layers are required to achieve sufficiently good absorbing performance. The hybrid absorbing boundary condition (hybrid ABC) has a better absorbing performance than the Higdon boundary does; however, it is still less efficient for absorbing nearly grazing waves since it is based on the one-way wave equation. In contrast, the perfectly matched layer (PML) can perform much better using a few layers. For example, the 10-layer PML would perform well for absorbing most reflected waves except the nearly grazing incident waves. The 20-layer PML is suggested for most practical applications. For nearly grazing incident waves, convolutional PML shows superiority over the PML when the source is close to the boundary for large-scale models. The Higdon boundary and hybrid ABC are preferred when the computational cost is high and high-level absorbing performance is not required, such as migration and migration velocity analyses, since they are not as sensitive to the amplitude errors as the full waveform inversion.
NASA Astrophysics Data System (ADS)
Sun, Wenbo; Hu, Yongxiang; Weimer, Carl; Ayers, Kirk; Baize, Rosemary R.; Lee, Tsengdar
2017-02-01
Electromagnetic (EM) beams with orbital angular momentum (OAM) may have great potential applications in communication technology and in remote sensing of the Earth-atmosphere system and outer planets. Study of their interaction with optical lenses and dielectric or metallic objects, or scattering of them by particles in the Earth-atmosphere system, is a necessary step to explore the advantages of the OAM EM beams. In this study, the 3-dimensional (3D) scattered-field (SF) finite-difference time domain (FDTD) technique with the convolutional perfectly matched layer (CPML) absorbing boundary conditions (ABC) is applied to calculate the scattering of the purely azimuthal (the radial mode number is assumed to be zero) Laguerre-Gaussian (LG) beams with the OAM by dielectric particles. We found that for an OAM beam's interaction with dielectric particles, the forward-scattering peak in the conventional phase function (P11) disappears, and the scattering peak occurs at a scattering angle of 15° to 45°. The disappearance of the forward-scattering peak means that, in laser communications, most of the particle-scattered noise cannot enter the receiver, thus the received light is optimally the original OAM-encoded signal. This feature of the OAM beam also implies that in lidar remote sensing of the atmospheric particulates, most of the multiple-scattering energy will miss the lidar sensors, and this may result in accurate profiling of particle layers in the atmosphere or in the oceans by lidar, or even in the ground when a ground-penetrating radar (GPR) with the OAM is applied. These far-field characteristics of the scattered OAM light also imply that the optical theorem, which is derived from the plane-parallel wave scattering case and relates the forward scattering amplitude to the total cross section of the scatterer, is invalid for the scattering of OAM beams by dielectric particles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kong, Xiang-kun; Jiangsu Key Laboratory of Meteorological Observation and Information Processing, Nanjing University of Information Science and Technology, Nanjing 210044; Liu, Shao-Bin, E-mail: plrg@nuaa.edu.cn
2014-12-15
A novel, compact, and multichannel nonreciprocal absorber based on a wave-tunneling mechanism in epsilon-negative and matching metamaterials is theoretically proposed. Nonreciprocal absorption properties are acquired via the coupling of evanescent and propagating waves in an asymmetric configuration, constituted of nonlinear plasma alternated with matching metamaterial. The absorption channel number can be adjusted by changing the periodic number. Due to the positive feedback between the nonlinear permittivity of the plasma and the inner electric field, bistable absorption and reflection are achieved. Moreover, compared with some truncated photonic crystal or multilayered designs proposed before, our design is more compact and independent of incident angle or polarization. This kind of multilayer structure offers additional opportunities to design novel omnidirectional electromagnetic wave absorbers.
Identifying a National Death Index Match
Burchett, Bruce M.; Blazer, Dan G.
2009-01-01
Data from the National Death Index (NDI) are frequently used to determine survival status in epidemiologic or clinical studies. On the basis of selected information submitted by the investigator, NDI returns a file containing a set of candidate matches. Although NDI deems some matches as perfect, multiple candidate matches may be available for other cases. Working across data from the Duke University site of the Established Populations for Epidemiologic Studies of the Elderly (EPESE), NDI, and the Social Security Death Index (SSDI), the authors found that, for this Established Populations for Epidemiologic Studies of the Elderly cohort of 1,896 cases born before 1922 and alive as of January 1, 1999, a match on Social Security number plus additional personal information (specific combinations of last name, first name, month of birth, day of birth) resulted in agreement between NDI and Social Security Death Index dates of death 94.7% of the time, while comparable agreement was found for only 12.3% of candidate decedents who did not have the required combination of information. Thus, an easy to apply algorithm facilitates accurate identification of NDI matches. PMID:19567777
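The matching rule described above can be caricatured in a few lines; the field names and the acceptance threshold are illustrative assumptions for exposition, not NDI's actual algorithm:

```python
def ndi_match_quality(submitted, candidate):
    """Accept a candidate death record only when the SSN matches and
    enough additional personal items agree (threshold is an assumption)."""
    if submitted["ssn"] != candidate["ssn"]:
        return False
    extra = sum(submitted[f] == candidate[f]
                for f in ("last_name", "first_name",
                          "birth_month", "birth_day"))
    return extra >= 2
```

The study's finding is essentially that records passing a rule of this shape agreed with the Social Security Death Index on date of death 94.7% of the time, versus 12.3% for candidates lacking the required combination of fields.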
Kekre, Natasha; Antin, Joseph H
2014-07-17
Most patients who require allogeneic stem cell transplantation do not have a matched sibling donor, and many patients do not have a matched unrelated donor. In an effort to increase the applicability of transplantation, alternative donors such as mismatched adult unrelated donors, haploidentical related donors, and umbilical cord blood stem cell products are frequently used when a well matched donor is unavailable. We do not yet have the benefit of randomized trials comparing alternative donor stem cell sources to inform the choice of donor; however, the existing data allow some inferences to be made on the basis of existing observational and phase 2 studies. All 3 alternative donor sources can provide effective lymphohematopoietic reconstitution, but time to engraftment, graft failure rate, graft-versus-host disease, transplant-related mortality, and relapse risk vary by donor source. These factors all contribute to survival outcomes and an understanding of them should help guide clinicians when choosing among alternative donor sources when a matched related or matched unrelated donor is not available. © 2014 by The American Society of Hematology.
ERIC Educational Resources Information Center
Solomon, Norman A.; Scherer, Robert F.; Oliveti, Joseph J.; Mochel, Lucienne; Bryant, Michael
2017-01-01
Initial Association to Advance Collegiate Schools of Business International accreditation involves a process of pairing mentor and host schools to provide guidance and feedback on the congruence of the host school with the accreditation standards. The mentor serves as the primary resource for assisting the host school in identifying gaps with the…
ERIC Educational Resources Information Center
Brudermann, Cédric A.
2015-01-01
This paper explores the potential of digital learning environments to address current issues related to individualised instruction and the expansion of educational opportunities in English as a foreign language at university level. To do so, an applied linguistics-centred research endeavour was carried out. This reflection led to the…
Evidence of β-antimonene at the Sb/Bi2Se3 interface.
Flammini, R; Colonna, S; Hogan, C; Mahatha, S K; Papagno, M; Barla, A; Sheverdyaeva, P M; Moras, P; Aliev, Z S; Babanly, M B; Chulkov, E V; Carbone, C; Ronci, F
2018-01-10
We report a study of the interface between antimony and the prototypical topological insulator Bi2Se3. Scanning tunnelling microscopy measurements show the presence of ordered domains displaying a perfect lattice match with bismuth selenide. Density functional theory calculations of the most stable atomic configurations demonstrate that the ordered domains can be attributed to stacks of β-antimonene.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
... make the STP modifiers available to algorithms used by Floor brokers to route interest to the Exchange..., pegging e- Quotes, and g-Quotes entered into the matching engine by an algorithm on behalf of a Floor... algorithms removes impediments to and perfects the mechanism of a free and open market because there is a...
Yang, Jubiao; Yu, Feimi; Krane, Michael; Zhang, Lucy T
2018-01-01
In this work, a non-reflective boundary condition, the Perfectly Matched Layer (PML) technique, is adapted and implemented in a fluid-structure interaction numerical framework to demonstrate that proper boundary conditions are necessary not only to capture correct wave propagation in a flow field, but also the behavior and response of the interacting solid. While most research on non-reflective boundary conditions focuses on fluids, little has been done in a fluid-structure interaction setting. In this study, the effectiveness of the PML is closely examined in both pure-fluid and fluid-structure interaction settings upon incorporating the PML algorithm into a fully-coupled fluid-structure interaction framework, the Immersed Finite Element Method. The performance of the PML boundary condition is evaluated against reference solutions with a variety of benchmark test cases, including known and expected solutions of aeroacoustic wave propagation as well as vortex shedding and advection. The application of the PML in numerical simulations of fluid-structure interaction is then investigated to demonstrate the efficacy and necessity of such boundary treatment in order to capture the correct solid deformation and flow field without requiring a significantly larger computational domain.
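The effect of an absorbing layer can be illustrated with a 1-D toy model: a graded sponge layer (a simple stand-in for a true PML; all parameters here are illustrative) damps waves that would otherwise bounce between hard boundaries indefinitely:

```python
import numpy as np

def run_wave(damped=True, n=400, steps=2000, c=1.0):
    """Leapfrog scheme for u_tt = c^2 u_xx - sigma(x) u_t with graded
    damping sigma in the outermost 60 cells at each end (sponge layers).
    Returns the peak amplitude remaining in the domain at the end."""
    dx, dt = 1.0, 0.5                        # CFL number 0.5
    sigma = np.zeros(n)
    if damped:
        taper = np.linspace(0.0, 1.0, 60) ** 2   # polynomial grading
        sigma[-60:] = 0.3 * taper
        sigma[:60] = 0.3 * taper[::-1]
    x = np.arange(n)
    u = np.exp(-0.01 * (x - 200.0) ** 2)     # Gaussian pulse at the centre
    u_prev = u.copy()                        # zero initial velocity
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
        a = sigma * dt / 2                   # damping enters implicitly
        u_next = ((2 * u - (1 - a) * u_prev + (c * dt / dx) ** 2 * lap)
                  / (1 + a))
        u_next[0] = u_next[-1] = 0.0         # hard (reflecting) ends
        u_prev, u = u, u_next
    return np.max(np.abs(u))
```

With the sponges active, nearly all the pulse energy is absorbed after a couple of boundary encounters; without them the hard walls keep reflecting the pulse at full amplitude, which is exactly the artifact non-reflective treatments are designed to suppress.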
Application of the perfectly matched layer in 2.5D marine controlled-source electromagnetic modeling
NASA Astrophysics Data System (ADS)
Li, Gang; Han, Bo
2017-09-01
In the traditional framework of EM modeling algorithms, the Dirichlet boundary is usually used, which assumes the field values are zero at the boundaries. This crude condition requires that the boundaries be sufficiently far away from the area of interest. Although cell sizes can grow toward the boundaries because the electromagnetic field propagates diffusively, a large modeling area may still be necessary to mitigate boundary artifacts. In this paper, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretched Cartesian coordinates is successfully applied to 2.5D frequency-domain marine controlled-source electromagnetic (CSEM) field modeling. By using this PML boundary, one can restrict the modeling area of interest to the target region. Only a few absorbing layers surrounding the computational area can effectively suppress the artificial boundary effect without losing numerical accuracy. A 2.5D marine CSEM modeling scheme with the CFS-PML is developed using a staggered finite-difference discretization. This modeling algorithm is highly accurate and offers savings in computational time and memory compared with the Dirichlet boundary. For 3D problems, these savings should be even more significant.
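The CFS-PML in stretched coordinates replaces spatial derivatives by (1/s_x) d/dx, where s_x(omega) = kappa + sigma/(alpha + i*omega) is a complex stretching factor graded across the layer. A sketch of a typical graded profile; the grading exponent and the maximum values are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def cfs_pml_stretch(n_layers=10, omega=2 * np.pi * 1.0,
                    kappa_max=5.0, sigma_max=10.0, alpha_max=0.5, m=2):
    """Complex stretching factor s(x) = kappa + sigma/(alpha + i*omega)
    across a CFS-PML, with the usual polynomial grading: kappa and sigma
    grow with depth into the layer, alpha decays toward the outer edge."""
    d = np.linspace(0.0, 1.0, n_layers)        # normalised depth into PML
    kappa = 1.0 + (kappa_max - 1.0) * d**m
    sigma = sigma_max * d**m
    alpha = alpha_max * (1.0 - d)
    return kappa + sigma / (alpha + 1j * omega)
```

At the inner interface (d = 0) the factor reduces to 1, so the layer matches the interior medium; the growing negative imaginary part with depth is what produces the progressive attenuation, and the frequency-shift term alpha is what improves absorption of nearly grazing, low-frequency components.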
NASA Astrophysics Data System (ADS)
Gao, Hongwei; Zhang, Jianfeng
2008-09-01
The perfectly matched layer (PML) absorbing boundary condition is incorporated into an irregular-grid elastic-wave modelling scheme, thus resulting in an irregular-grid PML method. We develop the irregular-grid PML method using the local coordinate system based PML splitting equations and integral formulation of the PML equations. The irregular-grid PML method is implemented under a discretization of triangular grid cells, which has the ability to absorb incident waves in arbitrary directions. This allows the PML absorbing layer to be imposed along arbitrary geometrical boundaries. As a result, the computational domain can be constructed with fewer nodes, for instance, to represent the 2-D half-space by a semi-circle rather than a rectangle. By using a smooth artificial boundary, the irregular-grid PML method can also avoid the special treatments to the corners, which lead to complex computer implementations in the conventional PML method. We implement the irregular-grid PML method in both 2-D elastic isotropic and anisotropic media. The numerical simulations of a VTI Lamb's problem, wave propagation in an isotropic elastic medium with curved surface and in a TTI medium demonstrate the good behaviour of the irregular-grid PML method.
Beyond Aztec Castles: Toric Cascades in the dP3 Quiver
NASA Astrophysics Data System (ADS)
Lai, Tri; Musiker, Gregg
2017-12-01
Given one of an infinite class of supersymmetric quiver gauge theories, string theorists can associate a corresponding toric variety (which is a Calabi-Yau 3-fold) as well as an associated combinatorial model known as a brane tiling. In combinatorial language, a brane tiling is a bipartite graph on a torus and its perfect matchings are of interest to both combinatorialists and physicists alike. A cluster algebra may also be associated to such quivers and in this paper we study the generators of this algebra, known as cluster variables, for the quiver associated to the cone over the del Pezzo surface dP3. In particular, mutation sequences involving mutations exclusively at vertices with two in-coming arrows and two out-going arrows are referred to as toric cascades in the string theory literature. Such toric cascades give rise to interesting discrete integrable systems on the level of cluster variable dynamics. We provide an explicit algebraic formula for all cluster variables that are reachable by toric cascades as well as a combinatorial interpretation involving perfect matchings of subgraphs of the dP3 brane tiling for these formulas in most cases.
Driven superconducting quantum circuits
NASA Astrophysics Data System (ADS)
Nakamura, Yasunobu
2014-03-01
Driven nonlinear quantum systems show rich phenomena in various fields of physics. Among them, superconducting quantum circuits have very attractive features such as well-controlled quantum states with design flexibility, strong nonlinearity of Josephson junctions, strong coupling to electromagnetic driving fields, little internal dissipation, and tailored coupling to the electromagnetic environment. We have investigated properties and functionalities of driven superconducting quantum circuits. A transmon qubit coupled to a transmission line shows nearly perfect spatial mode matching between the incident and scattered microwave field in the 1D mode. Dressed states under a driving field are studied there and also in a semi-infinite 1D mode terminated by a resonator containing a flux qubit. An effective Λ-type three-level system is realized under an appropriate driving condition. It allows "impedance-matched" perfect absorption of incident probe photons and down conversion into another frequency mode. Finally, the weak signal from the qubit is read out using a Josephson parametric amplifier/oscillator, which is another nonlinear circuit driven by a strong pump field. This work was partly supported by the Funding Program for World-Leading Innovative R&D on Science and Technology (FIRST), Project for Developing Innovation Systems of MEXT, MEXT KAKENHI "Quantum Cybernetics," and the NICT Commissioned Research.
Image quality of mixed convolution kernel in thoracic computed tomography.
Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar
2016-11-01
The mixed convolution kernel alters its properties regionally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman Test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels and abdomen (P < 0.004) but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.
Serang, Oliver
2015-08-01
Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk^2 to nk log(k), and has potential application to the all-pairs shortest paths problem.
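The p-norm idea behind such a numerical method can be sketched directly: the max of nonnegative terms is approximated by their p-norm, so max-convolution reduces to an ordinary FFT convolution of elementwise p-th powers followed by a 1/p root. This is a simplified sketch; the scaling step and the fixed choice of p are illustrative, and p trades approximation tightness against floating-point underflow:

```python
import numpy as np

def numerical_max_convolution(a, b, p=16):
    """Estimate c[m] = max_k a[k] * b[m-k] for nonnegative vectors via
    the p-norm trick: (sum_k (a[k]*b[m-k])^p)^(1/p) >= the true max and
    exceeds it by at most a factor k^(1/p). The inner sum is an ordinary
    convolution of a**p and b**p, computed here with FFTs in O(k log k)."""
    scale = max(a.max(), b.max())
    a_p = (a / scale) ** p                   # rescale first to limit underflow
    b_p = (b / scale) ** p
    n = len(a) + len(b) - 1
    conv = np.fft.irfft(np.fft.rfft(a_p, n) * np.fft.rfft(b_p, n), n)
    return scale ** 2 * np.maximum(conv, 0.0) ** (1.0 / p)
```

Larger p tightens the k^(1/p) overestimate but pushes small products below the floating-point noise floor of the FFT, which is why practical implementations treat the choice of p with care.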
The Chandra Source Catalog 2.0: Early Cross-matches
NASA Astrophysics Data System (ADS)
Rots, Arnold H.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
Cross-matching the Chandra Source Catalog (CSC) with other catalogs presents considerable challenges, since the Point Spread Function (PSF) of the Chandra X-ray Observatory varies significantly over the field of view. For the second release of the CSC (CSC2) we have been developing a cross-match tool that is based on the Bayesian algorithms by Budavari, Heinis, and Szalay (ApJ 679, 301 and 705, 739), making use of the error ellipses for the derived positions of the sources. However, calculating match probabilities only on the basis of error ellipses breaks down when the PSFs are significantly different. Not only can bona fide matches easily be missed, but the scene is also muddied by ambiguous multiple matches. These are issues that are not commonly addressed in cross-match tools. We have applied a satisfactory modification to the algorithm that, although not perfect, ameliorates the problems for the vast majority of such cases. We will present some early cross-matches of the CSC2 catalog with obvious candidate catalogs and report on the determination of the absolute astrometric error of the CSC2 based on such cross-matches. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
2001-09-01
In this dissertation, the bit error rates of serially concatenated convolutional codes (SCCC) are analyzed for both BPSK and DPSK modulation, building on rate-compatible punctured convolutional codes (RCPC codes) and their applications.
NASA Technical Reports Server (NTRS)
Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.
1976-01-01
The DSN telemetry system performance with convolutionally coded data using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network is described. Data rates from 80 bps to 115.2 kbps and both S- and X-band receivers are reported. The results of both one- and two-way radio losses are included.
High-order perturbations of a spherical collapsing star
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brizuela, David; Martin-Garcia, Jose M.; Sperhake, Ulrich
2010-11-15
A formalism to deal with high-order perturbations of a general spherical background was developed in earlier work [D. Brizuela, J. M. Martin-Garcia, and G. A. Mena Marugan, Phys. Rev. D 74, 044039 (2006); D. Brizuela, J. M. Martin-Garcia, and G. A. Mena Marugan, Phys. Rev. D 76, 024004 (2007)]. In this paper, we apply it to the particular case of a perfect fluid background. We have expressed the perturbations of the energy-momentum tensor at any order in terms of the perturbed fluid's pressure, density, and velocity. In general, these expressions are not linear and have sources depending on lower-order perturbations. For the second-order case we make the explicit decomposition of these sources in tensor spherical harmonics. Then, a general procedure is given to evolve the perturbative equations of motion of the perfect fluid for any value of the harmonic label. Finally, with the problem of a spherical collapsing star in mind, we discuss the high-order perturbative matching conditions across a timelike surface, in particular, the surface separating the perfect fluid interior from the exterior vacuum.
Enhanced online convolutional neural networks for object tracking
NASA Astrophysics Data System (ADS)
Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen
2018-04-01
In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and update of the convolution filters can directly affect the precision of object tracking. In this paper, a novel object tracker based on an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters by a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker performs better than the other trackers in terms of AUC and precision.
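The k-means++ seeding step mentioned for filter initialization can be sketched as follows. This is an illustrative reconstruction only: the patch extraction, sizes, and function names are assumptions, not the authors' code.

```python
import numpy as np

def kmeans_pp_init(patches, k, rng=np.random.default_rng(0)):
    # k-means++ seeding: pick the first center uniformly at random, then pick
    # each subsequent center with probability proportional to its squared
    # distance from the nearest center chosen so far. The selected patches
    # could then serve as initial convolution filters.
    centers = [patches[rng.integers(len(patches))]]
    for _ in range(k - 1):
        d2 = np.min([((patches - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(patches[rng.choice(len(patches), p=d2 / d2.sum())])
    return np.stack(centers)

patches = np.random.default_rng(1).random((100, 25))  # 100 flattened 5x5 patches
filters = kmeans_pp_init(patches, k=8)
print(filters.shape)  # (8, 25)
```

Seeding with spread-out patches gives filters that cover the appearance space better than uniform random initialization, which is presumably why the abstract credits it with improving tracking precision.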
Perfect Diode in Quantum Spin Chains
NASA Astrophysics Data System (ADS)
Balachandran, Vinitha; Benenti, Giuliano; Pereira, Emmanuel; Casati, Giulio; Poletti, Dario
2018-05-01
We study the rectification of the spin current in XXZ chains segmented in two parts, each with a different anisotropy parameter. Using exact diagonalization and a matrix product state algorithm, we find that a large rectification (of the order of 10⁴) is attainable even using a short chain of N = 8 spins, when one-half of the chain is gapless while the other has a large enough anisotropy. We present evidence of diffusive transport when the current is driven in one direction and of a transition to an insulating behavior of the system when driven in the opposite direction, leading to a perfect diode in the thermodynamic limit. The above results are explained in terms of matching of the spectrum of magnon excitations between the two halves of the chain.
Preliminary study of injection transients in the TPS storage ring
NASA Astrophysics Data System (ADS)
Chen, C. H.; Liu, Y. C.; Y Chen, J.; Chiu, M. S.; Tseng, F. H.; Fann, S.; Liang, C. C.; Huang, C. S.; Y Lee, T.; Y Chen, B.; Tsai, H. J.; Luo, G. H.; Kuo, C. C.
2017-07-01
An optimized injection efficiency relies on a perfect match between the pulsed magnetic fields in the storage ring and the transfer line extraction in the TPS. However, misalignment errors, hardware output errors, and leakage fields are unavoidable. We study the influence of injection transients on the stored TPS beam and discuss solutions to compensate for them. Related simulations and measurements will be presented.
Multicolor (UV-IR) Photodetectors Based on Lattice-Matched 6.1 A II/VI and III/V Semiconductors
2015-08-27
The work covers photodiodes with different cutoff wavelengths connected in series, with tunnel diodes between adjacent photodiodes. The LEDs optically bias the inactive junctions through a perfectly conductive n-CdTe/p-InSb tunnel junction. Subject terms: optical biasing; multi-junction photodetectors; triple-junction solar cell. Several results were demonstrated during this project, including initial demonstrations of optical addressing, tunnel junction studies, and multicolor device characterization.
[Problems with placement and using of automated external defibrillators in Czech Republic].
Olos, Tomás; Bursa, Filip; Gregor, Roman; Holes, David
2011-01-01
The use of automated external defibrillators improves the survival of adults who suffer cardiopulmonary arrest. Automated external defibrillators detect ventricular fibrillation with almost perfect sensitivity and specificity. The authors describe the use of an automated external defibrillator during cardiopulmonary resuscitation in a patient with sudden cardiac arrest during an ice-hockey match. The article also reports on the use of automated external defibrillators in children.
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Hunter, Craig A.
1999-01-01
An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature, and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring is convolution location, Mach number, boattail angle, and NPR dependent. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.
Experimental study of current loss and plasma formation in the Z machine post-hole convolute
NASA Astrophysics Data System (ADS)
Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; Jennings, C. A.; McBride, R. D.; Waisman, E. M.; Hutsel, B. T.; Stygar, W. A.; Rose, D. V.; Maron, Y.
2017-01-01
The Z pulsed-power generator at Sandia National Laboratories drives high energy density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H₂O, H₂, and hydrocarbons. Plasma densities increase from 1×10¹⁶ cm⁻³ (level of detectability) just before peak current to over 1×10¹⁷ cm⁻³ at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode-to-anode plasma velocity in the range of 35–50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.
2015-12-15
Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Networks. While the capabilities of Convolutional Neural Networks (CNNs), a deep learning approach, enable them to outperform conventional techniques on standard object detection and classification tasks, this work evaluates detection accuracy and speed on the fine-grained Caltech-UCSD bird dataset (Wah et al., 2011).
X-ray Moiré deflectometry using synthetic reference images
Stutman, Dan; Valdivia, Maria Pia; Finkenthal, Michael
2015-06-25
Moiré fringe deflectometry with grating interferometers is a technique that enables refraction-based x-ray imaging using a single exposure of an object. To obtain the refraction image, the method requires a reference fringe pattern (without the object). Our study shows that, in order to avoid artifacts, the reference pattern must be exactly matched in phase with the object fringe pattern. In experiments, however, it is difficult to produce a perfectly matched reference pattern due to unavoidable interferometer drifts. We present a simple method to obtain matched reference patterns using a phase-scan procedure to generate synthetic Moiré images. As a result, the method will enable deflectometric diagnostics of transient phenomena such as laser-produced plasmas and could improve the sensitivity and accuracy of medical phase-contrast imaging.
Witoonchart, Peerajak; Chongstitvatana, Prabhas
2017-08-01
In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is a normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.
Prioritized packet video transmission over time-varying wireless channel using proactive FEC
NASA Astrophysics Data System (ADS)
Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay
2000-12-01
Quality of video transmitted over time-varying wireless channels relies heavily on the coordinated effort to cope with both channel and source variations dynamically. Given the priority of each source packet and the estimated channel condition, an adaptive protection scheme based on joint source-channel criteria is investigated via proactive forward error correction (FEC). With proactive FEC in Reed Solomon (RS)/Rate-compatible punctured convolutional (RCPC) codes, we study a practical algorithm to match the relative priority of source packets and instantaneous channel conditions. The channel condition is estimated to capture the long-term fading effect in terms of the averaged SNR over a preset window. Proactive protection is performed for each packet based on the joint source-channel criteria with special attention to the accuracy, time-scale match, and feedback delay of channel status estimation. The overall gain of the proposed protection mechanism is demonstrated in terms of the end-to-end wireless video performance.
NASA Astrophysics Data System (ADS)
Al-Hallaq, H. A.; Reft, C. S.; Roeske, J. C.
2006-03-01
The dosimetric effects of bone and air heterogeneities in head and neck IMRT treatments were quantified. An anthropomorphic RANDO phantom was CT-scanned with 16 thermoluminescent dosimeter (TLD) chips placed in and around the target volume. A standard IMRT plan generated with CORVUS was used to irradiate the phantom five times. On average, measured dose was 5.1% higher than calculated dose. Measurements were higher by 7.1% near the heterogeneities and by 2.6% in tissue. The dose difference between measurement and calculation was outside the 95% measurement confidence interval for six TLDs. Using CORVUS' heterogeneity correction algorithm, the average difference between measured and calculated doses decreased by 1.8% near the heterogeneities and by 0.7% in tissue. Furthermore, dose differences lying outside the 95% confidence interval were eliminated for five of the six TLDs. TLD doses recalculated by Pinnacle3's convolution/superposition algorithm were consistently higher than CORVUS doses, a trend that matched our measured results. These results indicate that the dosimetric effects of air cavities are larger than those of bone heterogeneities, thereby leading to a higher delivered dose compared to CORVUS calculations. More sophisticated algorithms such as convolution/superposition or Monte Carlo should be used for accurate tailoring of IMRT dose in head and neck tumours.
NASA Astrophysics Data System (ADS)
Allman, Derek; Reiter, Austin; Bell, Muyinatu
2018-02-01
We previously proposed a method of removing reflection artifacts in photoacoustic images that uses deep learning. Our approach generally relies on using simulated photoacoustic channel data to train a convolutional neural network (CNN) that is capable of distinguishing sources from artifacts based on unique differences in their spatial impulse responses (manifested as depth-based differences in wavefront shapes). In this paper, we directly compare a CNN trained with our previous continuous transducer model to a CNN trained with an updated discrete acoustic receiver model that more closely matches an experimental ultrasound transducer. These two CNNs were trained with simulated data and tested on experimental data. The CNN trained using the continuous receiver model correctly classified 100% of sources and 70.3% of artifacts in the experimental data. In contrast, the CNN trained using the discrete receiver model correctly classified 100% of sources and 89.7% of artifacts in the experimental images. The 19.4% increase in artifact classification accuracy indicates that an acoustic receiver model that closely mimics the experimental transducer plays an important role in improving the classification of artifacts in experimental photoacoustic data. Results are promising for developing a method to display CNN-based images that remove artifacts in addition to only displaying network-identified sources as previously proposed.
Landcover Classification Using Deep Fully Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Wang, J.; Li, X.; Zhou, S.; Tang, J.
2017-12-01
Land cover classification has always been an essential application in remote sensing. Certain image features are needed for land cover classification, whether it is based on pixel- or object-based methods. Different from other machine learning methods, a deep learning model not only extracts useful information from multiple bands/attributes, but also learns spatial characteristics. In recent years, deep learning methods have developed rapidly and been widely applied in image recognition, semantic understanding, and other application domains. However, there are limited studies applying deep learning methods to land cover classification. In this research, we used fully convolutional networks (FCN) as the deep learning model to classify land covers. The National Land Cover Database (NLCD) within the state of Kansas was used as the training dataset, and Landsat images were classified using the trained FCN model. We also applied an image segmentation method to improve the original results from the FCN model. In addition, the pros and cons of deep learning versus several machine learning methods were compared and explored. Our research indicates: (1) FCN is an effective classification model with an overall accuracy of 75%; (2) image segmentation improves the classification results with a better match of spatial patterns; (3) FCN has an excellent learning ability that can attain higher accuracy and better spatial patterns compared with several machine learning methods.
Domínguez-Vicent, Alberto; Esteve-Taboada, Jose Juan; Recchioni, Alberto; Brautaset, Rune
2018-05-01
To assess the power profile and in vitro optical quality of scleral contact lenses with different powers as a function of the optical aperture, the mini- and semiscleral contact lenses (Procornea) were measured for five powers per design. The NIMO TR-1504 (Lambda-X) was used to assess the power profile and Zernike coefficients of each contact lens. Ten measurements per lens were taken at 3- and 6-mm apertures. Furthermore, the optical quality of each lens was described in terms of Zernike coefficients, the modulation transfer function, and the point spread function (PSF). A convolution of each lens PSF with an eye-chart image was also computed. The optical power fluctuated less than 0.5 diopters (D) along the optical zone of each lens. However, the optical power obtained for some lenses did not match the corresponding nominal one, the maximum difference being 0.5 D. In optical quality, small differences were obtained among all lenses within the same design. Although significant differences were obtained among lenses (P<0.05), these had a small impact on the image quality of each convolution. Insignificant power fluctuations were obtained along the optical zone measured for each scleral lens. Additionally, the optical quality of both lens designs was shown to be independent of the lens power within the same aperture.
ERIC Educational Resources Information Center
Umar, A.; Yusau, B.; Ghandi, B. M.
2007-01-01
In this note, we introduce and discuss convolutions of two series. The idea is simple and can be introduced in higher secondary school classes, and it has the potential of providing a good background for the well-known convolution of functions.
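The convolution of two finite series described in the note (the Cauchy product, where each output coefficient sums products of terms whose indices add to the same value) can be written in a few lines; the function name is illustrative:

```python
def convolve_series(a, b):
    # Cauchy product of two series: c[n] = sum over k of a[k] * b[n-k].
    n = len(a) + len(b) - 1
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# Convolving the coefficients of (1 + x)^2 with themselves yields (1 + x)^4:
print(convolve_series([1, 2, 1], [1, 2, 1]))  # [1, 4, 6, 4, 1]
```

Viewing the input lists as polynomial coefficients, convolution is exactly polynomial multiplication, which is one concrete way to motivate it in a classroom.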
A fast complex integer convolution using a hybrid transform
NASA Technical Reports Server (NTRS)
Reed, I. S.; K Truong, T.
1978-01-01
It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q-squared) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.
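The cyclic convolution being accelerated can be illustrated with an ordinary complex FFT standing in for the Galois-field transform of the paper; this is a sketch of the convolution-theorem principle only, not the hybrid Winograd/GF(q²) algorithm itself, and all names are illustrative:

```python
import numpy as np

def cyclic_convolve_direct(x, y):
    # Direct O(n^2) cyclic convolution: z[m] = sum over i of x[i] * y[(m-i) mod n].
    n = len(x)
    return [sum(x[i] * y[(m - i) % n] for i in range(n)) for m in range(n)]

def cyclic_convolve_fft(x, y):
    # Convolution theorem: cyclic convolution equals the inverse transform of
    # the pointwise product of the forward transforms, in O(n log n).
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(y))

x = [1 + 2j, 3 - 1j, 0 + 1j, 2 + 0j]
y = [2 - 1j, 1 + 1j, 0 + 2j, 1 - 1j]
print(np.allclose(cyclic_convolve_fft(x, y), cyclic_convolve_direct(x, y)))  # True
```

The motivation for a number-theoretic transform in the paper is exactness: integer transforms over a finite field avoid the floating-point rounding inherent in the complex FFT used here.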
Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network
1989-08-01
This work investigated the use of convolutional codes in Type II hybrid ARQ protocols, building on rate-compatible punctured convolutional codes [J. Hagenauer, Proc. Int. Conf. Commun., 21.4.1-21.4.5, 1987]. Performance is achieved by using a low-rate (r = 0.5), high-constraint-length (e.g., 32) punctured convolutional code; code puncturing provides for a variable-rate code.
2008-09-01
Convolutional codes are most commonly used along with block codes. They were introduced in 1955 by Elias [7]. Convolutional codes are characterized by the code rate r = k/n and the constraint length κ. Here, a convolutional code with r = 1/2 and κ = 3, namely [7 5], is used. (Figure 2: convolutional encoder block diagram for code rate r = 1/2 and κ = 3.)
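A rate-1/2, constraint-length-3 encoder with octal generators [7 5], as referenced above, can be sketched as follows (a minimal illustration; no tail/flush bits are appended to terminate the trellis):

```python
def conv_encode(bits, g=(0b111, 0b101), k=3):
    # Rate-1/2, constraint length 3 convolutional encoder with generators
    # [7, 5] (octal): each input bit is shifted into a 3-bit register, and
    # each generator taps the register into a modulo-2 adder.
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        for gen in g:
            out.append(bin(state & gen).count("1") % 2)
    return out

print(conv_encode([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit produces two output bits (hence r = 1/2), and the 4-state trellis of this κ = 3 code is small enough for optimal Viterbi decoding.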
Broadband gradient impedance matching using an acoustic metamaterial for ultrasonic transducers
NASA Astrophysics Data System (ADS)
Li, Zheng; Yang, Dan-Qing; Liu, Shi-Lei; Yu, Si-Yuan; Lu, Ming-Hui; Zhu, Jie; Zhang, Shan-Tao; Zhu, Ming-Wei; Guo, Xia-Sheng; Wu, Hao-Dong; Wang, Xin-Long; Chen, Yan-Feng
2017-02-01
High-quality broadband ultrasound transducers yield superior imaging performance in biomedical ultrasonography. However, a proper design that perfectly bridges the energy between the active piezoelectric material and the target medium over the operating spectrum is still lacking. Here, we demonstrate a new anisotropic cone-structured acoustic metamaterial matching layer that acts as an inhomogeneous material with gradient acoustic impedance along the ultrasound propagation direction. When sandwiched between the piezoelectric material unit and the target medium, the acoustic metamaterial matching layer provides a broadband window to support extraordinary transmission of ultrasound over a wide frequency range. We fabricated the matching layer by etching peeled silica optical fibre bundles with hydrofluoric acid solution. Experimental measurement of an ultrasound transducer equipped with this acoustic metamaterial matching layer shows that the corresponding -6 dB bandwidth is able to reach over 100%. This new material fully enables new high-end piezoelectric materials in the construction of high-performance ultrasound transducers and probes, leading to considerably improved resolutions in biomedical ultrasonography and compact harmonic imaging systems.
An evaluation of medical knowledge contained in Wikipedia and its use in the LOINC database.
Friedlin, Jeff; McDonald, Clement J
2010-01-01
The logical observation identifiers names and codes (LOINC) database contains 55 000 terms consisting of more atomic components called parts. LOINC carries more than 18 000 distinct parts. It is necessary to have definitions/descriptions for each of these parts to assist users in mapping local laboratory codes to LOINC. It is believed that much of this information can be obtained from the internet; the first effort was with Wikipedia. This project focused on 1705 laboratory analytes (the first part in the LOINC laboratory name). Of the 1705 parts queried, 1314 matching articles were found in Wikipedia. Of these, 1299 (98.9%) were perfect matches that exactly described the LOINC part, 15 (1.14%) were partial matches (the description in Wikipedia was related to the LOINC part, but did not describe it fully), and 102 (7.76%) were mismatches. The current releases of RELMA and LOINC include Wikipedia descriptions of LOINC parts obtained as a direct result of this project.
Designing Mixed Detergent Micelles for Uniform Neutron Contrast
Oliver, Ryan C.; Pingali, Sai Venkatesh; Urban, Volker S.
2017-09-29
Micelle-forming detergents provide an amphipathic environment that mimics lipid bilayers and are important tools used to solubilize and stabilize membrane proteins in solution for in vitro structural investigations. Small-angle neutron scattering (SANS) performed at the neutron contrast match point of detergent molecules allows observing the scattering signal from membrane proteins unobstructed by contributions from the detergent. However, we show here that even for a perfectly average-contrast matched detergent there arises significant core-shell scattering from the contrast difference between aliphatic detergent tails and hydrophilic head groups. This residual signal at the average detergent contrast match point interferes with interpreting structural data of membrane proteins. This complication is often made worse by the presence of excess empty (protein-free) micelles. Here, we present an approach for the rational design of mixed micelles containing a deuterated detergent analog, which eliminates neutron contrast between core and shell, and allows the micelle scattering to be fully contrast matched to unambiguously resolve membrane protein structure using solution SANS.
Protograph-Based Raptor-Like Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.
2014-01-01
Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible turbo (RCPT) codes did not outperform convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a low number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of states in the trellis.
Convolution of large 3D images on GPU and its decomposition
NASA Astrophysics Data System (ADS)
Karas, Pavel; Svoboda, David
2011-12-01
In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption, and also in terms of memory transfers between CPU and GPU, which have a significant influence on the overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
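The convolution-theorem approach on which the method rests can be sketched in NumPy; this is a CPU illustration of the principle only (zero-pad, multiply spectra, transform back), not the paper's CUDA decomposition, and the array sizes are arbitrary:

```python
import numpy as np

def fft_convolve(image, kernel):
    # Convolution theorem: zero-pad both arrays to the full output size,
    # multiply their spectra, and transform back.
    shape = [i + k - 1 for i, k in zip(image.shape, kernel.shape)]
    F = np.fft.rfftn(image, shape) * np.fft.rfftn(kernel, shape)
    return np.fft.irfftn(F, shape)

rng = np.random.default_rng(0)
img = rng.random((8, 8, 8))
ker = rng.random((3, 3, 3))

# Reference result: accumulate shifted, scaled copies of the image
# (the definition of full linear convolution).
direct = np.zeros([8 + 2] * 3)
for dz in range(3):
    for dy in range(3):
        for dx in range(3):
            direct[dz:dz + 8, dy:dy + 8, dx:dx + 8] += ker[dz, dy, dx] * img

print(np.allclose(fft_convolve(img, ker), direct))  # True
```

For images too large for one device, the decimation-in-frequency structure the paper exploits lets the transform itself be split across GPUs, which a monolithic `rfftn` call like the one above cannot do.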
NASA Astrophysics Data System (ADS)
Chen, Mingji; Wang, Changxian; Cheng, Xiaodong; Gong, Congcheng; Song, Weili; Yuan, Xujin; Fang, Daining
2018-04-01
The realization of an ideal invisible cloak implementing transformation optics is still missing. An impedance matching concept is incorporated into a transformation optics cloak to generate an impedance matching cloak (IMC) shell. In this work, it is proved that the impedance matching structure reduces the cloaking structure’s disturbance to a propagating electromagnetic field and improves its invisibility as measured by the scattering field intensity. Such a cylindrical IMC shell is designed and fabricated with the proposed rounded rectangular split-ring resonators (RR-SRRs), and experimental measurements show that the total scattering field of a perfect electric conductor (PEC) cylinder surrounded by an IMC shell is greatly improved compared to the bare PEC cylinder, showing wave-front ripple suppression and a considerable scattering-shrinking effect. The IMC shell backward scattering field is suppressed down to 7.29%, compared to the previous value of 86.7%, owing to its impedance matching character, and the overall scattering field intensity shrinks to 19.3% compared to the previously realized value of 56.4%. The sideward scattering field recorded in the experiment also shows a remarkable improvement compared to the PEC cylinder. The impedance matching concept might enlighten the realization of an ideal cloak and other novel electromagnetic cloaking and shielding structures.
Gyehee Lee; Liping A. Cai; Everette Mills; Joseph T. O' Leary
2002-01-01
The Internet plays a significant role in generating new business and facilitating customers' need for a better way to plan and book their trips. From a marketer's perspective, one of the seemingly "fatal attractions" of the Internet for DMOs is that it can be an extremely effective tool in terms of both cost effectiveness and market penetration compared...
Perception without self-matching in conditional tag based cooperation.
McAvity, David M; Bristow, Tristen; Bunker, Eric; Dreyer, Alex
2013-09-21
We consider a model for the evolution of cooperation in a population where individuals may have one of a number of different heritable and distinguishable markers or tags. Individuals interact with each of their neighbors on a square lattice, either cooperating by donating some benefit at a cost to themselves or defecting by doing nothing. The decision to cooperate or defect is contingent on each individual's perception of its interacting partner's tag. Unlike in other tag-based models, individuals do not compare their own tag to that of their interaction partner; that is, there is no self-matching. When perception is perfect, the cooperation rate is substantially higher than in the usual spatial prisoner's dilemma game when the cost of cooperation is high. The enhancement in cooperation is positively correlated with the number of different tags: the more diverse a population is, the more cooperative it becomes. When individuals start with an inability to perceive tags, the population evolves to a state where individuals gain at least partial perception. With some reproduction mechanisms perfect perception evolves, but with others the ability to perceive tags remains imperfect. We find that perception of tags evolves to lower levels when the cost of cooperation is higher. Copyright © 2013 Elsevier Ltd. All rights reserved.
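The donation-game interaction on the lattice can be sketched as follows. This is an illustrative simplification, not the authors' exact model: the cooperation rules, the perception-accuracy parameter, and all numeric values are assumptions, and selection/reproduction dynamics are omitted.

```python
import numpy as np

# Each individual carries a heritable tag and a rule listing which
# *perceived* partner tags it donates to; its own tag is never consulted
# (no self-matching). Perception errors replace the perceived tag with a
# random one.
rng = np.random.default_rng(1)
L, n_tags = 20, 4
b, c = 1.0, 0.6          # benefit to recipient, cost to donor (assumed)
p_correct = 1.0          # perception accuracy (1.0 = perfect perception)

tags = rng.integers(0, n_tags, size=(L, L))
# rule[i, j, t] is True -> individual (i, j) donates on perceiving tag t
rule = rng.random((L, L, n_tags)) < 0.5

def perceived(t):
    return t if rng.random() < p_correct else rng.integers(0, n_tags)

payoff = np.zeros((L, L))
donations = interactions = 0
for i in range(L):
    for j in range(L):
        for di, dj in ((0, 1), (1, 0)):      # right and down neighbours
            ni, nj = (i + di) % L, (j + dj) % L
            interactions += 1
            if rule[i, j, perceived(tags[ni, nj])]:   # focal donates?
                payoff[i, j] -= c; payoff[ni, nj] += b; donations += 1
            if rule[ni, nj, perceived(tags[i, j])]:   # neighbour donates?
                payoff[ni, nj] -= c; payoff[i, j] += b; donations += 1

coop_rate = donations / (2 * interactions)   # fraction of acts that donate
```

A full simulation would iterate this payoff step with a reproduction rule (e.g. imitate the best-scoring neighbour) and track how cooperation and perception co-evolve.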
NASA Astrophysics Data System (ADS)
Khan, Suhail; Khan, Muhammad Shoaib; Ali, Amjad
2018-04-01
In this paper, our aim is to study the (n + 2)-dimensional collapse of a spherically symmetric perfect-fluid spacetime in the context of f(R, T) gravity. The matching conditions are obtained by considering a non-static spherically symmetric (n + 2)-dimensional metric in the interior region and the (n + 2)-dimensional Schwarzschild metric in the exterior region of the star. To solve the field equations for these settings in f(R, T) gravity, we take the trace of the stress-energy tensor and the Ricci scalar to be constants. It is observed that two physical horizons, namely the cosmological and black hole horizons, appear as a consequence of this collapse. A singularity also forms after the birth of both horizons. It is further observed that the term f(R0, T0) slows down the collapsing process.
Evaluation of the Match External Load in Soccer: Methods Comparison.
Castagna, Carlo; Varley, Matthew; Póvoas, Susana C A; D'Ottavio, Stefano
2017-04-01
To test the interchangeability of 2 match-analysis approaches for external-load detection, considering arbitrarily selected speed and metabolic power (MP) thresholds, in male top-level soccer. Data analyses were performed considering match physical performance of 60 matches (1200 player cases) of randomly selected Spanish, German, and English first-division championship matches (2013-14 season). Match analysis was performed with a validated semiautomated multicamera system operating at 25 Hz. During a match, players covered 10,673 ± 348 m, of which 1778 ± 208 m and 2759 ± 241 m were performed at high intensity, as measured using speed (≥16 km/h, HI) and metabolic power (≥20 W/kg, MPHI) notations. High-intensity notations were nearly perfectly associated (r = .93, P < .0001). A huge method bias (980.63 ± 87.82 m, d = 11.67) was found when considering MPHI and HI. Very large correlations were found between match total distance covered and MPHI (r = .84, P < .0001) and HI (r = .74, P < .0001). Player high-intensity decelerations (≥-2 m/s²) were very largely associated with MPHI (r = .73, P < .0001). The speed and MP methods are highly interchangeable at the relative level (magnitude rank) but not at the absolute level (measure magnitude). The 2 physical match-analysis methods can be independently used to track match external load in elite-level players. However, match-analyst decisions must be based on use of a single method to avoid bias in external-load determination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barraclough, Brendan; Lebron, Sharon; Li, Jonathan G.
2016-05-15
Purpose: To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). Methods: A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit "real" ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic functions) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry-dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. Accuracy of the beam models was evaluated by assessing the 20%-80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. Results: The convolution-based approach was found to be effective for all three ionization chambers, with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry-dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry-independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm.
Conclusions: Although all three DRFs were found adequate to represent the response of the studied ionization chambers, the Gaussian function was favored due to its superior overall performance. The geometry dependence of the DRFs can be significant for clinical applications involving small fields such as stereotactic radiotherapy.
Barraclough, Brendan; Li, Jonathan G; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2016-05-01
To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit "real" ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic functions) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry-dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. Accuracy of the beam models was evaluated by assessing the 20%-80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. The convolution-based approach was found to be effective for all three ionization chambers, with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry-dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry-independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm.
Although all three DRFs were found adequate to represent the response of the studied ionization chambers, the Gaussian function was favored due to its superior overall performance. The geometry dependence of the DRFs can be significant for clinical applications involving small fields such as stereotactic radiotherapy.
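The iterative-convolution idea behind the method above can be sketched in a simplified one-dimensional form: blur an idealized diode-measured profile with a Gaussian DRF to mimic a chamber measurement, then recover the DRF width by minimizing the profile mismatch. The profile shape and all numeric values are illustrative, not taken from the study.

```python
import numpy as np

def gaussian_kernel(sigma_mm, step_mm, half_width_mm=15.0):
    """Normalised Gaussian DRF sampled on the profile grid."""
    x = np.arange(-half_width_mm, half_width_mm + step_mm, step_mm)
    k = np.exp(-0.5 * (x / sigma_mm) ** 2)
    return k / k.sum()

step = 0.1                                   # mm per sample (assumed)
x = np.arange(-30, 30 + step, step)
# Idealised "true" (diode-like) profile: flat field with smooth penumbra
diode_profile = 0.5 * (1 + np.tanh(-(np.abs(x) - 20.0) / 1.5))

# Simulated chamber measurement: the true profile blurred by a DRF of
# "unknown" width (sigma = 2.0 mm, chosen here for illustration)
chamber_profile = np.convolve(diode_profile, gaussian_kernel(2.0, step),
                              mode="same")

# Recover the DRF width by minimising the mismatch over a grid of sigmas
sigmas = np.arange(0.5, 4.01, 0.05)
errors = [np.sum((np.convolve(diode_profile, gaussian_kernel(s, step),
                              mode="same") - chamber_profile) ** 2)
          for s in sigmas]
best_sigma = sigmas[int(np.argmin(errors))]
```

In the actual method the convolution sits inside the beam-model optimization loop of the TPS rather than a standalone grid search, but the matching principle is the same.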
Demonstration of Detection and Ranging Using Solvable Chaos
NASA Technical Reports Server (NTRS)
Corron, Ned J.; Stahl, Mark T.; Blakely, Jonathan N.
2013-01-01
Acoustic experiments demonstrate a novel approach to ranging and detection that exploits the properties of a solvable chaotic oscillator. This nonlinear oscillator includes an ordinary differential equation and a discrete switching condition. The chaotic waveform generated by this hybrid system is used as the transmitted waveform. The oscillator admits an exact analytic solution that can be written as the linear convolution of binary symbols and a single basis function. This linear representation enables coherent reception using a simple analog matched filter and without need for digital sampling or signal processing. An audio frequency implementation of the transmitter and receiver is described. Successful acoustic ranging measurements are presented to demonstrate the viability of the approach.
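The linear-convolution representation and matched-filter reception described above can be sketched numerically. The basis function below is an invented decaying oscillation standing in for the oscillator's actual basis pulse; the symbol rate and all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 40                       # samples per symbol period (assumed)
n_sym = 32
symbols = rng.choice([-1.0, 1.0], size=n_sym)   # binary symbol sequence

# Illustrative basis function: a decaying oscillation truncated to three
# symbol periods (a stand-in for the solvable oscillator's basis pulse)
t = np.arange(3 * fs) / fs
basis = np.exp(-1.2 * t) * np.cos(2 * np.pi * t)

# Transmitted waveform: linear convolution of the symbol impulse train
# with the basis function, as in the oscillator's analytic solution
impulses = np.zeros(n_sym * fs)
impulses[::fs] = symbols
tx = np.convolve(impulses, basis)

# Matched-filter reception: correlate with the time-reversed basis and
# take the sign at the symbol instants -- no digital signal processing
# beyond this single filter is required
mf = np.convolve(tx, basis[::-1])
sample_points = np.arange(n_sym) * fs + len(basis) - 1
decoded = np.sign(mf[sample_points])
```

With this basis the correlation peak dominates the inter-symbol interference, so the sign at each sampling instant recovers the transmitted symbol.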
Learning deep features with adaptive triplet loss for person reidentification
NASA Astrophysics Data System (ADS)
Li, Zhiqiang; Sang, Nong; Chen, Kezhou; Gao, Changxin; Wang, Ruolin
2018-03-01
Person reidentification (re-id) aims to match a specified person across non-overlapping cameras, which remains a very challenging problem. While previous methods mostly focus on feature extraction or metric learning, this paper attempts to jointly learn both the global full-body and the local body-part features of the input persons with a multichannel convolutional neural network (CNN) model, trained by an adaptive triplet loss function that serves to minimize the distance between samples of the same person and maximize the distance between different persons. The experimental results show that our approach achieves very promising results on the large-scale Market-1501 and DukeMTMC-reID datasets.
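The standard (non-adaptive) triplet loss underlying this training objective can be sketched as follows; the paper's adaptive variant adjusts the margin during training, which is not reproduced here, and the embeddings below are random stand-ins for CNN outputs.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss on embedding batches:
    mean over the batch of max(0, d(a, p) - d(a, n) + margin),
    with d the squared Euclidean distance. Margin value is assumed."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(0.0, d_pos - d_neg + margin))

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(3)
a = normalize(rng.normal(size=(8, 128)))             # anchor embeddings
p = normalize(a + 0.05 * rng.normal(size=(8, 128)))  # same person, perturbed
n = normalize(rng.normal(size=(8, 128)))             # different person

loss_easy = triplet_loss(a, p, n)   # positives close, negatives far
loss_hard = triplet_loss(a, n, p)   # roles swapped: loss should increase
```

Minimizing this loss pulls same-person embeddings together and pushes different-person embeddings apart, which is exactly the geometry re-id matching relies on.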
Development and application of deep convolutional neural network in target detection
NASA Astrophysics Data System (ADS)
Jiang, Xiaowei; Wang, Chunping; Fu, Qiang
2018-04-01
With the development of big data and algorithms, deep convolutional neural networks, with their many hidden layers, have more powerful feature learning and feature expression abilities than traditional machine learning methods, enabling artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes and reflects on some open problems in current research, and finally discusses prospects for the future development of deep convolutional neural networks.
The perfect family: decision making in biparental care.
Akçay, Erol; Roughgarden, Joan
2009-10-13
Previous theoretical work on parental decisions in biparental care has emphasized the role of the conflict between the evolutionary interests of parents in these decisions. A prominent prediction from this work is that parents should compensate for decreases in each other's effort, but only partially so. However, experimental tests that manipulate parents and measure their responses fail to confirm this prediction. At the same time, the process of parental decision making has remained unexplored theoretically. We develop a model to address the discrepancy between experiments and the theoretical prediction, and explore how assuming different decision making processes changes the prediction from the theory. We assume that parents make decisions in behavioral time. They have a fixed time budget, and allocate it between two parental tasks: provisioning the offspring and defending the nest. The proximate determinants of the allocation decisions are the parents' behavioral objectives. We assume both parents aim to maximize the offspring production from the nest. Experimental manipulations change the shape of the nest production function. We consider two different scenarios for how parents make decisions: one where parents communicate with each other and act together (the perfect family), and one where they do not communicate, and act independently (the almost perfect family). The perfect family model is able to generate all the types of responses seen in experimental studies. The kind of response predicted depends on the nest production function, i.e. how parents' allocations affect offspring production, and the type of experimental manipulation. In particular, we find that complementarity of parents' allocations promotes matching responses. In contrast, the relative responses do not depend on the type of manipulation in the almost perfect family model. 
These results highlight the importance of the interaction between nest production function and how parents make decisions, factors that have largely been overlooked in previous models.
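The "perfect family" scenario, in which communicating parents jointly maximize offspring production under fixed time budgets, can be sketched with a hypothesized production function. The Cobb-Douglas form and all parameter values below are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Two parents each split a unit time budget between provisioning (p) and
# nest defence (1 - p). Communicating perfectly, they jointly choose
# allocations to maximise an assumed Cobb-Douglas production function.
alpha = 0.7                      # weight of provisioning (hypothetical)

def production(p1, p2):
    provisioning = p1 + p2
    defence = (1 - p1) + (1 - p2)
    return (provisioning ** alpha) * (defence ** (1 - alpha))

grid = np.linspace(0, 1, 101)
P1, P2 = np.meshgrid(grid, grid, indexing="ij")
F = production(P1, P2)
i, j = np.unravel_index(np.argmax(F), F.shape)
best = (grid[i], grid[j])
total_provisioning = best[0] + best[1]   # analytically: 2 * alpha
```

With this form only the total allocation matters, so the joint optimum puts a fraction alpha of the combined budget into provisioning; an experimental manipulation reshaping the production function shifts this optimum, which is how the model generates compensating or matching responses.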
Lupia, Rodgers; Wabuyia, Peter B; Otiato, Peter; Fang, Chi-Tai; Tsai, Feng-Jen
2017-12-01
This study aimed to evaluate the association between highly active antiretroviral therapy (HAART) adherence and development of Kaposi's sarcoma (KS) in human immunodeficiency virus (HIV)/AIDS patients. We conducted a retrospective nested case-control study of 165 participants (33 cases and 132 controls) receiving HAART care at Maseno Hospital, Kenya, from January 2005 to October 2013. Cases were HIV-positive adults with KS, who were matched with controls in a ratio of 1:4 based on age (±5 years of each case), sex, and KS diagnosis date. Perfect adherence to HAART was assessed at every clinic visit by patients' self-reporting and pill counts. Chi-square tests were performed to compare socioeconomic and clinical statuses between cases and controls. A conditional logistic regression was used to assess the effects of perfect adherence to HAART, the latest CD4 count, education level, distance to health-care facility, initial World Health Organization stage, and number of regular sexual partners on the development of KS. Only 63.6% of participants reported perfect adherence, and the control group had a significantly higher percentage of perfect adherence (75.0%) than did cases (18.2%). After adjustment for potential imbalances in the baseline and clinical characteristics, patients with imperfect HAART adherence had a 20-fold greater risk of developing KS than patients with perfect HAART adherence (hazard ratio: 21.0; 95% confidence interval: 4.2-105.1). Patients with a low latest CD4 count (≤350 cells/mm³) had a seven-fold greater risk of developing KS than did their counterparts (HR: 7.1; 95% CI: 1.4-36.2). Imperfect HAART adherence and a low latest CD4 count are significantly associated with KS development. Copyright © 2015. Published by Elsevier B.V.
A spectral nudging method for the ACCESS1.3 atmospheric model
NASA Astrophysics Data System (ADS)
Uhe, P.; Thatcher, M.
2015-06-01
A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows for flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, improving the time taken by the nudging scheme by 10-30 times compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.
A spectral nudging method for the ACCESS1.3 atmospheric model
NASA Astrophysics Data System (ADS)
Uhe, P.; Thatcher, M.
2014-10-01
A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, improving the time taken by the nudging scheme by 10 to 30 times compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.
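The speedup from replacing a two-dimensional convolution with successive one-dimensional passes rests on kernel separability: for a separable (e.g. Gaussian) kernel the two forms agree exactly. A sketch on a periodic grid; the field, filter width, and grid size are illustrative, not the ACCESS configuration.

```python
import numpy as np

def circ_conv1d(x, k):
    """1-D circular convolution along the last axis, via the FFT."""
    n = x.shape[-1]
    return np.real(np.fft.ifft(np.fft.fft(x, axis=-1) * np.fft.fft(k, n),
                               axis=-1))

rng = np.random.default_rng(4)
field = rng.random((32, 32))          # a model field (e.g. wind component)

# Separable low-pass filter: a 2-D Gaussian is the outer product of two
# 1-D Gaussians, so a row pass followed by a column pass reproduces it
x = np.arange(32)
x = np.minimum(x, 32 - x)             # circular distance from origin
g1d = np.exp(-0.5 * (x / 3.0) ** 2)
g1d /= g1d.sum()
g2d = np.outer(g1d, g1d)

# Two 1-D passes: filter rows, then columns
rows = circ_conv1d(field, g1d)
two_pass = circ_conv1d(rows.T, g1d).T

# Reference: full 2-D convolution with the outer-product kernel
ref = np.real(np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(g2d)))
max_err = np.max(np.abs(two_pass - ref))
```

Each 1-D pass costs O(N) per grid line instead of O(N^2) for a full 2-D stencil, which is the source of the 10-30x saving reported above.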
Comparing Phylogenetic Trees by Matching Nodes Using the Transfer Distance Between Partitions
Giaro, Krzysztof
2017-01-01
Ability to quantify dissimilarity of different phylogenetic trees describing the relationship between the same group of taxa is required in various types of phylogenetic studies. For example, such metrics are used to assess the quality of phylogeny construction methods, to define optimization criteria in supertree building algorithms, or to find horizontal gene transfer (HGT) events. Among the set of metrics described so far in the literature, the most commonly used seems to be the Robinson–Foulds distance. In this article, we define a new metric for rooted trees—the Matching Pair (MP) distance. The MP metric uses the concept of the minimum-weight perfect matching in a complete bipartite graph constructed from partitions of all pairs of leaves of the compared phylogenetic trees. We analyze the properties of the MP metric and present computational experiments showing its potential applicability in tasks related to finding the HGT events. PMID:28177699
Monocular correspondence detection for symmetrical objects by template matching
NASA Astrophysics Data System (ADS)
Vilmar, G.; Besslich, Philipp W., Jr.
1990-09-01
We describe a possibility to reconstruct 3-D information from a single view of a 3-D bilaterally symmetric object. The symmetry assumption allows us to obtain a "second view" from a different viewpoint by a simple reflection of the monocular image. We therefore have to solve the correspondence problem in a special case where known feature-based or area-based binocular approaches fail. In principle, our approach is based on frequency-domain template matching of the features on the epipolar lines. During a training period our system "learns" the assignment of correspondence models to image features. The object shape is interpolated when no template matches the image features. This is an important advantage of the methodology, because no "real world" image holds the symmetry assumption perfectly. To simplify the training process we used single views of human faces (e.g., passport photos), but our system is trainable on any other kind of objects.
Comparing Phylogenetic Trees by Matching Nodes Using the Transfer Distance Between Partitions.
Bogdanowicz, Damian; Giaro, Krzysztof
2017-05-01
Ability to quantify dissimilarity of different phylogenetic trees describing the relationship between the same group of taxa is required in various types of phylogenetic studies. For example, such metrics are used to assess the quality of phylogeny construction methods, to define optimization criteria in supertree building algorithms, or to find horizontal gene transfer (HGT) events. Among the set of metrics described so far in the literature, the most commonly used seems to be the Robinson-Foulds distance. In this article, we define a new metric for rooted trees-the Matching Pair (MP) distance. The MP metric uses the concept of the minimum-weight perfect matching in a complete bipartite graph constructed from partitions of all pairs of leaves of the compared phylogenetic trees. We analyze the properties of the MP metric and present computational experiments showing its potential applicability in tasks related to finding the HGT events.
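The minimum-weight perfect matching at the core of the MP metric can be sketched on a toy cost matrix (here random-looking values standing in for the transfer distances between leaf-pair partitions of the two trees). Brute force over permutations is used for clarity on tiny inputs; practical implementations use the Hungarian algorithm in O(n^3).

```python
import numpy as np
from itertools import permutations

def min_weight_perfect_matching(cost):
    """Exhaustive minimum-weight perfect matching in a complete
    bipartite graph given its n x n cost matrix. Returns the column
    assigned to each row and the total weight."""
    n = cost.shape[0]
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i, perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_perm, best_cost

# Illustrative 3x3 cost matrix (values invented)
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
perm, total = min_weight_perfect_matching(cost)
```

In the MP distance the total weight of the optimal matching itself is the tree-to-tree dissimilarity.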
Feldman, Justin M; Gruskin, Sofia; Coull, Brent A; Krieger, Nancy
2017-10-01
To assess the validity of demographic data reported in news media-based data sets for persons killed by police in Massachusetts (2004-2016) and to evaluate misclassification of these deaths in vital statistics mortality data. We identified 84 deaths resulting from police intervention in 4 news media-based data sources (WGBH News, Fatal Encounters, The Guardian, and The Washington Post) and, via record linkage, conducted matched-pair analyses with the Massachusetts mortality data. Compared with death certificates, there was near-perfect correlation for age in all sources (Pearson r > 0.99) and perfect concordance for gender. Agreement for race/ethnicity ranged from perfect (The Counted and The Washington Post) to high (Fatal Encounters Cohen's κ = 0.92). Among the 78 decedents for whom finalized International Classification of Diseases, 10th Revision (ICD-10), codes were available, 59 (75.6%) were properly classified as "deaths due to legal intervention." In Massachusetts, the 4 media-based sources on persons killed by police provide valid demographic data. Misclassification of deaths due to legal intervention in the mortality data does, however, remain a problem. Replication of the study in other states and nationally is warranted.
Noronha, Jorge; Denicol, Gabriel S.
2015-12-30
In this paper we obtain an analytical solution of the relativistic Boltzmann equation under the relaxation time approximation that describes the out-of-equilibrium dynamics of a radially expanding massless gas. This solution is found by mapping this expanding system in flat spacetime to a static flow in the curved spacetime AdS2 ⊗ S2. We further derive explicit analytic expressions for the momentum dependence of the single-particle distribution function as well as for the spatial dependence of its moments. We find that this dissipative system has the ability to flow as a perfect fluid even though its entropy density does not match the equilibrium form. The nonequilibrium contribution to the entropy density is shown to be due to higher-order scalar moments (which possess no hydrodynamical interpretation) of the Boltzmann equation that can remain out of equilibrium but do not couple to the energy-momentum tensor of the system. Furthermore, in this system the slowly moving hydrodynamic degrees of freedom can exhibit true perfect fluidity while being totally decoupled from the fast moving, nonhydrodynamical microscopic degrees of freedom that lead to entropy production.
One-way quasiplanar terahertz absorbers using nonstructured polar dielectric layers
NASA Astrophysics Data System (ADS)
Rodríguez-Ulibarri, P.; Beruete, M.; Serebryannikov, A. E.
2017-10-01
A concept of quasiplanar one-way transparent terahertz absorbers made of linear isotropic materials is presented. The resulting structure consists of a homogeneous absorbing layer of polar dielectric, GaAs, a dispersion-free substrate, and an ultrathin frequency-selective reflector. It is demonstrated that perfect absorption can be obtained for forward illumination, along with total reflection at backward illumination and transparency windows in the adjacent bands. The design is particularized for the polaritonic gap range where permittivity of GaAs varies in a wide range and includes epsilon-near-zero and transparency regimes. The underlying physics can be explained with the aid of a unified equivalent-circuit (EC) analytical model. Perfect matching of input impedance in forward operation and, simultaneously, strong mismatch in the backward case are the universal criteria of one-way absorption. It is shown that perfect one-way absorption can be achieved at rather arbitrary permittivity values, provided these criteria are fulfilled. The EC results are in good agreement with full-wave simulations in a wide range of material and geometrical parameters. The resulting one-way absorbers are very compact and geometrically simple, and enable transparency in the neighboring frequency ranges and, hence, multifunctionality that utilizes both absorption- and transmission-related regimes.
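The matching criterion above can be illustrated with an elementary transmission-line calculation. This is a drastic simplification of the paper's equivalent-circuit model: with transmission blocked by the reflector, the absorbance reduces to one minus the reflected power fraction, and the mismatched input impedance value below is invented.

```python
# One-port view of the absorber: an input impedance Zin seen from free
# space reflects |Gamma|^2 of the incident power, with
# Gamma = (Zin - Z0) / (Zin + Z0). With transmission blocked, the
# absorbance is A = 1 - |Gamma|^2.
Z0 = 376.73                      # free-space wave impedance, ohms

def absorbance(Zin):
    gamma = (Zin - Z0) / (Zin + Z0)
    return 1.0 - abs(gamma) ** 2

A_matched = absorbance(376.73)       # forward illumination: matched input
A_mismatched = absorbance(10 + 800j) # backward: strong (invented) mismatch
```

Matching Zin to Z0 in forward operation while leaving it strongly mismatched backward is exactly the universal one-way absorption criterion stated in the abstract.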
Cross-Layer Design for Robust and Scalable Video Transmission in Dynamic Wireless Environment
2011-02-01
code rate convolutional codes or prioritized Rate-Compatible Punctured... "New rate-compatible punctured convolutional codes for Viterbi decoding," IEEE Trans. Communications, Volume 42, Issue 12, pp. 3073-3079, Dec... Quality of service; RCPC: Rate-compatible and punctured convolutional codes; SNR: Signal to noise
A Video Transmission System for Severely Degraded Channels
2006-07-01
rate-compatible punctured convolutional codes (RCPC). By separating the SPIHT bitstream... June 2000. [170] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Transactions on... Farvardin [160] used rate-compatible convolutional codes. They noticed that for some transmission rates, one of their EEP schemes, which may
There is no MacWilliams identity for convolutional codes. [transmission gain comparison]
NASA Technical Reports Server (NTRS)
Shearer, J. B.; Mceliece, R. J.
1977-01-01
An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network
Qu, Xiaobo; He, Yifan
2018-01-01
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods. PMID:29509666
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.
Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di
2018-03-06
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods.
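The maximum competitive strategy across scales described above can be sketched in one dimension: filter the input with kernels of several sizes and keep, at each position, the strongest response. Kernel sizes and weights below are illustrative (random rather than learned), and the sliding operation is the cross-correlation used in CNN layers.

```python
import numpy as np

def conv1d_same(x, k):
    """'Same'-size sliding dot product (CNN-style cross-correlation)
    with zero padding; kernel length must be odd."""
    pad = len(k) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(k)], k) for i in range(len(x))])

rng = np.random.default_rng(5)
signal = rng.random(64)

# Competing filters of three scales; in the network these weights are
# learned, here they are random for illustration
kernels = [rng.normal(size=s) for s in (3, 5, 7)]
responses = np.stack([conv1d_same(signal, k) for k in kernels])

competitive = responses.max(axis=0)       # winner across scales, per position
winner_scale = responses.argmax(axis=0)   # which scale won where
```

The max across branches lets the network pick the receptive-field size that best explains each image region, which is the adaptive multi-context behaviour the abstract describes.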
Two-beam-excited conical emission.
Kauranen, M; Maki, J J; Gaeta, A L; Boyd, R W
1991-06-15
We describe a conical emission process that occurs when two beams of near-resonant light intersect as they pass through sodium vapor. The light is emitted on the surface of a circular cone that is centered on the bisector of the two applied beams and has an angular extent equal to the crossing angle of the two applied beams. We ascribe the origin of this effect to a perfectly phase-matched four-wave mixing process.
Zhang, Xi; Zhang, Jing; Wu, Dongzhi; Liu, Zhijing; Cai, Shuxian; Chen, Mei; Zhao, Yanping; Li, Chunyan; Yang, Huanghao; Chen, Jinghua
2014-12-07
Locked nucleic acid (LNA) is applied in a toehold-mediated strand displacement reaction (TMSDR) to develop a junction-probe electrochemiluminescence (ECL) biosensor for single-nucleotide polymorphism (SNP) detection in the BRCA1 gene related to breast cancer. A more than 65-fold signal difference can be observed between the perfectly matched target sequence and a single-base-mismatched sequence under the same conditions, indicating the good selectivity of the ECL biosensor.
GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation.
Zhou, Bo; Yu, Cedric X; Chen, Danny Z; Hu, X Sharon
2010-11-01
Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans as emerging advanced radiation therapy technologies are exerting ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation so as to make the method practical for day-to-day usage. Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm is modified to match the GPU hardware architecture. The performance of the authors' GPU-based implementation on an Nvidia GTX260 card is compared to a multithreaded software implementation on a quad-core system. A speedup in the range of 6.7-11.4x is observed for the clinical cases used. The less than 2% statistical fluctuation also indicates that the accuracy of the authors' GPU-based implementation is in good agreement with the results from the quad-core CPU implementation. This work shows that GPU is a feasible and cost-efficient solution compared to other alternatives such as using cluster machines or field-programmable gate arrays for satisfying the increasing demands on computation speed and accuracy of dose calculation. But there are also inherent limitations of using GPU for accelerating MC-type applications, which are also analyzed in detail in this article.
Chromatin accessibility prediction via a hybrid deep convolutional neural network.
Liu, Qiao; Xia, Fei; Yin, Qijin; Jiang, Rui
2018-03-01
A majority of known genetic variants associated with human-inherited diseases lie in non-coding regions that lack adequate interpretation, making it indispensable to systematically discover functional sites at the whole genome level and precisely decipher their implications in a comprehensive manner. Although computational approaches have been complementing high-throughput biological experiments towards the annotation of the human genome, it still remains a big challenge to accurately annotate regulatory elements in the context of a specific cell type via automatic learning of the DNA sequence code from large-scale sequencing data. Indeed, the development of an accurate and interpretable model to learn the DNA sequence signature and further enable the identification of causative genetic variants has become essential in both genomic and genetic studies. We proposed Deopen, a hybrid framework mainly based on a deep convolutional neural network, to automatically learn the regulatory code of DNA sequences and predict chromatin accessibility. In a series of comparison with existing methods, we show the superior performance of our model in not only the classification of accessible regions against background sequences sampled at random, but also the regression of DNase-seq signals. Besides, we further visualize the convolutional kernels and show the match of identified sequence signatures and known motifs. We finally demonstrate the sensitivity of our model in finding causative noncoding variants in the analysis of a breast cancer dataset. We expect to see wide applications of Deopen with either public or in-house chromatin accessibility data in the annotation of the human genome and the identification of non-coding variants associated with diseases. Deopen is freely available at https://github.com/kimmo1019/Deopen. ruijiang@tsinghua.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. 
Deep architecture neural network-based real-time image processing for image-guided radiotherapy.
Mori, Shinichiro
2017-08-01
To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for the rCAE. Ground-truth images were generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method to the input images. Network models were trained to produce, from unprocessed input images, output images whose quality approached that of the ground-truth images. For the image-denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. The suggested networks thus achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
A Frequency-Domain Adaptive Matched Filter for Active Sonar Detection.
Zhao, Zhishan; Zhao, Anbang; Hui, Juan; Hou, Baochun; Sotudeh, Reza; Niu, Fang
2017-07-04
The most classical detector for active sonar and radar is the matched filter (MF), which is the optimal processor under ideal conditions. Aiming at the problem of active sonar detection, we propose a frequency-domain adaptive matched filter (FDAMF) that uses a frequency-domain adaptive line enhancer (ALE). The FDAMF is an improved MF. In the simulations in this paper, the signal-to-noise ratio (SNR) gain of the FDAMF is about 18.6 dB higher than that of the classical MF when the input SNR is -10 dB. To improve the performance of the FDAMF at low input SNR, we propose a pre-processing method called frequency-domain time-reversal convolution and interference suppression (TRC-IS). Compared with the classical MF, the FDAMF combined with the TRC-IS method obtains a higher SNR gain, a lower detection threshold, and a better receiver operating characteristic (ROC) in the simulations in this paper. The simulation results show that the FDAMF has higher processing gain and better detection performance than the classical MF under ideal conditions. The experimental results indicate that the FDAMF does improve the performance of the MF and can adapt to actual interference to some extent. In addition, the TRC-IS preprocessing method works well in an actual noisy ocean environment.
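As a rough illustration of the classical frequency-domain matched filter that the FDAMF builds on (not the FDAMF or TRC-IS themselves), the received signal can be correlated against the transmitted replica by multiplying spectra; the pulse shape, delay, and noise level below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Replica (transmitted pulse) and a noisy echo delayed by 40 samples.
n, delay = 256, 40
t = np.arange(64)
replica = np.sin(2 * np.pi * 0.1 * t) * np.hanning(64)
echo = np.zeros(n)
echo[delay:delay + 64] = replica
received = echo + 0.3 * rng.standard_normal(n)

# Frequency-domain matched filter: multiply by the conjugate
# spectrum of the replica (equivalent to time-domain correlation).
R = np.fft.rfft(replica, n)
X = np.fft.rfft(received, n)
mf_out = np.fft.irfft(X * np.conj(R), n)

print(int(np.argmax(np.abs(mf_out))))  # peak lands at the echo delay
```

The adaptive variants in the paper modify the spectrum before this multiplication to suppress interference, but the conjugate-multiply core stays the same.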
Coherent perfect absorption in a quantum nonlinear regime of cavity quantum electrodynamics
NASA Astrophysics Data System (ADS)
Wei, Yang-hua; Gu, Wen-ju; Yang, Guoqing; Zhu, Yifu; Li, Gao-xiang
2018-05-01
Coherent perfect absorption (CPA) is investigated in the quantum nonlinear regime of cavity quantum electrodynamics (CQED), in which a single two-level atom couples to a single-mode cavity weakly driven by two identical laser fields. In the strong-coupling regime and due to the photon blockade effect, the weakly driven CQED system can be described as a quantum system with three polariton states. CPA is achieved at a critical input field strength when the frequency of the input fields matches the polariton transition frequency. In the quantum nonlinear regime, the incoherent dissipation processes such as atomic and photon decays place a lower bound for the purity of the intracavity quantum field. Our results show that under the CPA condition, the intracavity field always exhibits the quadrature squeezing property manifested by the quantum nonlinearity, and the outgoing photon flux displays the super-Poissonian distribution.
Subwavelength total acoustic absorption with degenerate resonators
NASA Astrophysics Data System (ADS)
Yang, Min; Meng, Chong; Fu, Caixing; Li, Yong; Yang, Zhiyu; Sheng, Ping
2015-09-01
We report the experimental realization of perfect sound absorption by sub-wavelength monopole and dipole resonators that exhibit degenerate resonant frequencies. This is achieved through the destructive interference of the two resonators' transmission responses, while the matching of their averaged impedances to that of air implies no backscattering, thereby leading to total absorption. Two examples, both using decorated membrane resonators (DMRs) as the basic units, are presented. The first is a flat panel comprising a DMR and a pair of coupled DMRs, while the second is a ventilated short tube containing a DMR in conjunction with a sidewall DMR backed by a cavity. In both examples, near-perfect absorption, up to 99.7%, has been observed at airborne wavelengths up to 1.2 m, at least an order of magnitude larger than the size of the composite absorber. Excellent agreement between theory and experiment is obtained.
Ultrathin microwave metamaterial absorber utilizing embedded resistors
NASA Astrophysics Data System (ADS)
Kim, Young Ju; Hwang, Ji Sub; Yoo, Young Joon; Khuyen, Bui Xuan; Rhee, Joo Yull; Chen, Xianfeng; Lee, YoungPak
2017-10-01
We numerically and experimentally studied an ultrathin, broadband perfect absorber in which resistors embedded in the metamaterial structure lower the Q-factor and thereby enhance the bandwidth, while patches of different sizes provide multiple resonances; the structure is also easy to fabricate. We analyze the absorption mechanism in terms of impedance matching with free space and through the distribution of surface current at each resonance frequency. The magnetic field induced by the antiparallel surface currents forms strongly in the direction opposite to the incident electromagnetic wave, canceling the incident wave and leading to perfect absorption. The measured absorption was higher than 97% over 0.88-3.15 GHz, in good agreement with simulation. The proposed structure can be applied to future electronic devices, for example, advanced noise-suppression sheets in the microwave regime.
2011-05-01
rate convolutional codes or the prioritized Rate-Compatible Punctured ... Quality of service; RCPC: Rate-compatible and punctured convolutional codes; SNR: Signal-to-noise ratio; SSIM ... Convolutional (RCPC) codes. The RCPC codes achieve UEP by puncturing off different amounts of coded bits of the parent code. The
Convolution Operation of Optical Information via Quantum Storage
NASA Astrophysics Data System (ADS)
Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan
2017-06-01
We propose a novel method to achieve optical convolution of two input images via quantum storage based on the electromagnetically induced transparency (EIT) effect. By placing an EIT medium in the confocal Fourier plane of the 4f-imaging system, the optical convolution of the two input images can be achieved in the image plane.
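The optical scheme relies on the convolution theorem: a lens Fourier-transforms each input, the spectra are multiplied in the confocal plane, and a second transform yields the convolution. A numerical analogue of that theorem (a sketch of the underlying mathematics, not a model of the EIT storage itself):

```python
import numpy as np

# Two toy "input images".
rng = np.random.default_rng(1)
img_a = rng.random((8, 8))
img_b = rng.random((8, 8))

# Fourier route: transform, multiply spectra, transform back
# (this is what the 4f system does optically).
conv_fourier = np.fft.ifft2(np.fft.fft2(img_a) * np.fft.fft2(img_b)).real

# Direct (circular) convolution for comparison.
direct = np.zeros((8, 8))
for u in range(8):
    for v in range(8):
        for x in range(8):
            for y in range(8):
                direct[u, v] += img_a[x, y] * img_b[(u - x) % 8, (v - y) % 8]

assert np.allclose(conv_fourier, direct)
```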
NASA Astrophysics Data System (ADS)
Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
We address an estimation method of isometric muscle tension of fingers, as fundamental research for a neural signal-based prosthesis of fingers. We utilize needle electromyogram (EMG) signals, which have approximately equivalent information to peripheral neural signals. The estimating algorithm comprised two convolution operations. The first convolution is between normal distribution and a spike array, which is detected by needle EMG signals. The convolution estimates the probability density of spike-invoking time in the muscle. In this convolution, we hypothesize that each motor unit in a muscle activates spikes independently based on a same probability density function. The second convolution is between the result of the previous convolution and isometric twitch, viz., the impulse response of the motor unit. The result of the calculation is the sum of all estimated tensions of whole muscle fibers, i.e., muscle tension. We confirmed that there is good correlation between the estimated tension of the muscle and the actual tension, with >0.9 correlation coefficients at 59%, and >0.8 at 89% of all trials.
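The two-stage estimator described above can be sketched numerically; the sampling rate, Gaussian width, and twitch time constant below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def estimate_tension(spike_times, fs=1000.0, duration=1.0,
                     sigma=0.01, twitch_tau=0.05):
    """Two-stage convolution sketch: spike array -> Gaussian
    (spike-invoking probability density) -> twitch impulse
    response (estimated muscle tension)."""
    n = int(duration * fs)
    spikes = np.zeros(n)
    spikes[(np.asarray(spike_times) * fs).astype(int)] = 1.0

    # 1st convolution: normal distribution with the spike array.
    t = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
    gauss = np.exp(-t**2 / (2 * sigma**2))
    gauss /= gauss.sum()
    density = np.convolve(spikes, gauss, mode="same")

    # 2nd convolution: density with the isometric twitch,
    # i.e. the impulse response of a motor unit.
    tt = np.arange(0, 5 * twitch_tau, 1 / fs)
    twitch = (tt / twitch_tau) * np.exp(1 - tt / twitch_tau)
    return np.convolve(density, twitch)[:n]

tension = estimate_tension([0.1, 0.12, 0.15, 0.4])  # spike times in seconds
assert tension.shape == (1000,) and tension.max() > 0
```

The summation over all motor units is implicit here: the spike array pools every detected spike, so the second convolution yields the whole-muscle tension estimate.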
High Performance Implementation of 3D Convolutional Neural Networks on a GPU.
Lan, Qiang; Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie
2017-01-01
Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
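The 1-D building block of WMFA, F(2,3), shows how Winograd filtering trades multiplications for additions: two outputs of a 3-tap filter cost 4 multiplications instead of 6. The 2-D and 3-D variants used for convolution layers nest this transform; the sketch below is the textbook form, not the authors' GPU kernels.

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd minimal filtering F(2,3): two outputs of a 3-tap
    filter from 4 input samples, using 4 multiplications."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])   # 4 input samples
g = np.array([0.5, 1.0, -1.0])       # 3-tap filter
y = winograd_f23(d, g)

# Direct computation of the same two sliding dot products:
ref = np.array([d[0:3] @ g, d[1:4] @ g])
assert np.allclose(y, ref)
```

The filter-dependent factors (g0+g1+g2)/2 and (g0-g1+g2)/2 are precomputed once per kernel in practice, which is why the saving matters for convolution layers where the same filter slides over many tiles.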
Convoluted nozzle design for the RL10 derivative 2B engine
NASA Technical Reports Server (NTRS)
1985-01-01
The convoluted nozzle is a conventional refractory metal nozzle extension that is formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas-driven actuators. For spacecraft applications the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two-position nozzle were studied, including a hydrogen dump-cooled metal nozzle and radiation-cooled nozzles of refractory metal and carbon/carbon composite construction, respectively.
Sim, K S; Teh, V; Tey, Y C; Kho, T K
2016-11-01
This paper introduces a new technique to improve Scanning Electron Microscope (SEM) image quality, which we name sub-blocking multiple peak histogram equalization (SUB-B-MPHE) with a convolution operator. The new modified MPHE performs better than the original MPHE. In addition, the sub-blocking method incorporates a convolution operator that removes the blocking effect from SEM images processed with this technique by properly distributing suitable pixel values over the whole image. Overall, SUB-B-MPHE with convolution outperforms the other methods. SCANNING 38:492-501, 2016. © 2015 Wiley Periodicals, Inc.
Escape Distance in Ground-Nesting Birds Differs with Individual Level of Camouflage.
Wilson-Aggarwal, Jared K; Troscianko, Jolyon T; Stevens, Martin; Spottiswoode, Claire N
2016-08-01
Camouflage is one of the most widespread antipredator strategies in the animal kingdom, yet no animal can match its background perfectly in a complex environment. Therefore, selection should favor individuals that use information on how effective their camouflage is in their immediate habitat when responding to an approaching threat. In a field study of African ground-nesting birds (plovers, coursers, and nightjars), we tested the hypothesis that individuals adaptively modulate their escape behavior in relation to their degree of background matching. We used digital imaging and models of predator vision to quantify differences in color, luminance, and pattern between eggs and their background, as well as the plumage of incubating adult nightjars. We found that plovers and coursers showed greater escape distances when their eggs were a poorer pattern match to the background. Nightjars sit on their eggs until a potential threat is nearby, and, correspondingly, they showed greater escape distances when the pattern and color match of the incubating adult's plumage-rather than its eggs-was a poorer match to the background. Finally, escape distances were shorter in the middle of the day, suggesting that escape behavior is mediated by both camouflage and thermoregulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colbert, C.; Moles, D.R.
This paper reports that the authors developed for the Air Force the Mark VI Personal Identity Verifier (PIV) for controlling access to a fixed or mobile ICBM site, a computer terminal, or a mainframe. The Mark VI records the digitized silhouettes of four fingers of each hand on an AT&T smart card. Like fingerprints, finger shapes, lengths, and widths constitute an unguessable biometric password. A Security Officer enrolls an authorized person, who places each hand, in turn, on a backlighted panel. An overhead scanning camera records the right- and left-hand reference templates on the smart card. The Security Officer adds to the card: name, personal identification number (PIN), and access restrictions such as permitted days of the week, times of day, and doors. To gain access, the cardowner inserts the card into a reader slot and places either hand on the panel. The resulting access template is matched to the reference template by three sameness algorithms. The final match score is an average of 12 scores (each of the four fingers, matched for shape, length, and width), expressing the degree of sameness. (A perfect match would score 100.00.) The final match score is compared to a predetermined score (threshold), generating an accept or reject decision.
Application of the perfectly matched layer in 3-D marine controlled-source electromagnetic modelling
NASA Astrophysics Data System (ADS)
Li, Gang; Li, Yuguo; Han, Bo; Liu, Zhan
2018-01-01
In this study, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretched Cartesian coordinates is successfully applied to 3-D frequency-domain marine controlled-source electromagnetic (CSEM) field modelling. The Dirichlet boundary, which is usually used within the traditional framework of EM modelling algorithms, assumes that the electric or magnetic field values are zero at the boundaries. This requires the boundaries to be sufficiently far away from the area of interest. To mitigate boundary artefacts, a large modelling area may be necessary even though cell sizes are allowed to grow toward the boundaries owing to the diffusive nature of electromagnetic wave propagation. Compared with the conventional Dirichlet boundary, the PML boundary is preferable because the modelling area can be restricted to the target region: only a few surrounding absorbing layers effectively suppress the artificial boundary effect without losing numerical accuracy. Furthermore, for joint inversion of seismic and marine CSEM data, using the PML instead of the conventional Dirichlet boundary for CSEM field simulation allows the modelling areas for the two different geophysical data sets collected from the same survey area to coincide, which is convenient for joint-inversion grid matching. We apply the CFS-PML boundary to 3-D marine CSEM modelling using a staggered finite-difference discretization. Numerical tests indicate that the modelling algorithm using the CFS-PML shows good accuracy compared to the Dirichlet boundary, while offering savings in computational time and memory. For the 3-D example in this study, the memory saving using the PML is nearly 42 per cent and the time saving is around 48 per cent compared to using the Dirichlet boundary.
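For reference, the CFS-PML enters the governing equations as a complex coordinate stretch of each spatial derivative. A commonly used form (with the e^{+iωt} convention; the abstract does not give the authors' exact parameterization) is:

```latex
% CFS-PML complex coordinate stretching (standard form):
%   \partial_x \;\rightarrow\; \frac{1}{s_x(\omega)}\,\partial_x,
\[
  s_x(\omega) \;=\; \kappa_x \;+\; \frac{\sigma_x}{\alpha_x + i\omega},
\]
% where \kappa_x \ge 1 stretches the real coordinate, \sigma_x \ge 0
% provides absorption, and the frequency shift \alpha_x \ge 0 is what
% improves absorption of near-grazing and low-frequency (evanescent)
% waves; setting \kappa_x = 1, \alpha_x = 0 recovers the classical PML.
```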
Kuipers, J G; Koller, M; Zeman, F; Müller, K; Rüffer, J U
2018-04-24
Disabilities in daily living and quality of life are key endpoints for evaluating the treatment outcome for rheumatoid arthritis (RA). Factors possibly contributing to good outcome are adherence and health literacy. The survey included a representative nationwide sample of German rheumatologists and their patients with RA. The physician questionnaire included the disease activity score (DAS28) and medical prescriptions. The patient questionnaire included fatigue (EORTC QLQ-FA13), health assessment questionnaire (HAQ), quality of life (SF-12), health literacy (HELP), and patients' listings of their medications. Adherence was operationalized as follows: patient-reported (CQR5), behavioral (concordance between physicians' and patients' listings of medications), physician-assessed, and a combined measure of physician rating (1 = very adherent, 0 = less adherent) and the match between physicians' prescriptions and patients' accounts of their medications (1 = perfect match, 0 = no perfect match) that yielded three categories of adherence: high, medium, and low. Simple and multiple linear regressions (controlling for age, sex, smoking, drinking alcohol, and sport) were calculated using adherence and health literacy as predictor variables, and disease activity and patient-reported outcomes as dependent variables. 708 pairs of patient and physician questionnaires were analyzed. The mean patient age (73% women) was 60 years (SD = 12). Multiple regression analyses showed that high adherence was significantly associated with 5/7 outcome variables and health literacy with 7/7 outcome variables. Adherence and health literacy had weak but consistent effects on most outcomes. Thus, enhancing adherence and understanding of medical information could improve outcome, which should be investigated in future interventional studies.
GaAsPN-based PIN solar cells MBE-grown on GaP substrates: toward the III-V/Si tandem solar cell
NASA Astrophysics Data System (ADS)
Da Silva, M.; Almosni, S.; Cornet, C.; Létoublon, A.; Levallois, C.; Rale, P.; Lombez, L.; Guillemoles, J.-F.; Durand, O.
2015-03-01
GaAsPN semiconductors are promising materials for the elaboration of high-efficiency tandem solar cells on silicon substrates. The GaAsPN diluted nitride alloy is studied as the top-junction material owing to its perfect lattice matching with the Si substrate and its ideal bandgap energy, which allows perfect current matching with the Si bottom cell. We review our recent progress in the materials development of the GaAsPN alloy and our recent studies of several of the building blocks toward the elaboration of a PIN solar cell. A lattice-matched (with a GaP(001) substrate, as a first step toward elaboration on a Si substrate) 1-μm-thick GaAsPN alloy has been grown by MBE. After a post-growth annealing step, this alloy displays strong absorption around 1.8-1.9 eV and efficient photoluminescence at room temperature, suitable for the elaboration of the targeted solar-cell top junction. Early-stage GaAsPN PIN solar-cell prototypes have been grown on GaP(001) substrates with two different absorber thicknesses (1 μm and 0.3 μm). The external quantum efficiencies and the I-V curves show that carriers have been extracted from the GaAsPN alloy absorbers, with an open-circuit voltage of 1.18 V, but with low short-circuit currents, meaning that the GaAsPN structural properties need further optimization. Better carrier extraction was observed for the thinner absorber, which is consistent with a low carrier diffusion length in our GaAsPN compound. Considering all the pathways for improvement, the efficiency obtained under AM1.5G is nevertheless promising.
Impact of a soccer match on the cardiac autonomic control of referees.
Boullosa, Daniel Alexandre; Abreu, Laurinda; Tuimil, José Luis; Leicht, Anthony Scott
2012-06-01
The purpose of this study was to assess the effect of a soccer match on the cardiac autonomic control of heart rate (HR) in soccer referees. Sixteen Spanish regional and third division referees (11 males: 26 ± 7 years, 74.4 ± 4.1 kg, 178 ± 3 cm, Yo-Yo IR1 ~600-1,560 m; 5 females: 22 ± 3 years, 59.3 ± 4.8 kg, 158 ± 8 cm, Yo-Yo IR1 ~200-520 m) participated with 24-h HR recordings measured with a Polar RS800 during a rest and a match day. Autonomic control of HR was assessed from HR variability (HRV) analysis. Inclusion of a soccer match (92.5% spent at >75% maximum HR) reduced pre-match (12:00-17:00 hours; small to moderate), post-match (19:00-00:00 hours; moderate to almost perfect), and night-time (00:00-05:00 hours; small to moderate) HRV. Various moderate-to-large correlations were detected between resting HRV and the rest-to-match day difference in HRV. The rest-to-match day differences of low and high-frequency bands ratio (LF/HF) and HR in the post-match period were moderately correlated with time spent at different exercise intensities. Yo-Yo IR1 performance was highly correlated with jump capacity and peak lactate, but not with any HRV parameter. These results suggest that a greater resting HRV may allow referees to tolerate stresses during a match day with referees who spent more time at higher intensities during matches exhibiting a greater LF/HF increment in the post-match period. The relationship between match activities, [Formula: see text] and HR recovery kinetics in referees and team sport athletes of different competitive levels remains to be clarified.
Ruggieri, M; Fumarola, A; Straniero, A; Maiuolo, A; Coletta, I; Veltri, A; Di Fiore, A; Trimboli, P; Gargiulo, P; Genderini, M; D'Armiento, M
2008-09-01
Currently, a thyroid volume >25 ml, obtained by preoperative ultrasound evaluation, is a very important exclusion criterion for minimally invasive thyroidectomy. So far, among the different imaging techniques, two-dimensional ultrasonography has become the most accepted method for the assessment of thyroid volume (US-TV). The aims of this study were: (1) to estimate the preoperative thyroid volume in patients undergoing minimally invasive total thyroidectomy using a mathematical formula and (2) to verify its validity by comparing it with the postsurgical TV (PS-TV). In 53 patients who underwent minimally invasive total thyroidectomy (from January 2003 to December 2007), the US-TV, obtained by the ellipsoid volume formula, was compared to the PS-TV determined by Archimedes' principle. A mathematical formula able to predict the TV from the US-TV was applied in 34 cases in the last 2 years. Mean US-TV (14.4 +/- 5.9 ml) was significantly lower than mean PS-TV (21.7 +/- 10.3 ml). This underestimation was related to gland multinodularity and/or nodular involvement of the isthmus. A mathematical formula to reduce the US-TV underestimation and predict the real TV was developed using a linear model. The mean predicted TV (16.8 +/- 3.7 ml) matched the mean PS-TV closely, underestimating the PS-TV in only 19% of cases. We verified the accuracy of this mathematical model for determining patients' eligibility for minimally invasive total thyroidectomy, and we demonstrated that a predicted TV <25 ml was confirmed post-surgery in 94% of cases. Thus, using a linear model, it is possible to predict the PS-TV from US with high accuracy. In particular, the percentage of cases in which the predicted TV matched the PS-TV increased from 23%, as estimated by US, to 43%. Moreover, the percentage of TV underestimation was reduced from 77% to 19%, and the range of disagreement from up to 200% to 80%.
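The ellipsoid volume formula mentioned above is the standard ultrasound approximation V = (π/6)·L·W·D per lobe, to which the linear correction is then applied. The sketch below uses placeholder regression coefficients, since the fitted values are not given in the abstract:

```python
import math

def ellipsoid_volume_ml(length_cm, width_cm, depth_cm):
    """Standard ellipsoid approximation for ultrasound thyroid-lobe
    volume: V = (pi/6) * L * W * D (cm^3 == ml)."""
    return math.pi / 6 * length_cm * width_cm * depth_cm

def predicted_tv(us_tv_ml, slope=1.0, intercept=0.0):
    """Linear correction of the ultrasound volume. The slope and
    intercept here are placeholders, not the paper's fitted values."""
    return slope * us_tv_ml + intercept

lobe = ellipsoid_volume_ml(4.0, 1.5, 1.5)   # one lobe, dimensions in cm
print(round(lobe, 2))                       # ~4.71 ml for this toy lobe
```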
This study shows that two-dimensional US can provide the accurate estimation of thyroid volume but that it can be improved by a mathematical model. This may contribute to a more appropriate surgical management of thyroid diseases.
NASA Astrophysics Data System (ADS)
Wang, Dongyi; Vinson, Robert; Holmes, Maxwell; Seibel, Gary; Tao, Yang
2018-04-01
The Atlantic blue crab is among the highest-valued seafood found in the American Eastern Seaboard. Currently, the crab processing industry is highly dependent on manual labor. However, there is great potential for vision-guided intelligent machines to automate the meat picking process. Studies show that the back-fin knuckles are robust features containing information about a crab's size, orientation, and the position of the crab's meat compartments. Our studies also make it clear that detecting the knuckles reliably in images is challenging due to the knuckle's small size, anomalous shape, and similarity to joints in the legs and claws. An accurate and reliable computer vision algorithm was proposed to detect the crab's back-fin knuckles in digital images. Convolutional neural networks (CNNs) can localize rough knuckle positions with 97.67% accuracy, transforming a global detection problem into a local detection problem. Compared to the rough localization based on human experience or other machine learning classification methods, the CNN shows the best localization results. In the rough knuckle position, a k-means clustering method is able to further extract the exact knuckle positions based on the back-fin knuckle color features. The exact knuckle position can help us to generate a crab cutline in XY plane using a template matching method. This is a pioneering research project in crab image analysis and offers advanced machine intelligence for automated crab processing.
Control system of hexacopter using color histogram footprint and convolutional neural network
NASA Astrophysics Data System (ADS)
Ruliputra, R. N.; Darma, S.
2017-07-01
The development of unmanned aerial vehicles (UAVs) has been growing rapidly in recent years. Logic implemented in program algorithms is needed to make a smart system. Using visual input from a camera, a UAV is able to fly autonomously by detecting a target. However, outdoor use poses a challenge because changing ambient conditions can alter the target's color intensity. A color histogram footprint overcomes this problem because it divides color intensity into separate bins, making the detection tolerant to slight changes in color intensity. Template matching compares the detection result with a template of the reference image to determine the target position, which is used to center the vehicle over the target by visual feedback control based on a Proportional-Integral-Derivative (PID) controller. The color histogram footprint method localizes the target by calculating the back projection of its histogram. It has an average success rate of 77% from a distance of 1 meter and can position the vehicle in the middle of the target using visual feedback control with an average positioning time of 73 seconds. Once the hexacopter is centered over the target, a Convolutional Neural Network (CNN) classifies a number contained in the target image to determine a task depending on the classified number: landing, yawing, or return to launch. The recognition result shows an optimum success rate of 99.2%.
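Histogram back projection, the core of the localization step, can be sketched as follows; the bin count and toy hue values are illustrative, not the paper's settings:

```python
import numpy as np

def hist_backprojection(image_hue, target_hist, bins=16):
    """Back projection: replace each pixel by the probability of its
    hue bin under the target's histogram. Nearby hue values share a
    bin, which is what makes the method tolerant to slight changes
    in color intensity."""
    edges = np.linspace(0, 1, bins + 1)
    idx = np.clip(np.digitize(image_hue, edges) - 1, 0, bins - 1)
    return target_hist[idx]

# Toy scene: a target of hue ~0.8 on a 0.2-hue background.
scene = np.full((6, 6), 0.2)
scene[2:4, 2:4] = 0.8

# Normalized histogram of a reference patch of the target.
target_hist, _ = np.histogram([0.8, 0.81, 0.79], bins=16, range=(0, 1))
target_hist = target_hist / target_hist.sum()

bp = hist_backprojection(scene, target_hist)
assert bp[3, 3] > bp[0, 0]  # target pixels score higher than background
```

The centroid of the high-scoring region then serves as the target position fed into the PID loop.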
Duangsuwan, Pornsawan; Phoungpetchara, Ittipon; Tinikul, Yotsawan; Poljaroen, Jaruwan; Wanichanon, Chaitip; Sobhon, Prasert
2008-04-01
The normal lymphoid organ of Penaeus monodon (which tested negative for WSSV and YHV) was composed of two parts: lymphoid tubules and interstitial spaces, which were permeated with haemal sinuses filled with large numbers of haemocytes. There were three permanent types of cells present in the wall of lymphoid tubules: endothelial, stromal and capsular cells. Haemocytes penetrated the endothelium of the lymphoid tubule's wall to reside among the fixed cells. The outermost layer of the lymphoid tubule was covered by a network of fibers embedded in a PAS-positive extracellular matrix, which corresponded to a basket-like network that covered all the lymphoid tubules as visualized by a scanning electron microscope (SEM). Argyrophilic reticular fibers surrounded haemal sinuses and lymphoid tubules. Together they formed the scaffold that supported the lymphoid tubule. Using vascular cast and SEM, the three dimensional structure of the subgastric artery that supplies each lobe of the lymphoid organ was reconstructed. This artery branched into highly convoluted and blind-ending terminal capillaries, each forming the lumen of a lymphoid tubule around which haemocytes and other cells aggregated to form a cuff-like wall. Stromal cells which form part of the tubular scaffold were immunostained for vimentin. Examination of the whole-mounted lymphoid organ, immunostained for vimentin, by confocal microscopy exhibited the highly branching and convoluted lymphoid tubules matching the pattern of the vascular cast observed in SEM.
Scalable Video Transmission Over Multi-Rate Multiple Access Channels
2007-06-01
Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE...source encoded using the MPEG-4 video codec. The source-encoded bitstream is then channel encoded with Rate Compatible Punctured Convolutional (RCPC...Clark, and J. M. Geist, "Punctured convolutional codes of rate (n-1)/n and simplified maximum likelihood decoding," IEEE Transactions on
Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization
2009-01-01
Rate Compatible Punctured Convolutional (RCPC) codes for channel...vol. 44, pp. 2943–2959, November 1998. [22] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE... coding rate for H.264/AVC video compression is determined. At the data link layer, the Rate-Compatible Punctured Convolutional (RCPC) channel coding
The general theory of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Stanley, R. P.
1993-01-01
This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.
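A minimal example of the kind of encoder the algebraic theory describes: a textbook rate-1/2 convolutional code with generator polynomials 7 and 5 in octal. This is a generic illustration, not a code taken from the article:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder, constraint length 3.
    Generators (1,1,1) and (1,0,1) are 7 and 5 in octal,
    a standard textbook example. Two output bits per input bit."""
    state = [0, 0]  # shift-register memory
    out = []
    for b in bits:
        window = [b] + state
        out.append(sum(w * g for w, g in zip(window, g1)) % 2)
        out.append(sum(w * g for w, g in zip(window, g2)) % 2)
        state = [b, state[0]]
    return out

coded = conv_encode([1, 0, 1, 1])  # -> [1,1, 1,0, 0,0, 0,1]
```

In the language of the article, each generator corresponds to a polynomial in the delay operator, and the encoder output is the input sequence multiplied by the generator matrix over GF(2).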
Bag of Visual Words Model with Deep Spatial Features for Geographical Scene Classification
Wu, Lin
2017-01-01
With the popular use of geotagged images, more and more research effort has been placed on geographical scene classification, where valid spatial feature selection can significantly boost the final performance. The bag of visual words (BoVW) model performs well at feature selection in geographical scene classification; nevertheless, it works effectively only if the provided feature extractor is well-matched. In this paper, we use convolutional neural networks (CNNs) to optimize the proposed feature extractor, so that it can learn more suitable visual vocabularies from the geotagged images. Our approach achieves better performance than BoVW as a tool for geographical scene classification on three datasets which contain a variety of scene categories. PMID:28706534
Djaker, Nadia; Wulfman, Claudine; Sadoun, Michaël; Lamy de la Chapelle, Marc
2013-01-01
Subsurface hydrothermal degradation of yttria-stabilized tetragonal zirconia polycrystals (3Y-TZP) is presented. The low temperature degradation (LTD) phase transformation induced by aging in 3Y-TZP is studied experimentally by Raman confocal microspectroscopy. A non-linear distribution of the monoclinic volume fraction in depth is determined by using different pinhole sizes. A theoretical simulation is proposed, based on the convolution of the excitation intensity profile with the Beer-Lambert law (optical properties of zirconia), to compare experiment with theory. The calculated theoretical degradation curves match the experimental ones closely. The surface transformation (V0) and the in-depth transformation factor (T) are obtained by comparing simulation and experiment for each sample with nondestructive optical sectioning. PMID:23667788
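The convolution of an excitation profile with Beer-Lambert attenuation can be sketched numerically as below. The depths, attenuation coefficient `mu`, and profile values are hypothetical, and the kernel is a bare normalized exponential rather than the paper's full confocal response:

```python
import math

def beer_lambert_kernel(depths, mu):
    """Normalized exponential attenuation weights exp(-mu * z):
    deeper layers contribute less to the collected signal."""
    w = [math.exp(-mu * z) for z in depths]
    s = sum(w)
    return [x / s for x in w]

def observed_signal(profile, kernel):
    """Discrete convolution: the signal nominally measured at depth i
    mixes the true profile at depths i, i+1, ... weighted by the kernel."""
    n = len(profile)
    out = []
    for i in range(n):
        acc = 0.0
        for j, k in enumerate(kernel):
            if i + j < n:
                acc += profile[i + j] * k
        out.append(acc)
    return out

depths = [0.0, 0.5, 1.0, 1.5, 2.0]        # sampling depths (hypothetical units)
kernel = beer_lambert_kernel(depths, mu=1.0)
true_profile = [0.6, 0.4, 0.2, 0.1, 0.0]  # monoclinic fraction vs depth
measured = observed_signal(true_profile, kernel)
```

The measured curve is a smoothed, attenuation-weighted version of the true profile, which is why deconvolving (or fitting a forward model, as the paper does) is needed to recover the in-depth distribution.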
Combined invariants to similarity transformation and to blur using orthogonal Zernike moments
Beijing, Chen; Shu, Huazhong; Zhang, Hui; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis
2011-01-01
The derivation of moment invariants has been extensively investigated in the past decades. In this paper, we construct a set of invariants derived from Zernike moments which is simultaneously invariant to similarity transformation and to convolution with a circularly symmetric point spread function (PSF). Two main contributions are provided: the theoretical framework for deriving the Zernike moments of a blurred image and the way to construct the combined geometric-blur invariants. The performance of the proposed descriptors is evaluated with various PSFs and similarity transformations. The comparison of the proposed method with existing ones is also provided in terms of pattern recognition accuracy, template matching, and robustness to noise. Experimental results show that the proposed descriptors perform better overall. PMID:20679028
Matching methods evaluation framework for stereoscopic breast x-ray images.
Rousson, Johanna; Naudin, Mathieu; Marchessoux, Cédric
2016-01-01
Three-dimensional (3-D) imaging has been intensively studied in the past few decades. Depth information is an important added value of 3-D systems over two-dimensional systems. Special focus was devoted to the development of stereo matching methods for the generation of disparity maps (i.e., depth information within a 3-D scene). Dedicated frameworks were designed to evaluate and rank the performance of different stereo matching methods, but never considering x-ray medical images. Yet, 3-D x-ray acquisition systems and 3-D medical displays have already been introduced into the diagnostic market. To access the depth information within x-ray stereoscopic images, computing accurate disparity maps is essential. We aimed at developing a framework dedicated to x-ray stereoscopic breast images used to evaluate and rank several stereo matching methods. A multiresolution pyramid optimization approach was integrated into the framework to increase the accuracy and the efficiency of the stereo matching techniques. Finally, a metric was designed to score the results of the stereo matching compared with the ground truth. Eight methods were evaluated and four of them [locally scaled sum of absolute differences (LSAD), zero mean sum of absolute differences, zero mean sum of squared differences, and locally scaled mean sum of squared differences] appeared to perform equally well, with an average error score of 0.04 (0 is the perfect matching). LSAD was selected for generating the disparity maps.
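A toy version of the SAD family of block-matching methods, restricted to a single scanline with synthetic values; real frameworks such as the one above operate on 2-D blocks with multiresolution refinement:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity_1d(left, right, block=3, max_disp=4):
    """For each block of the left scanline, find the horizontal shift d
    (the disparity) that minimizes SAD against the right scanline."""
    disps = []
    for x in range(len(left) - block + 1):
        ref = left[x:x + block]
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):
            cost = sad(ref, right[x - d:x - d + block])
            if cost < best_cost:
                best_d, best_cost = d, cost
        disps.append(best_d)
    return disps

# Synthetic pair: the right scanline is the left one shifted by 2 pixels
left  = [0, 0, 10, 20, 30, 0, 0, 0]
right = [10, 20, 30, 0, 0, 0, 0, 0]
disp = disparity_1d(left, right)
```

On the textured part of the scanline the recovered disparity equals the known shift of 2; in flat regions SAD is ambiguous, which is one motivation for the locally scaled variants compared in the paper.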
Monovalent Streptavidin that Senses Oligonucleotides**
Wang, Jingxian; Kostic, Natasa; Stojanovic, Milan N.
2013-01-01
We report a straightforward chemical route to monovalent streptavidin, a valuable reagent for imaging. The one-step process is based on a (tris)biotinylated-oligonucleotide blocking three of streptavidin’s four biotin binding sites. Further, the complex is highly sensitive to single-base differences - whereby perfectly matched oligonucleotides trigger dissociation of the biotin-streptavidin interaction at higher rates than single-base mismatches. Unique properties and ease of synthesis open wide opportunities for practical applications in imaging and biosensing. PMID:23606329
Development of a Three Dimensional Perfectly Matched Layer for Transient Elasto-Dynamic Analyses
2006-12-01
MacLean [Ref. 47] introduced a small tracked vehicle with dual inertial mass shakers mounted on top as a mobile source. It excited Rayleigh waves, but...routine initializes and sets default values for: * the application parameters * the material database parameters * the entries to appear on the...Underground seismic array experiments. National Institute of Nuclear Physics, 2005. [47] D. J. MacLean. Mobile source development for seismic-sonar based
Toward high-resolution NMR spectroscopy of microscopic liquid samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, Mark C.; Mehta, Hardeep S.; Chen, Ying
A longstanding limitation of high-resolution NMR spectroscopy is the requirement for samples to have macroscopic dimensions. Commercial probes, for example, are designed for volumes of at least 5 μL, in spite of decades of work directed toward the goal of miniaturization. Progress in miniaturizing inductive detectors has been limited by a perceived need to meet two technical requirements: (1) minimal separation between the sample and the detector, which is essential for sensitivity, and (2) near-perfect magnetic-field homogeneity at the sample, which is typically needed for spectral resolution. The first of these requirements is real, but the second can be relaxed, as we demonstrate here. By using pulse sequences that yield high-resolution spectra in an inhomogeneous field, we eliminate the need for near-perfect field homogeneity and the accompanying requirement for susceptibility matching of microfabricated detector components. With this requirement removed, typical imperfections in microfabricated components can be tolerated, and detector dimensions can be matched to those of the sample, even for samples of volume << 5 μL. Pulse sequences that are robust to field inhomogeneity thus enable small-volume detection with optimal sensitivity. We illustrate the potential of this approach to miniaturization by presenting spectra acquired with a flat-wire detector that can easily be scaled to subnanoliter volumes. In particular, we report high-resolution NMR spectroscopy of an alanine sample of volume 500 pL.
Kim, Tae Young; Badsha, Md. Alamgir; Yoon, Junho; Lee, Seon Young; Jun, Young Chul; Hwangbo, Chang Kwon
2016-01-01
We propose a general, easy-to-implement scheme for broadband coherent perfect absorption (CPA) using epsilon-near-zero (ENZ) multilayer films. Specifically, we employ indium tin oxide (ITO) as a tunable ENZ material, and theoretically investigate CPA in the near-infrared region. We first derive general CPA conditions using the scattering matrix and the admittance matching methods. Then, by combining these two methods, we extract analytic expressions for all relevant parameters for CPA. Based on this theoretical framework, we proceed to study ENZ CPA in a single layer ITO film and apply it to all-optical switching. Finally, using an ITO multilayer of different ENZ wavelengths, we implement broadband ENZ CPA structures and investigate multi-wavelength all-optical switching in the technologically important telecommunication window. In our design, the admittance matching diagram was employed to graphically extract not only the structural parameters (the film thicknesses and incident angles), but also the input beam parameters (the irradiance ratio and phase difference between two input beams). We find that the multi-wavelength all-optical switching in our broadband ENZ CPA system can be fully controlled by the phase difference between two input beams. The simple but general design principles and analyses in this work can be widely used in various thin-film devices. PMID:26965195
Maduri, Rodolfo; Viaroli, Edoardo; Levivier, Marc; Daniel, Roy T; Messerer, Mahmoud
2017-01-01
Cranioplasty is considered a simple reconstructive procedure, usually performed in a single stage. In some clinical conditions, such as in children with multifocal flap osteolysis, it can represent a surgical challenge. In these patients, the partially resorbed autologous flap should be removed and replaced with a custom-made prosthesis which should perfectly match the expected bone defect. We describe the technique used for a navigated cranioplasty in a 3-year-old child with multifocal autologous flap osteolysis. We decided to perform a cranioplasty using a custom-made hydroxyapatite porous ceramic flap. The prosthesis was produced with an epoxy resin 3D skull model of the patient, which included a removable flap corresponding to the planned cranioplasty. Preoperatively, a CT scan of the 3D skull model was performed without the removable flap. The CT scan images of the 3D skull model were merged with the preoperative 3D CT scan of the patient and navigated during the cranioplasty to define the cranioplasty margins with precision. After removal of the autologous resorbed flap, the hydroxyapatite prosthesis matched perfectly with the skull defect. The anatomical result was excellent. Thus, the implementation of cranioplasty with image merge navigation of a 3D skull model may improve cranioplasty accuracy, allowing precise anatomic reconstruction in complex skull defect cases. © 2017 S. Karger AG, Basel.
Rose, D. V.; Madrid, E. A.; Welch, D. R.; ...
2015-03-04
Numerical simulations of a vacuum post-hole convolute driven by magnetically insulated vacuum transmission lines (MITLs) are used to study current losses due to charged particle emission from the MITL-convolute-system electrodes. This work builds on the results of a previous study [E.A. Madrid et al. Phys. Rev. ST Accel. Beams 16, 120401 (2013)] and adds realistic power pulses, Ohmic heating of anode surfaces, and a model for the formation and evolution of cathode plasmas. The simulations suggest that modestly larger anode-cathode gaps in the MITLs upstream of the convolute result in significantly less current loss. In addition, longer pulse durations lead to somewhat greater current loss due to cathode-plasma expansion. These results can be applied to the design of future MITL-convolute systems for high-current pulsed-power systems.
Classification of urine sediment based on convolution neural network
NASA Astrophysics Data System (ADS)
Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian
2018-04-01
By designing a new convolutional neural network framework, this paper removes the constraints of the original framework, which requires large training sets of identically sized samples. The input images are shifted and cropped to generate sub-graphs of the same size. Dropout is then applied to the generated sub-graphs, increasing the diversity of the samples and preventing overfitting. Proper subsets of the sub-graph set are randomly selected such that all subsets contain the same number of elements but no two subsets are identical. These proper subsets are used as input layers for the convolutional neural network. Through the convolution layers, pooling, the fully connected layer, and the output layer, we obtain the classification loss rates of the test and training sets. In a classification experiment on red blood cells, white blood cells, and calcium oxalate crystals, the classification accuracy reached 97% or more.
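The shift-and-crop step that produces same-size sub-graphs can be sketched as follows; the window size and stride are hypothetical, and the "image" is a plain 2-D list rather than a real micrograph:

```python
def crops(image, size, stride):
    """Slide a size x size window over a 2-D image (list of lists),
    producing equally sized sub-images suitable as CNN inputs."""
    h, w = len(image), len(image[0])
    subs = []
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            subs.append([row[left:left + size]
                         for row in image[top:top + size]])
    return subs

# Toy 4x4 "image" with pixel values 0..15
image = [[r * 4 + c for c in range(4)] for r in range(4)]
patches = crops(image, size=2, stride=2)  # four 2x2 sub-graphs
```

Each crop is the same size regardless of the original image dimensions, which is what lets a fixed-input-size network train on variable-size source images.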
Influence of Color Education and Training on Shade Matching Skills.
Ristic, Ivan; Stankovic, Sasa; Paravina, Rade D
2016-09-01
To evaluate the influence of education and training on the quality of tooth color matching. Dental students (N = 174) matched the color of eight shade tabs in a viewing booth using the VITA Linearguide 3D-Master shade guide. The experimental group received color education and training between the before and after sessions. The control group received no additional information between the two sessions. Color differences between the task tabs and selected tabs were calculated using CIE formulas. The score for the best match (smallest color difference) was 10 points, the 2nd best match 9 points, down to 1 point for the 10th best match. Means and standard deviations were calculated. Differences were analyzed using the Student t-test. Shade matching scores in the experimental group were significantly better after education and training (p < 0.001), with mean scores before and after the shade matching sessions of 7.06 (1.19) and 8.43 (0.92), respectively. The percentage of students in the experimental group that selected one of the three best matches increased by 24.3%. The control group exhibited no significant improvement in the after session. Within the limitations of the study, education and training improved students' shade matching skills. While the vast majority of dental restorations and practically all restorations in the esthetic zone are tooth colored, the profession as a whole is far from perfect when it comes to accurate shade matching. Education and training can improve shade matching ability: enhanced esthetics of dental restorations, increased patient satisfaction, and a reduced number of color corrections are some of the notable benefits and rewards. (J Esthet Restor Dent 28:287-294, 2016). © 2016 Wiley Periodicals, Inc.
Linear diffusion-wave channel routing using a discrete Hayami convolution method
Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey Lapin
2014-01-01
The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computation demand in solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces...
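The discrete convolution that replaces the continuous integral can be sketched as follows, with a hypothetical inflow hydrograph and a unit-sum response kernel (a generic kernel, not the Hayami kernel itself):

```python
def route(inflow, response):
    """Discrete convolution of an inflow hydrograph with a unit
    response function, standing in for the continuous integral."""
    n = len(inflow) + len(response) - 1
    out = [0.0] * n
    for i, q in enumerate(inflow):
        for j, r in enumerate(response):
            out[i + j] += q * r
    return out

inflow = [0.0, 5.0, 10.0, 5.0, 0.0]   # discharge, m^3/s per time step
kernel = [0.1, 0.4, 0.3, 0.2]         # sums to 1, so volume is conserved
outflow = route(inflow, kernel)
```

Because the kernel sums to one, total outflow volume equals total inflow volume, while the peak is attenuated and delayed, which is the qualitative behavior of diffusion-wave routing.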
NASA Technical Reports Server (NTRS)
Reichelt, Mark
1993-01-01
In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
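For contrast with the waveform method, the classical frequency-independent SOR iteration on a small linear system looks like this; roughly speaking, the paper's convolution SOR replaces the scalar parameter omega with a convolution kernel acting on the waveforms in time. Matrix, right-hand side, and omega here are hypothetical:

```python
def sor(A, b, omega=1.2, iters=100):
    """Classical successive overrelaxation for A x = b:
    each Gauss-Seidel update is overrelaxed by the factor omega."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = sor(A, b)  # exact solution is (1/11, 7/11)
```

Choosing omega well accelerates convergence; the paper's contribution is a theorem for the optimal choice when omega becomes a convolution operator.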
A Geometric Construction of Cyclic Cocycles on Twisted Convolution Algebras
NASA Astrophysics Data System (ADS)
Angel, Eitan
2010-09-01
In this thesis we give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. In his seminal book, Connes constructs a map from the equivariant cohomology of a manifold carrying the action of a discrete group into the periodic cyclic cohomology of the associated convolution algebra. Furthermore, for proper étale groupoids, J.-L. Tu and P. Xu provide a map between the periodic cyclic cohomology of a gerbe twisted convolution algebra and twisted cohomology groups. Our focus will be the convolution algebra with a product defined by a gerbe over a discrete translation groupoid. When the action is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial notions related to ideas of J. Dupont to construct a simplicial form representing the Dixmier-Douady class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial Dixmier-Douady form to the mixed bicomplex of certain matrix algebras. Finally, we define a morphism from this complex to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.
Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houshmand, Monireh; Hosseini-Khayat, Saied
2011-02-15
Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a ''pearl-necklace'' encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
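Trellis decoding of the kind used in stage 2 can be illustrated with a hard-decision Viterbi decoder for a textbook rate-1/2 convolutional code with generators 7 and 5 (octal). This is a generic example assuming the encoder starts in the all-zero state, not a coset code from the paper:

```python
def step(state, bit, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Output pair and next state for one input bit; state is (s1, s2)."""
    window = (bit,) + state
    o1 = sum(w * g for w, g in zip(window, g1)) % 2
    o2 = sum(w * g for w, g in zip(window, g2)) % 2
    return (o1, o2), (bit, state[0])

def viterbi(received):
    """Hard-decision Viterbi decoding: keep, for each trellis state,
    the surviving path with the smallest Hamming metric."""
    paths = {(0, 0): ([], 0)}  # state -> (decoded bits, metric)
    for i in range(0, len(received), 2):
        r = tuple(received[i:i + 2])
        nxt = {}
        for state, (bits, metric) in paths.items():
            for b in (0, 1):
                out, ns = step(state, b)
                m = metric + sum(x != y for x, y in zip(out, r))
                if ns not in nxt or m < nxt[ns][1]:
                    nxt[ns] = (bits + [b], m)
        paths = nxt
    return min(paths.values(), key=lambda p: p[1])[0]

coded = [1, 1, 1, 0, 0, 0, 0, 1]  # encodes the bits 1, 0, 1, 1
coded[2] ^= 1                      # inject a single channel bit error
decoded = viterbi(coded)           # the error is corrected
```

The two-stage scheme in the paper feeds such a decoder with branch metrics computed by a first-stage decoder for the smaller coset code, rather than raw Hamming distances.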
Finite-Difference Algorithm for Simulating 3D Electromagnetic Wavefields in Conductive Media
NASA Astrophysics Data System (ADS)
Aldridge, D. F.; Bartel, L. C.; Knox, H. A.
2013-12-01
Electromagnetic (EM) wavefields are routinely used in geophysical exploration for detection and characterization of subsurface geological formations of economic interest. Recorded EM signals depend strongly on the current conductivity of geologic media. Hence, they are particularly useful for inferring fluid content of saturated porous bodies. In order to enhance understanding of field-recorded data, we are developing a numerical algorithm for simulating three-dimensional (3D) EM wave propagation and diffusion in heterogeneous conductive materials. Maxwell's equations are combined with isotropic constitutive relations to obtain a set of six, coupled, first-order partial differential equations governing the electric and magnetic vectors. An advantage of this system is that it does not contain spatial derivatives of the three medium parameters electric permittivity, magnetic permeability, and current conductivity. Numerical solution methodology consists of explicit, time-domain finite-differencing on a 3D staggered rectangular grid. Temporal and spatial FD operators have order 2 and N, where N is user-selectable. We use an artificially-large electric permittivity to maximize the FD timestep, and thus reduce execution time. For the low frequencies typically used in geophysical exploration, accuracy is not unduly compromised. Grid boundary reflections are mitigated via convolutional perfectly matched layers (C-PMLs) imposed at the six grid flanks. A shared-memory-parallel code implementation via OpenMP directives enables rapid algorithm execution on a multi-thread computational platform. Good agreement is obtained in comparisons of numerically-generated data with reference solutions. EM wavefields are sourced via point current density and magnetic dipole vectors. Spatially-extended inductive sources (current carrying wire loops) are under development. 
We are particularly interested in accurate representation of high-conductivity sub-grid-scale features that are common in industrial environments (borehole casing, pipes, railroad tracks). Present efforts are oriented toward calculating the EM responses of these objects via a First Born Approximation approach. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
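A heavily simplified 1-D analogue of such absorbing grid flanks: a plain damping sponge applied to the scalar wave equation, not a true convolutional PML, with all parameters hypothetical:

```python
import math

def simulate(n=200, steps=600, c2=0.25, sponge=20, damp=0.98):
    """Leapfrog update of the 1-D scalar wave equation with a damping
    sponge at both grid flanks standing in for a PML. c2 = (c*dt/dx)^2
    must be <= 1 for stability."""
    u_prev = [math.exp(-0.05 * (i - n // 2) ** 2) for i in range(n)]
    u = u_prev[:]  # zero initial velocity: the pulse splits in two
    for _ in range(steps):
        u_next = [0.0] * n
        for i in range(1, n - 1):
            u_next[i] = (2 * u[i] - u_prev[i]
                         + c2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
        # Ramped damping near the flanks absorbs outgoing energy
        for i in range(n):
            d = min(i, n - 1 - i)
            if d < sponge:
                factor = damp + (1 - damp) * d / sponge
                u_next[i] *= factor
                u[i] *= factor
        u_prev, u = u, u_next
    return u

u = simulate()
```

A real C-PML additionally stretches the coordinate with a complex, frequency-shifted profile via auxiliary memory variables, which absorbs grazing-incidence waves far better than this simple sponge.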
NASA Astrophysics Data System (ADS)
Park, Chanho; Nguyen, Phung K. T.; Nam, Myung Jin; Kim, Jongwook
2013-04-01
Monitoring CO2 migration and storage in geological formations is important not only for the stability of geological sequestration of CO2 but also for efficient management of CO2 injection. Geophysical methods in particular permit in situ observation of CO2 to assess potential leakage and to improve reservoir description, as well as to monitor the development of geologic discontinuities (i.e., faults, cracks, joints, etc.). Geophysical monitoring can be based on wireline logging for well-scale monitoring (high resolution, narrow area of investigation) or on surface surveys for basin-scale monitoring (low resolution, wide area of investigation). In the meantime, crosswell tomography provides reservoir-scale monitoring, bridging the resolution gap between well logs and surface measurements. This study focuses on reservoir-scale monitoring based on crosswell seismic tomography, aiming to describe details of the reservoir structure and to monitor the migration of reservoir fluids (water and CO2). For the monitoring, we first perform a sensitivity analysis on crosswell seismic tomography data with respect to CO2 saturation. For the sensitivity analysis, Rock Physics Models (RPMs) are constructed by calculating the density and the P- and S-wave velocities of a virtual CO2 injection reservoir. Because the seismic velocity of the reservoir changes appreciably with CO2 saturation only when the saturation is below about 20%, and is insensitive to further change above that level, the sensitivity analysis is mainly made for saturations of less than 20%. For precise simulation of seismic tomography responses for the constructed RPMs, we developed a time-domain 2D elastic modeling code based on the finite difference method with a staggered grid, employing a convolutional perfectly matched layer boundary condition.
We further compare the sensitivities of seismic tomography and surface measurements for the RPMs to analyze the resolution difference between them. Moreover, assuming a reservoir situation similar to the CO2 storage site in Nagaoka, Japan, we generate time-lapse tomographic data sets for the corresponding CO2 injection process and make a preliminary interpretation of the data sets.
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit...likely to be isolated and be correctable by the convolutional decoder. 44 Data rate (Mbps) Modulation Coding rate Coded bits per subcarrier...binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data
Using convolutional decoding to improve time delay and phase estimation in digital communications
Ormesher, Richard C [Albuquerque, NM; Mason, John J [Albuquerque, NM
2010-01-26
The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.
Number as a cognitive technology: evidence from Pirahã language and cognition.
Frank, Michael C; Everett, Daniel L; Fedorenko, Evelina; Gibson, Edward
2008-09-01
Does speaking a language without number words change the way speakers of that language perceive exact quantities? The Pirahã are an Amazonian tribe who have been previously studied for their limited numerical system [Gordon, P. (2004). Numerical cognition without words: Evidence from Amazonia. Science 306, 496-499]. We show that the Pirahã have no linguistic method whatsoever for expressing exact quantity, not even "one." Despite this lack, when retested on the matching tasks used by Gordon, Pirahã speakers were able to perform exact matches with large numbers of objects perfectly but, as previously reported, they were inaccurate on matching tasks involving memory. These results suggest that language for exact number is a cultural invention rather than a linguistic universal, and that number words do not change our underlying representations of number but instead are a cognitive technology for keeping track of the cardinality of large sets across time, space, and changes in modality.
Orientational alignment in cavity quantum electrodynamics
NASA Astrophysics Data System (ADS)
Keeling, Jonathan; Kirton, Peter G.
2018-05-01
We consider the orientational alignment of dipoles due to strong matter-light coupling for a nonvanishing density of excitations. We compare various approaches to this problem in the limit of large numbers of emitters and show that direct Monte Carlo integration, mean-field theory, and large deviation methods match exactly in this limit. All three results show that orientational alignment develops in the presence of a macroscopically occupied polariton mode and that the dipoles asymptotically approach perfect alignment in the limit of high density or low temperature.
2004-09-12
Time-Domain Reflectometry (TDR) experiment could serve as a means to determine the most appropriate frequency-domain model for the data at hand. Time...CO. Title: "A review of the perfectly matched layer ABC and some new results." August 2002: NASA Langley Research Center (ICASE), Hampton, VA. Title...ICASE, NASA Langley Research Center, Hampton, VA. July-August 2002. 4. Organized a mini-symposium at the May 2004 Frontiers in Applied and Computational
Quantum key distribution with passive decoy state selection
NASA Astrophysics Data System (ADS)
Mauerer, Wolfgang; Silberhorn, Christine
2007-05-01
We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present-day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.
Single image super-resolution based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia
2018-03-01
We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network which takes the LR image as input and outputs the HR image. Our network uses 5 convolution layers, with kernel sizes of 5×5, 3×3, and 1×1. In the proposed network, we use residual learning and combine different sizes of convolution kernels in the same layer. The experimental results show that our proposed method outperforms existing methods on benchmark images in both reconstruction quality metrics and human visual assessment.
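The residual-learning idea, where a layer predicts a correction that is added back to its input so the identity mapping is easy to represent, can be sketched with a single plain-Python convolution layer (no learned weights; the kernel and image values are hypothetical):

```python
def conv2d_same(img, kernel):
    """3x3 convolution with zero padding, output same size as input."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    yy, xx = y + ky - 1, x + kx - 1
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx] * kernel[ky][kx]
            out[y][x] = acc
    return out

def residual_layer(img, kernel):
    """Residual connection: the convolution predicts a correction
    that is added back to the layer input."""
    r = conv2d_same(img, kernel)
    return [[img[y][x] + r[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]

img = [[1.0, 2.0], [3.0, 4.0]]
zero_k = [[0.0] * 3 for _ in range(3)]
out = residual_layer(img, zero_k)  # zero kernel: output equals input
```

With an all-zero kernel the layer reduces exactly to the identity, which illustrates why residual networks are easy to optimize: the network only has to learn the (often small) difference between LR and HR images.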
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
Error-trellis syndrome decoding techniques for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1985-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
Molecular graph convolutions: moving beyond fingerprints
NASA Astrophysics Data System (ADS)
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
Meszlényi, Regina J.; Buza, Krisztian; Vidnyánszky, Zoltán
2017-01-01
Machine learning techniques have become increasingly popular in the field of resting-state fMRI (functional magnetic resonance imaging) network-based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called the connectome-convolutional neural network (CCNN). Our results on simulated datasets and on a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics, and that models using a combination of different connectivity descriptors outperform classifiers using only one metric. It follows from this flexibility that our proposed CCNN model can easily be adapted to a wide range of connectome-based classification or regression tasks by varying which connectivity descriptor combinations are used to train the network. PMID:29089883
Face recognition: a convolutional neural-network approach.
Lawrence, S; Giles, C L; Tsoi, A C; Back, A D
1997-01-01
We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
NASA Astrophysics Data System (ADS)
Schanz, Martin; Ye, Wenjing; Xiao, Jinyou
2016-04-01
Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh than the convolution quadrature method to reach the same level of accuracy. If fast methods such as the fast multipole method are further used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies required for the calculation, which improves the conditioning of the system matrix.
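The exponential-window idea can be illustrated on a simple discrete convolution: damping the sequences before the DFT shifts the evaluation to complex frequencies with a positive real part, which suppresses wrap-around. A pure-Python sketch (O(N²) DFT for clarity; the damping parameter is illustrative):

```python
# Exponential window method: damp, transform, multiply, invert, undamp.
# With damping e^{-a n}, the circular (DFT-based) convolution approximates
# the linear convolution even without zero padding.

import cmath

def dft(x, inverse=False):
    N = len(x)
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N)
               for n in range(N)) for k in range(N)]
    return [v / N for v in out] if inverse else out

def windowed_convolution(x, y, a):
    """Convolve via DFT after applying the exponential window e^{-a n}."""
    N = len(x)
    w = [cmath.exp(-a * n) for n in range(N)]
    X = dft([xi * wi for xi, wi in zip(x, w)])
    Y = dft([yi * wi for yi, wi in zip(y, w)])
    z = dft([xi * yi for xi, yi in zip(X, Y)], inverse=True)
    # Undo the damping; imaginary parts are round-off for real inputs.
    return [(zi / wi).real for zi, wi in zip(z, w)]

# Without damping, the circular convolution of two all-ones sequences
# aliases badly; with damping it recovers the linear result n + 1.
z = windowed_convolution([1.0] * 8, [1.0] * 8, a=2.0)
```

The wrapped terms are attenuated by a factor e^{-aN}, which is the discrete analogue of the shifted contour in the inverse Laplace transform.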
ASIC-based architecture for the real-time computation of 2D convolution with large kernel size
NASA Astrophysics Data System (ADS)
Shao, Rui; Zhong, Sheng; Yan, Luxin
2015-12-01
Two-dimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. To improve the efficiency of on-chip storage resources and reduce off-chip memory bandwidth, a data-reuse cache is constructed: multi-block SPRAM caches image blocks, and an on-chip ping-pong scheme takes full advantage of data reuse in the convolution calculation. A new ASIC data-scheduling scheme and overall architecture are designed around this cache. Experimental results show that the architecture achieves real-time convolution with kernels up to 40×32 while improving the utilization of on-chip memory bandwidth and on-chip memory resources, maximizing output data throughput, and reducing the required off-chip memory bandwidth.
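As a point of reference for the hardware design, a plain software model of the accelerated operation (the sliding-window correlation form commonly called convolution in image processing; sizes here are illustrative, the ASIC instead reuses cached rows):

```python
# Reference model of "valid" 2-D convolution: slide the kernel over the
# image and accumulate elementwise products at each position.

def conv2d_valid(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
box = [[1, 1],
       [1, 1]]  # 2x2 box filter: sums each 2x2 neighborhood
result = conv2d_valid(img, box)  # → [[12, 16], [24, 28]]
```

The four nested loops make the cost of large kernels explicit; the paper's cache-reuse scheme exists precisely to keep the inner two loops fed without refetching image rows from off-chip memory.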
2007-06-01
Table 2 lists the best (maximum free distance) rate r = 2/3 punctured convolutional code information weight structure (columns K, d_free, B_free), determined from the Hamming distance between all pairs of non-zero paths. (From: [12]).
A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE
NASA Technical Reports Server (NTRS)
Truong, T. K.
1994-01-01
This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
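The polynomial-ring reduction the program exploits can be checked numerically: treating each row of the array as a ring element whose product is a 1-D cyclic convolution reproduces the full 2-D cyclic convolution. A small Python sketch (sizes illustrative):

```python
# 2-D cyclic convolution two ways: directly, and as a 1-D cyclic
# convolution of rows in a polynomial ring (rows multiply by 1-D
# cyclic convolution, and row indices add modulo the row count).

def cconv1d(a, b):
    N = len(a)
    return [sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)]

def cconv2d_direct(A, B):
    R, C = len(A), len(A[0])
    out = [[0] * C for _ in range(R)]
    for r in range(R):
        for c in range(C):
            out[r][c] = sum(A[i][j] * B[(r - i) % R][(c - j) % C]
                            for i in range(R) for j in range(C))
    return out

def cconv2d_via_ring(A, B):
    """Rows are ring elements; row 'multiplication' is 1-D cyclic conv."""
    R, C = len(A), len(A[0])
    out = [[0] * C for _ in range(R)]
    for i in range(R):
        for k in range(R):
            prod = cconv1d(A[i], B[k])
            row = (i + k) % R
            out[row] = [x + y for x, y in zip(out[row], prod)]
    return out

A = [[1, 2, 0], [0, 1, 3]]
B = [[2, 0, 1], [1, 1, 0]]
same = cconv2d_direct(A, B) == cconv2d_via_ring(A, B)
```

The FPT algorithm goes further by evaluating the ring product with polynomial transforms of a single length, but the reduction above is the structural fact it rests on.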
NASA Technical Reports Server (NTRS)
Asbury, Scott C.; Hunter, Craig A.
1999-01-01
An investigation was conducted in the model preparation area of the Langley 16-Foot Transonic Tunnel to determine the effects of convoluted divergent-flap contouring on the internal performance of a fixed-geometry, nonaxisymmetric, convergent-divergent exhaust nozzle. Testing was conducted at static conditions using a sub-scale nozzle model with one baseline and four convoluted configurations. All tests were conducted with no external flow at nozzle pressure ratios from 1.25 to approximately 9.50. Results indicate that baseline nozzle performance was dominated by unstable, shock-induced, boundary-layer separation at overexpanded conditions. Convoluted configurations were found to significantly reduce, and in some cases totally alleviate, separation at overexpanded conditions. This result was attributed to the ability of convoluted contouring to energize and improve the condition of the nozzle boundary layer. Separation alleviation offers potential for installed nozzle aeropropulsive (thrust-minus-drag) performance benefits by reducing drag at forward flight speeds, even though this may reduce nozzle thrust ratio by as much as 6.4% at off-design conditions. At on-design conditions, nozzle thrust ratio for the convoluted configurations ranged from 1% to 2.9% below the baseline configuration; this was a result of increased skin friction and oblique shock losses inside the nozzle.
Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L
2018-04-01
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
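The mechanics of atrous convolution are easiest to see in one dimension: the kernel taps are spread `rate` samples apart, enlarging the field of view with no extra parameters. A minimal sketch (not the DeepLab implementation):

```python
# 'Valid' dilated (atrous) convolution in 1-D: with rate r, a k-tap
# kernel spans (k - 1) * r + 1 input samples.

def atrous_conv1d(signal, kernel, rate):
    k = len(kernel)
    span = (k - 1) * rate + 1          # effective field of view
    out = []
    for start in range(len(signal) - span + 1):
        out.append(sum(kernel[t] * signal[start + t * rate]
                       for t in range(k)))
    return out

x = [1, 2, 3, 4, 5, 6]
w = [1, 0, -1]                          # simple difference kernel
dense = atrous_conv1d(x, w, rate=1)    # field of view 3
dilated = atrous_conv1d(x, w, rate=2)  # field of view 5, same 3 taps
```

The same three weights cover a wider context at rate 2, which is exactly the trade the paper makes inside the DCNN to densify feature responses without extra computation.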
Which skills and factors better predict winning and losing in high-level men's volleyball?
Peña, Javier; Rodríguez-Guerra, Jorge; Buscà, Bernat; Serra, Núria
2013-09-01
The aim of this study was to determine which skills and factors better predicted the outcomes of regular season volleyball matches in the Spanish "Superliga" and were significant for obtaining positive results in the game. The study sample consisted of 125 matches played during the 2010-11 Spanish men's first division volleyball championship. Matches were played by 12 teams composed of 148 players from 17 different nations from October 2010 to March 2011. The variables analyzed were the result of the game, team category, home/away court factors, points obtained in the break point phase, number of service errors, number of service aces, number of reception errors, percentage of positive receptions, percentage of perfect receptions, reception efficiency, number of attack errors, number of blocked attacks, attack points, percentage of attack points, attack efficiency, and number of blocks performed by both teams participating in the match. The results showed that the variables of team category, points obtained in the break point phase, number of reception errors, and number of blocked attacks by the opponent were significant predictors of winning or losing the matches. Odds ratios indicated that the odds of winning a volleyball match were 6.7 times greater for the teams belonging to higher rankings and that every additional point in Complex II increased the odds of winning a match by 1.5 times. Every reception and blocked ball error decreased the possibility of winning by 0.6 and 0.7 times, respectively.
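The reported odds ratios have the usual logistic-regression interpretation: each coefficient multiplies the log-odds, so exp(beta) is the per-unit odds ratio. A minimal sketch with illustrative numbers (not the paper's fitted model):

```python
# Odds ratios from logistic-regression coefficients (illustrative only).

import math

def odds_ratio(beta):
    """exp(beta) is the multiplicative change in odds per one-unit
    change in the predictor."""
    return math.exp(beta)

def win_probability(logit):
    """Logistic link: convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-logit))

# A coefficient of ln(1.5) ≈ 0.405 would correspond to the reported
# 1.5x odds of winning per additional break point.
beta_break_point = math.log(1.5)
```

Coefficients below zero (as for reception and blocked-attack errors here) give odds ratios below 1, i.e. each error multiplies the odds of winning by the reported 0.6 or 0.7.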
On-line bolt-loosening detection method of key components of running trains using binocular vision
NASA Astrophysics Data System (ADS)
Xie, Yanxia; Sun, Junhua
2017-11-01
Bolt loosening, as a hidden fault, affects the running quality of trains and can even cause serious safety accidents. However, existing fault detection approaches based on two-dimensional images cannot detect bolt loosening due to the lack of depth information. Therefore, we propose a novel online bolt-loosening detection method using binocular vision. Firstly, a target detection model based on a convolutional neural network (CNN) is used to locate the target regions. Then, stereo matching and three-dimensional reconstruction are performed to detect bolt-loosening faults. The experimental results show that the looseness of multiple bolts can be characterized by the method simultaneously. The measurement repeatability and precision are less than 0.03 mm and 0.09 mm, respectively, and the relative error is within 1.09%.
MAIZE: a 1 MA LTD-Driven Z-Pinch at The University of Michigan
NASA Astrophysics Data System (ADS)
Gilgenbach, R. M.; Gomez, M. R.; Zier, J. C.; Tang, W. W.; French, D. M.; Lau, Y. Y.; Mazarakis, M. G.; Cuneo, M. E.; Johnston, M. D.; Oliver, B. V.; Mehlhorn, T. A.; Kim, A. A.; Sinebryukhov, V. A.
2009-01-01
Researchers at The University of Michigan have constructed and tested a 1-MA Linear Transformer Driver (LTD), the first of its type to reach the USA. The Michigan Accelerator for Inductive Z-pinch Experiments, (MAIZE), is based on the LTD developed at the Institute of High Current Electronics in collaboration with Sandia National Labs and UM. This LTD utilizes 80 capacitors and 40 spark gap switches, arranged in 40 "bricks," to deliver a 1 MA, 100 kV pulse with 100 ns risetime into a matched resistive load. Preliminary resistive-load test results are presented for the LTD facility. Planned experimental research programs at UM include: a) Studies of Magneto-Rayleigh-Taylor instability of planar foils, and b) Vacuum convolute studies including cathode and anode plasma.
Comparison of spectra using a Bayesian approach. An argument using oil spills as an example.
Li, Jianfeng; Hibbert, D Brynn; Fuller, Steven; Cattle, Julie; Pang Way, Christopher
2005-01-15
The problem of assigning a probability of matching a number of spectra is addressed. The context is in environmental spills when an EPA needs to show that the material from a polluting spill (e.g., oil) is likely to have originated at a particular site (factory, refinery) or from a vehicle (road tanker or ship). Samples are taken from the spill and from candidate sources, and are analyzed by spectroscopy (IR, fluorescence) or chromatography (GC or GC/MS). A matching algorithm is applied to pairs of spectra, giving a single statistic (R). This can be a point-to-point match giving a correlation coefficient or a Euclidean distance, or a derivative of these parameters. The distributions of R for same and different samples are established from existing data. For matching statistics with values in the range {0,1}, corresponding to no match (0) through a perfect match (1), a beta distribution can be fitted to most data. The values of R from the match of the spectrum of a spilled oil and of each of a number of suspects are calculated, and Bayes' theorem is applied to give a probability of a match between the spill sample and each candidate, and the probability of no match at all. The method is most effective when simple inspection of the matching parameters does not lead to an obvious conclusion; i.e., there is overlap of the distributions giving rise to dubiety of an assignment. The ratio of the probability of finding a given matching statistic if there were a match to the probability of finding it if there were no match (the likelihood ratio) is a sensitive and useful parameter to guide the analyst. It is proposed that this approach may be acceptable to a court of law and avoid challenges of apparently subjective opinion of an analyst. Examples of matching the fluorescence and infrared spectra of diesel oils are given.
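The beta-distribution/likelihood-ratio machinery described above can be sketched with the standard library; the shape parameters below are hypothetical, standing in for distributions fitted to same-source and different-source data:

```python
# Bayes' theorem on a matching statistic r in (0, 1), with beta
# densities modeling the same-source and different-source distributions.
# Shape parameters are hypothetical, not fitted to real spill data.

import math

def beta_pdf(x, a, b):
    """Beta density via the gamma function (stdlib only)."""
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * x ** (a - 1) * (1.0 - x) ** (b - 1)

def match_posterior(r, match_ab, nonmatch_ab, prior_match=0.5):
    """posterior odds = likelihood ratio * prior odds."""
    lr = beta_pdf(r, *match_ab) / beta_pdf(r, *nonmatch_ab)
    prior_odds = prior_match / (1.0 - prior_match)
    post_odds = lr * prior_odds
    return lr, post_odds / (1.0 + post_odds)

# Same-source R concentrated near 1, different-source R near 0.6.
lr, p = match_posterior(0.95, match_ab=(20, 2), nonmatch_ab=(6, 4))
```

A likelihood ratio well above 1 pushes the posterior toward a match; near 1 it flags the overlap region where inspection alone would be inconclusive.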
A separable two-dimensional discrete Hartley transform
NASA Technical Reports Server (NTRS)
Watson, A. B.; Poirson, A.
1985-01-01
Bracewell has proposed the Discrete Hartley Transform (DHT) as a substitute for the Discrete Fourier Transform (DFT), particularly as a means of convolution. Here, it is shown that the most natural extension of the DHT to two dimensions fails to be separable in the two dimensions, and is therefore inefficient. An alternative separable form is considered, and the corresponding convolution theorem is derived. It is also argued that the DHT is unlikely to provide faster convolution than the DFT.
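For reference, the 1-D DHT uses the cas kernel cas θ = cos θ + sin θ and is its own inverse up to a factor of N; the separable 2-D form discussed above simply applies it along rows and then columns. A small sketch:

```python
# Discrete Hartley transform (real-valued) and its separable 2-D form.

import math

def dht(x):
    """1-D DHT with the cas kernel cos + sin; DHT(DHT(x)) = N * x."""
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * k * n / N) +
                        math.sin(2 * math.pi * k * n / N))
                for n in range(N)) for k in range(N)]

def dht2_separable(img):
    """Separable 2-D DHT: rows first, then columns. (As the paper notes,
    this is NOT the same as the 'natural' 2-D cas(u + v) form.)"""
    rows = [dht(r) for r in img]
    transposed = [list(c) for c in zip(*rows)]
    cols = [dht(c) for c in transposed]
    return [list(r) for r in zip(*cols)]

impulse = dht([1.0, 0.0, 0.0, 0.0])            # flat spectrum
x = [1.0, 2.0, 3.0, 4.0]
recovered = [v / len(x) for v in dht(dht(x))]  # involution property
img2 = dht2_separable([[1.0, 0.0], [0.0, 0.0]])
```

Being real-to-real, the DHT avoids complex arithmetic, but as the paper argues this does not translate into faster convolution than the DFT in practice.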
Iterative deep convolutional encoder-decoder network for medical image segmentation.
Jung Uk Kim; Hak Gu Kim; Yong Man Ro
2017-07-01
In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which enables precise localization of regions of interest (ROIs), including complex shapes and detailed textures, in medical images in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework yields excellent segmentation performance on a variety of medical images. The effectiveness of the proposed method is demonstrated by comparison with other state-of-the-art medical image segmentation methods.
Reconfigurable Gabor Filter For Fingerprint Recognition Using FPGA Verilog
NASA Astrophysics Data System (ADS)
Rosshidi, H. T.; Hadi, A. R.
2009-06-01
This paper presents an implementation of a Gabor filter for fingerprint recognition using Verilog HDL. The work demonstrates the application of the Gabor filter technique to enhance fingerprint images. The incoming signal, in the form of image pixels, is convolved with the Gabor filter to delineate the ridge and valley regions of the fingerprint. This is done with a real-time convolver based on a Field Programmable Gate Array (FPGA) that performs the convolution operation. The main characteristics of the proposed approach are the use of memory to store the incoming image pixels and the Gabor filter coefficients before the convolution takes place. The result is the input signal convolved with the Gabor coefficients.
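The stored Gabor coefficients correspond to a kernel of the following form: a Gaussian envelope modulating an oriented cosine. A sketch of how such a coefficient table might be generated offline before being loaded into FPGA memory (parameter values illustrative):

```python
# Real-valued Gabor kernel: Gaussian envelope times an oriented cosine.
# Parameters (orientation, frequency, sigma) are illustrative.

import math

def gabor_kernel(size, theta, freq, sigma):
    half = size // 2
    kern = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            row.append(math.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) *
                       math.cos(2 * math.pi * freq * xr))
        kern.append(row)
    return kern

kern = gabor_kernel(5, 0.0, 0.25, 2.0)
```

In the fingerprint setting, the orientation would track the local ridge direction and the frequency the local ridge spacing, with one such table per orientation stored alongside the image pixels.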
Convolutional neural network for road extraction
NASA Astrophysics Data System (ADS)
Li, Junping; Ding, Yazhou; Feng, Fajie; Xiong, Baoyu; Cui, Weihong
2017-11-01
In this paper, a convolutional neural network with large input blocks and small output blocks was used to extract roads. To capture the complex road characteristics of the study area, a deep convolutional neural network (VGG19) was applied to road extraction. Based on an analysis of how different input block sizes and output block sizes affect the extraction results, the votes of several deep convolutional neural networks were used as the final road prediction. The study image was a GF-2 panchromatic and multispectral fusion image of Yinchuan. The precision of road extraction was 91%. The experiments showed that model averaging can improve accuracy to some extent. This paper also gives advice on the choice of input block size and output block size.
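The voting step can be sketched directly: each model contributes a binary road mask and the fused prediction takes the per-pixel majority (a simple stand-in for the paper's model averaging, with made-up masks):

```python
# Per-pixel majority vote over binary road masks from several models.

def majority_vote(predictions):
    """predictions: list of equally-sized 2-D binary masks."""
    n = len(predictions)
    rows = len(predictions[0])
    cols = len(predictions[0][0])
    return [[1 if sum(p[r][c] for p in predictions) * 2 > n else 0
             for c in range(cols)]
            for r in range(rows)]

# Three toy 2x2 masks from three hypothetical networks.
preds = [[[1, 0], [1, 1]],
         [[1, 1], [0, 1]],
         [[0, 0], [1, 1]]]
fused = majority_vote(preds)
```

Averaging softmax scores before thresholding is a common alternative to hard voting and tends to behave similarly when the models are diverse.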
Foltz, T M; Welsh, B M
1999-01-01
This paper uses the fact that the discrete Fourier transform diagonalizes a circulant matrix to provide an alternate derivation of the symmetric convolution-multiplication property for discrete trigonometric transforms. Derived in this manner, the symmetric convolution-multiplication property extends easily to multiple dimensions using the notion of block circulant matrices and generalizes to multidimensional asymmetric sequences. The symmetric convolution of multidimensional asymmetric sequences can then be accomplished by taking the product of the trigonometric transforms of the sequences and then applying an inverse trigonometric transform to the result. An example is given of how this theory can be used for applying a two-dimensional (2-D) finite impulse response (FIR) filter with nonlinear phase which models atmospheric turbulence.
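The diagonalization fact the derivation rests on is easy to verify numerically: the DFT basis vectors are eigenvectors of any circulant matrix, with eigenvalues given by the DFT of its first column. A pure-Python check:

```python
# Verify that the DFT diagonalizes a circulant matrix: C v_k = lambda_k v_k,
# where v_k is the k-th DFT basis vector and lambda_k the DFT of column 0.

import cmath

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by vector x."""
    N = len(c)
    return [sum(c[(i - j) % N] * x[j] for j in range(N)) for i in range(N)]

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

c = [4.0, 1.0, 0.0, 1.0]   # symmetric first column, as for a FIR blur
lams = dft(c)
N = len(c)
max_err = 0.0
for k in range(N):
    v = [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]
    Cv = circulant_matvec(c, v)
    max_err = max(max_err,
                  max(abs(cvi - lams[k] * vi) for cvi, vi in zip(Cv, v)))
```

The paper's block-circulant argument is the multidimensional version of this identity, with trigonometric transforms playing the role of the DFT for symmetrically extended sequences.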
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
Unsupervised machine learning account of magnetic transitions in the Hubbard model
NASA Astrophysics Data System (ADS)
Ch'ng, Kelvin; Vazquez, Nick; Khatami, Ehsan
2018-01-01
We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and therefore classify, raw (auxiliary) spin configurations generated, through Monte Carlo simulations of small clusters, for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model can be shown to produce the magnetization and the susceptibility as a function of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows a near perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling when the "sign problem" in quantum Monte Carlo simulations is present.
Moro, Daniele; Valdrè, Giovanni; Mesto, Ernesto; Scordari, Fernando; Lacalamita, Maria; Ventura, Giancarlo Della; Bellatreccia, Fabio; Scirè, Salvatore; Schingaro, Emanuela
2017-01-01
This study presents a cross-correlated surface and near surface investigation of two phlogopite polytypes from Kasenyi kamafugitic rocks (SW Uganda) by means of advanced Atomic Force Microscopy (AFM), confocal microscopy and Raman micro-spectroscopy. AFM revealed comparable nanomorphology and electrostatic surface potential for the two mica polytypes. A widespread presence of nano-protrusions located on the mica flake surface was also observed, with an aspect ratio (maximum height/maximum width) from 0.01 to 0.09. Confocal microscopy showed these features to range from few nm to several μm in dimension, and shapes from perfectly circular to ellipsoidal and strongly elongated. Raman spectra collected across the bubbles showed an intense and convoluted absorption in the range 3000–2800 cm−1, associated with weaker bands at 1655, 1438 and 1297 cm−1, indicating the presence of fluid inclusions consisting of aliphatic hydrocarbons, alkanes and cycloalkanes, with minor amounts of oxygenated compounds, such as carboxylic acids. High-resolution Raman images provided evidence that these hydrocarbons are confined within the bubbles. This work represents the first direct evidence that phlogopite, a common rock-forming mineral, may be a possible reservoir for hydrocarbons. PMID:28098185
The perfect match: Do criminal stereotypes bias forensic evidence analysis?
Smalarz, Laura; Madon, Stephanie; Yang, Yueran; Guyll, Max; Buck, Sarah
2016-08-01
This research provided the first empirical test of the hypothesis that stereotypes bias evaluations of forensic evidence. A pilot study (N = 107) assessed the content and consensus of 20 criminal stereotypes by identifying perpetrator characteristics (e.g., sex, race, age, religion) that are stereotypically associated with specific crimes. In the main experiment (N = 225), participants read a mock police incident report involving either a stereotyped crime (child molestation) or a nonstereotyped crime (identity theft) and judged whether a suspect's fingerprint matched a fingerprint recovered at the crime scene. Accompanying the suspect's fingerprint was personal information about the suspect of the type that is routinely available to fingerprint analysts (e.g., race, sex) and which could activate a stereotype. Participants most often perceived the fingerprints to match when the suspect fit the criminal stereotype, even though the prints did not actually match. Moreover, participants appeared to be unaware of the extent to which a criminal stereotype had biased their evaluations. These findings demonstrate that criminal stereotypes are a potential source of bias in forensic evidence analysis and suggest that suspects who fit criminal stereotypes may be disadvantaged over the course of the criminal justice process. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Perceptual expertise in forensic facial image comparison
White, David; Phillips, P. Jonathon; Hahn, Carina A.; Hill, Matthew; O'Toole, Alice J.
2015-01-01
Forensic facial identification examiners are required to match the identity of faces in images that vary substantially, owing to changes in viewing conditions and in a person's appearance. These identifications affect the course and outcome of criminal investigations and convictions. Despite calls for research on sources of human error in forensic examination, existing scientific knowledge of face matching accuracy is based, almost exclusively, on people without formal training. Here, we administered three challenging face matching tests to a group of forensic examiners with many years' experience of comparing face images for law enforcement and government agencies. Examiners outperformed untrained participants and computer algorithms, thereby providing the first evidence that these examiners are experts at this task. Notably, computationally fusing responses of multiple experts produced near-perfect performance. Results also revealed qualitative differences between expert and non-expert performance. First, examiners' superiority was greatest at longer exposure durations, suggestive of more entailed comparison in forensic examiners. Second, experts were less impaired by image inversion than non-expert students, contrasting with face memory studies that show larger face inversion effects in high performers. We conclude that expertise in matching identity across unfamiliar face images is supported by processes that differ qualitatively from those supporting memory for individual faces. PMID:26336174
Leite, Harlei Miguel de Arruda; de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares
2018-01-01
This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named "Get Coins," through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurvits, L.
2002-01-01
Classical matching theory can be defined in terms of matrices with nonnegative entries. The notion of a positive operator, central in Quantum Theory, is a natural generalization of matrices with nonnegative entries. Based on this point of view, we introduce a definition of perfect Quantum (operator) matching. We show that the new notion inherits many 'classical' properties, but not all of them. This new notion goes somewhere beyond matroids. For separable bipartite quantum states this new notion coincides with the full rank property of the intersection of two corresponding geometric matroids. In the classical situation, permanents are naturally associated with perfect matchings. We introduce an analog of permanents for positive operators, called the Quantum Permanent, and show how this generalization of the permanent is related to Quantum Entanglement. Among other things, Quantum Permanents provide new rational inequalities necessary for the separability of bipartite quantum states. Using Quantum Permanents, we give a deterministic poly-time algorithm to solve the Hidden Matroids Intersection Problem and indicate some 'classical' complexity difficulties associated with Quantum Entanglement. Finally, we prove that the weak membership problem for the convex set of separable bipartite density matrices is NP-HARD.
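The classical correspondence this abstract starts from, that the permanent of a 0/1 biadjacency matrix counts the perfect matchings of a bipartite graph, can be illustrated directly. A minimal sketch using Ryser's inclusion-exclusion formula (classical case only, not the paper's quantum generalization):

```python
from itertools import combinations

def permanent(a):
    """Permanent of an n x n matrix via Ryser's formula, O(2^n * n^2)."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - r) * prod
    return total

def has_perfect_matching(biadj):
    """For a 0/1 biadjacency matrix, the permanent counts the perfect
    matchings, so the graph has one iff the permanent is positive."""
    return permanent(biadj) > 0
```

For `[[1, 1], [1, 1]]` the permanent is 2, matching the two perfect matchings of the complete bipartite graph K_{2,2}.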
Functional and structural mapping of human cerebral cortex: Solutions are in the surfaces
Van Essen, David C.; Drury, Heather A.; Joshi, Sarang; Miller, Michael I.
1998-01-01
The human cerebral cortex is notorious for the depth and irregularity of its convolutions and for its variability from one individual to the next. These complexities of cortical geography have been a chronic impediment to studies of functional specialization in the cortex. In this report, we discuss ways to compensate for the convolutions by using a combination of strategies whose common denominator involves explicit reconstructions of the cortical surface. Surface-based visualization involves reconstructing cortical surfaces and displaying them, along with associated experimental data, in various complementary formats (including three-dimensional native configurations, two-dimensional slices, extensively smoothed surfaces, ellipsoidal representations, and cortical flat maps). Generating these representations for the cortex of the Visible Man leads to a surface-based atlas that has important advantages over conventional stereotaxic atlases as a substrate for displaying and analyzing large amounts of experimental data. We illustrate this by showing the relationship between functionally specialized regions and topographically organized areas in human visual cortex. Surface-based warping allows data to be mapped from individual hemispheres to a surface-based atlas while respecting surface topology, improving registration of identifiable landmarks, and minimizing unwanted distortions. Surface-based warping also can aid in comparisons between species, which we illustrate by warping a macaque flat map to match the shape of a human flat map. Collectively, these approaches will allow more refined analyses of commonalities as well as individual differences in the functional organization of primate cerebral cortex. PMID:9448242
The prisoner’s dilemma on co-evolving networks under perfect rationality
NASA Astrophysics Data System (ADS)
Biely, Christoly; Dragosits, Klaus; Thurner, Stefan
2007-04-01
We consider the prisoner’s dilemma being played repeatedly on a dynamic network, where agents may choose their actions as well as their co-players. This leads to co-evolution of the network structure and the strategy patterns of the players. Individual decisions are made fully rationally and are based on local information only. They are made such that links to defecting agents are dissolved and cooperating agents build up new links. The exact form of the updating scheme is motivated by profit maximization, not by imitation. If players update their decisions in a synchronized way, the system exhibits oscillatory dynamics: periods of growing cooperation (and total linkage) alternate with periods of increasing defection. The cyclical behavior is reduced and the system stabilizes at significant total cooperation levels when players are less synchronized. In this regime we find emergent network structures resembling ‘complex’ and hierarchical topology. The exponent of the power-law degree distribution (γ ∼ 8.6) perfectly matches empirical results for human communication networks.
SNV-PPILP: refined SNV calling for tumor data using perfect phylogenies and ILP.
van Rens, Karen E; Mäkinen, Veli; Tomescu, Alexandru I
2015-04-01
Recent studies sequenced tumor samples from the same progenitor at different development stages and showed that single-nucleotide variant (SNV) calling can be improved by taking the phylogeny of this development into account. Accurate SNV calls can better reveal early-stage tumors, identify mechanisms of cancer progression, and help in drug targeting. We present SNV-PPILP, a fast and easy-to-use tool for refining GATK's Unified Genotyper SNV calls for multiple samples assumed to form a phylogeny. We tested SNV-PPILP on simulated data with a varying number of samples, SNVs, read coverage, and violations of the perfect phylogeny assumption. We always match or improve the accuracy of GATK, with a significant improvement at low read coverage. SNV-PPILP, available at cs.helsinki.fi/gsa/snv-ppilp/, is written in Python and requires the free ILP solver lp_solve. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
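The perfect-phylogeny assumption underlying this method can be checked on a binary sample-by-SNV matrix with the classic three-gamete test; a minimal sketch of that standard test (not SNV-PPILP's actual ILP formulation):

```python
def admits_perfect_phylogeny(matrix):
    """Binary character matrix (rows = samples, columns = SNVs) admits a
    perfect phylogeny rooted at the all-zero ancestor iff no column pair
    exhibits all three gametes (0,1), (1,0) and (1,1)."""
    n_cols = len(matrix[0])
    for i in range(n_cols):
        for j in range(i + 1, n_cols):
            gametes = {(row[i], row[j]) for row in matrix}
            if {(0, 1), (1, 0), (1, 1)} <= gametes:
                return False  # columns i and j conflict
    return True
```

A violation of the assumption, such as two SNVs appearing both separately and together across samples, makes the test fail.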
Self-Similar Apical Sharpening of an Ideal Perfectly Conducting Fluid Subject to Maxwell Stresses
NASA Astrophysics Data System (ADS)
Zhou, Chengzhe; Troian, Sandra M.
2016-11-01
We examine the apical behavior of an ideal, perfectly conducting incompressible fluid surrounded by vacuum in circumstances where the capillary, Maxwell and inertial forces contribute to formation of a liquid cone. A previous model based on potential flow describes a family of self-similar solutions with conic cusps whose interior angles approach the Taylor cone angle. These solutions were obtained by matching powers of the leading order terms in the velocity and electric field potential to the asymptotic form dictated by a stationary cone shape. In re-examining this earlier work, we have found a more important, neglected leading order term in the velocity and field potentials, which satisfies the governing, interfacial and far-field conditions as well. This term allows for the development of additional self-similar, sharpening apical shapes, including time reversed solutions for conic tip recoil after fluid ejection. We outline the boundary-element technique for solving the exact similarity solutions, which have parametric dependence on the far-field conditions, and discuss consequences of our findings.
Transient and Sharvin resistances of Luttinger liquids
NASA Astrophysics Data System (ADS)
Kloss, Thomas; Weston, Joseph; Waintal, Xavier
2018-04-01
Although the intrinsic conductance of an interacting one-dimensional system is renormalized by the electron-electron correlations, it has been known for some time that this renormalization is washed out by the presence of the (noninteracting) electrodes to which the wire is connected. Here, we study the transient conductance of such a wire: a finite voltage bias is suddenly applied across the wire and we measure the current before it has enough time to reach its stationary value. These calculations allow us to extract the Sharvin (contact) resistance of Luttinger and Fermi liquids. In particular, we find that a perfect junction between a Fermi liquid electrode and a Luttinger liquid electrode is characterized by a contact resistance that consists of half the quantum of conductance in series with half the intrinsic resistance of an infinite Luttinger liquid. These results were obtained using two different methods: a dynamical Hartree-Fock approach and a self-consistent Boltzmann approach. Although these methods are formally approximate, we find a perfect match with the exact results of Luttinger/Fermi liquid theory.
Relationship between visuospatial neglect and kinesthetic deficits after stroke.
Semrau, Jennifer A; Wang, Jeffery C; Herter, Troy M; Scott, Stephen H; Dukelow, Sean P
2015-05-01
After stroke, visuospatial and kinesthetic (sense of limb motion) deficits are common, occurring in approximately 30% and 60% of individuals, respectively. Although both types of deficits affect aspects of spatial processing necessary for daily function, few studies have investigated the relationship between these 2 deficits after stroke. We aimed to characterize the relationship between visuospatial and kinesthetic deficits after stroke using the Behavioral Inattention Test (BIT) and a robotic measure of kinesthetic function. Visuospatial attention (using the BIT) and kinesthesia (using robotics) were measured in 158 individuals an average of 18 days after stroke. In the kinesthetic matching task, the robot moved the participant's stroke-affected arm at a preset direction, speed, and magnitude. Participants mirror-matched the robotic movement with the less/unaffected arm as soon as they felt movement in their stroke-affected arm. We found that participants with visuospatial inattention (neglect) had impaired kinesthesia 100% of the time, whereas only 59% of participants without neglect were impaired. For those without neglect, we observed that a higher percentage of participants with lower but passing BIT scores displayed impaired kinesthetic behavior (78%) compared with those participants who achieved a perfect or nearly perfect score on the BIT (49%). The presence of visuospatial neglect after stroke is highly predictive of the presence of kinesthetic deficits. However, the presence of kinesthetic deficits does not necessarily indicate the presence of visuospatial neglect. Our findings highlight the importance of assessment and treatment of kinesthetic deficits after stroke, especially in patients with visuospatial neglect. © The Author(s) 2014.
Opto-mechanical door locking system
NASA Astrophysics Data System (ADS)
Patil, Saurabh S.; Rodrigues, Vanessa M.; Patil, Ajeetkumar; Chidangil, Santhosh
2015-09-01
We present an opto-mechanical door locking system: an optical system combining a coherent light source (laser) and a photodiode-based sensor, with a focus on security applications. The basic construct of the KEY comprises a laser source in a cylindrical enclosure that slides perfectly into the LOCK. The laser is pulsed at a fixed encrypted frequency unique to that locking system. Transistor-transistor logic (TTL) circuitry is used to achieve the encryption. The casing of the key is designed in such a way that it powers the pulsing laser only when the key is inserted in the slot provided for it. The lock includes a photo-sensor that converts the detected light intensity to a corresponding electrical signal by decrypting the frequency. The lock also contains a circuit with a feedback system that carries the digital information regarding the encryption frequency code. The information received from the sensor is matched against the stored code; if a perfect match is found, a signal is sent to the servo to unlock the mechanical lock or to carry out any other operation. This technique can be incorporated in security systems for residences and safe houses, and can easily replace conventional locks that rely on fixed patterns to unlock. The major advantage of this proposed opto-mechanical system over conventional ones is that it no longer relies on a solid/imprinted pattern to perform its task, which makes it almost impossible to tamper with.
A digital pixel cell for address event representation image convolution processing
NASA Astrophysics Data System (ADS)
Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timing), synaptic neural connections can be time-multiplexed, while neural activity signals (with millisecond timing) are sampled at low frequencies. Also, neurons generate events according to their information levels. Neurons with more information (activity, derivatives of activity, contrast, motion, edges, ...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, ... There has also been a proposal for realizing programmable-kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital reference implementation against which to compare the mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable-kernel image convolution processing.
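The add-and-fire behavior described above can be sketched in a few lines; this is an illustrative behavioral model of one pixel, not the authors' circuit, and the reset-by-subtraction policy is an assumption:

```python
class AERConvolutionPixel:
    """One digital pixel: accumulates kernel-weighted input events and
    fires an output event when the accumulator reaches a fixed threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.acc = 0      # event accumulator
        self.fired = 0    # number of output events emitted

    def receive(self, weight):
        """Process one incoming event with the given kernel weight.
        Returns True when an output event fires (and is routed off-chip)."""
        self.acc += weight
        if self.acc >= self.threshold:
            self.acc -= self.threshold  # reset by subtraction
            self.fired += 1
            return True
        return False
```

With a threshold of 4, the weight sequence 2, 1, 2 fires exactly one event and leaves a residue of 1 in the accumulator.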
2006-12-01
Convolutional encoder of rate 1/2 (From [10]). Table 3 shows the puncturing patterns used to derive the different code rates. X precedes Y in the order... convolutional code with puncturing configuration (From [10])... Table 4. Mandatory channel coding per modulation (From [10]... a concatenation of a Reed–Solomon outer code and a rate-adjustable convolutional inner code. At the transmitter, data shall first be encoded with
Synchronization Analysis and Simulation of a Standard IEEE 802.11G OFDM Signal
2004-03-01
Figure 26 Convolutional Encoder Parameters. Figure 27 Puncturing Parameters. As per Table 3, the required code rate is r = 3/4, which requires... to achieve the higher data rates required by the Standard 802.11b was accomplished by using packet binary convolutional coding (PBCC). Essentially... higher data rates are achieved by using convolutional coding combined with BPSK or QPSK modulation. The data is first encoded with a rate one-half
Design and System Implications of a Family of Wideband HF Data Waveforms
2010-09-01
code rates (i.e., 8/9, 9/10) will be used to attain the highest data rates for surface wave links. Very high puncturing of convolutional codes can... Communication Links”, Edition 1, North Atlantic Treaty Organization, 2009. [14] Yasuda, Y., Kashiki, K., Hirata, Y. “High-Rate Punctured Convolutional Codes...” ...length 7 convolutional code that has been used for over two decades in 110A. In addition, repetition coding and puncturing were
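The rate-adjustment scheme running through these excerpts, a fixed rate-1/2 convolutional encoder whose output is punctured up to 3/4, 8/9 and so on, can be sketched as follows. The (133, 171) octal generators and the rate-3/4 puncturing pattern are common choices assumed here, and the bit-ordering convention is illustrative:

```python
def conv_encode_half_rate(bits, g1=0o133, g2=0o171, k=7):
    """Rate-1/2 convolutional encoder, constraint length k, zero initial
    state; each input bit yields two coded bits (one per generator)."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g1).count('1') % 2)  # parity of tapped bits
        out.append(bin(state & g2).count('1') % 2)
    return out

def puncture(coded, pattern=(1, 1, 0, 1, 1, 0)):
    """Drop coded bits where the repeating pattern holds a 0; the default
    pattern turns rate 1/2 into rate 3/4 (4 bits kept per 6 produced)."""
    reps = pattern * (len(coded) // len(pattern) + 1)
    return [c for c, keep in zip(coded, reps) if keep]
```

Three information bits produce six coded bits; puncturing keeps four of them, giving rate 3/4. Heavier puncturing of the same mother code yields the 8/9 and 9/10 rates mentioned above.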
Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.
Huang, Yan; Wang, Wei; Wang, Liang
2018-04-01
Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based; and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With this powerful temporal dependency modeling, our model can super resolve videos with complex motions and achieve good performance.
Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1995-01-01
During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993, which has been included in this report in Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) a Unit-Memory Convolutional Encoder module (UMCEncd); (2) a hard-decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMCs, such as the UMC column distance function (UMCdc), the UMC free distance function (UMCdfree), the UMC row distance function (UMCdr), and UMC Transformation (UMCTrans). The study of UMCs was driven, in part, by the desire to investigate high-rate convolutional codes which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMCs were found which are good candidates for inner codes. Besides the further development of the simulator, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a technical note dated March 17, 1994. This technical note has also been included in this final report.
The effects of kinesio taping on the color intensity of superficial skin hematomas: A pilot study.
Vercelli, Stefano; Colombo, Claudio; Tolosa, Francesca; Moriondo, Andrea; Bravini, Elisabetta; Ferriero, Giorgio; Francesco, Sartorio
2017-01-01
To analyze the effects of kinesio taping (KT), applied with three different strains that did or did not induce the formation of skin creases (called convolutions), on the color intensity of post-surgical superficial hematomas. Single-blind paired study. Rehabilitation clinic. A convenience sample of 13 inpatients with post-surgical superficial hematomas. The tape was applied for 24 consecutive hours. Three tails of KT were randomly applied with different degrees of strain: none (SN); light (SL); and full longitudinal stretch (SF). We expected to obtain correct formation of convolutions with SL, some convolutions with SN, and no convolutions with SF. The change in color intensity of hematomas was measured by means of polar coordinates CIE L*a*b* using a validated and standardized digital imaging system. Applying KT to hematomas did not significantly change the color intensity in the central area under the tape (p > 0.05). There was a significant treatment effect (p < 0.05) under the edges of the tape, independently of the formation of convolutions (p > 0.05). The changes observed along the edges of the tape could be related to the formation of a pressure gradient between the KT and the adjacent area, but were not dependent on the formation of skin convolutions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter
2017-11-01
Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
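One of the handcrafted baselines mentioned above, statistics of the gray-level co-occurrence matrix (GLCM), is straightforward to sketch. A minimal version for a single pixel offset, with the quantization level and the contrast feature as illustrative choices:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy);
    img is a 2-D integer array with values in [0, levels)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(m):
    """Contrast feature: expected squared gray-level difference."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())
```

Features like this contrast value, computed over several offsets, would be stacked into the vectors fed to the random forest baseline.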
Towards dropout training for convolutional neural networks.
Wu, Haibing; Gu, Xiaodong
2015-11-01
Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage. Copyright © 2015 Elsevier Ltd. All rights reserved.
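The multinomial reading of max-pooling dropout, and the corresponding probabilistic weighted pooling at test time, can be sketched as follows. This is an illustrative NumPy rendering of the idea; the handling of the all-dropped case and the exact normalization are assumptions:

```python
import numpy as np

def max_pool_dropout(region, p_drop, rng):
    """Training time: retain each unit with probability 1 - p_drop, then
    max-pool the survivors (0 if everything was dropped). For nonnegative
    activations this samples the pooled value from a multinomial over the
    sorted units."""
    mask = rng.random(region.shape) >= p_drop
    kept = region * mask
    return float(kept.max()) if mask.any() else 0.0

def probabilistic_weighted_pool(region, p_drop):
    """Test time: weight the i-th largest activation by the probability
    that it survives as the max under dropout, i.e. (1 - p) * p**i."""
    a = np.sort(region.ravel())[::-1]  # activations in descending order
    q, p = 1.0 - p_drop, p_drop
    probs = np.array([q * p ** i for i in range(a.size)])
    return float((probs * a).sum())
```

With no dropout the weighted pool collapses to ordinary max-pooling, since all the probability mass sits on the largest activation.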
Frame prediction using recurrent convolutional encoder with residual learning
NASA Astrophysics Data System (ADS)
Yue, Boxuan; Liang, Jun
2018-05-01
Predicting the frames of a video is difficult but urgently needed in autonomous driving. Conventional methods can only predict some abstract trends of the region of interest. The boom of deep learning makes the prediction of frames possible. In this paper, we propose a novel recurrent convolutional encoder and deconvolutional decoder structure to predict frames. We introduce residual learning in the convolutional encoder structure to solve the gradient issues. Residual learning can transform the gradient back-propagation into an identity mapping. It can preserve the whole gradient information and overcome the gradient issues in Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Besides, compared with the branches in CNNs and the gated structures in RNNs, residual learning can reduce the training time significantly. In the experiments, we use the UCF101 dataset to train our networks, and the predictions are compared with some state-of-the-art methods. The results show that our networks can predict frames quickly and efficiently. Furthermore, our networks are applied to driving video to verify their practicability.
Corcodel, N; Rammelsberg, P; Jakstat, H; Moldovan, O; Schwarz, S; Hassel, A J
2010-11-01
Visual tooth colour assessment by use of the Vita 3D-Master(®) (3D; Vita Zahnfabrik, Bad Säckingen, Germany) is well documented. To improve handling, a new linear arrangement of the shade tabs has been introduced (LG; Linearguide 3D-Master(®)). The purpose of this study was to investigate whether the linear design has an effect on shade matching. Fifty-six students underwent identical theoretical and practical training, by use of an Internet learning module [Toothguide Training Software(®) (TT)] and a standardised training programme [Toothguide Training Box(®) (TTB)]. Each student then matched 30 randomly chosen shade tabs presented in an intra-oral setting by a standardised device [Toothguide Check Box(®) (TCB)]; 15 matches were made using the 3D and 15 using the LG shade guide system, under a daylight lamp (840 matches for each guide). It was recorded to what extent the presented and selected shade tabs, or the lightness groups of the tabs, matched, as well as the time needed for colour matching. Perfect matches were observed in 35% of cases for the 3D and 32% for the LG. The lightness group was correct in 59% of cases for the 3D and 56% for the LG. The mean time needed for matching of tabs and lightness group did not differ between groups (no significant difference for any assessment). Within the limitations of the study design, colour assessment with regard to performance and time needed in shade matching did not differ between the LG and the 3D. Therefore, the user should choose whichever shade tab arrangement is more applicable. © 2010 Blackwell Publishing Ltd.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1992-01-01
Work performed during the reporting period is summarized. Robustly good trellis codes for use with sequential decoding were constructed; these codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate-1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per bit position, were studied, and a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
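As a sketch of what such free-distance computations involve, the shortest-path search below finds the free distance of a rate-1/n feedforward convolutional code by Dijkstra's algorithm on the trellis. The (7,5) and (171,133) codes used for checking are classic textbook examples, not codes from this report, and the search is an illustration rather than the report's formula:

```python
import heapq

def free_distance(generators, K):
    """Free distance of a rate-1/n feedforward convolutional code:
    minimum output Hamming weight over paths that diverge from and
    later remerge with the all-zero state (Dijkstra on the trellis)."""
    def step(state, bit):
        reg = (bit << (K - 1)) | state           # shift-register contents
        weight = sum(bin(reg & g).count("1") % 2 for g in generators)
        return reg >> 1, weight                  # next state, output weight

    start, w0 = step(0, 1)                       # force divergence with a 1
    best = {start: w0}
    heap = [(w0, start)]
    while heap:
        w, s = heapq.heappop(heap)
        if s == 0:
            return w                             # remerged: weight is d_free
        if w > best.get(s, float("inf")):
            continue                             # stale heap entry
        for bit in (0, 1):
            s2, dw = step(s, bit)
            if w + dw < best.get(s2, float("inf")):
                best[s2] = w + dw
                heapq.heappush(heap, (w + dw, s2))

d_free = free_distance((0b111, 0b101), 3)        # the (7,5) code
```

For the (7,5) constraint-length-3 code this yields the well-known value 5 (via the input sequence 1,0,0), and for the standard (171,133) constraint-length-7 code it yields 10.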
Efficient airport detection using region-based fully convolutional neural networks
NASA Astrophysics Data System (ADS)
Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao
2018-04-01
This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we share the convolutional layers between the region proposal procedure and the airport detection procedure and use graphics processing units (GPUs) to reduce training and testing time. To compensate for the lack of labeled data, we transfer the convolutional layers of a ZF net pretrained on ImageNet to initialize the shared convolutional layers, and then retrain the model using an alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes in near real time with high accuracy, which is much better than traditional methods.
Monitoring the capacity of working memory: Executive control and effects of listening effort
Amichetti, Nicole M.; Stanley, Raymond S.; White, Alison G.
2013-01-01
In two experiments, we used an interruption-and-recall (IAR) task to explore listeners’ ability to monitor the capacity of working memory as new information arrived in real time. In this task, listeners heard recorded word lists with instructions to interrupt the input at the maximum point that would still allow for perfect recall. Experiment 1 demonstrated that the most commonly selected segment size closely matched participants’ memory span, as measured in a baseline span test. Experiment 2 showed that reducing the sound level of presented word lists to a suprathreshold but effortful listening level disrupted the accuracy of matching selected segment sizes with participants’ memory spans. The results are discussed in terms of whether online capacity monitoring may be subsumed under other, already enumerated working memory executive functions (inhibition, set shifting, and memory updating). PMID:23400826
Review of Plasmonic Nanocomposite Metamaterial Absorber
Hedayati, Mehdi Keshavarz; Faupel, Franz; Elbahri, Mady
2014-01-01
Plasmonic metamaterials are artificial materials, typically composed of noble metals, in which the features of photonics and electronics are linked by coupling photons to the conduction electrons of the metal (known as surface plasmons). These rationally designed structures have attracted considerable interest because they exhibit fascinating properties unattainable with naturally occurring materials. Complete absorption of light is one of the more exotic recent properties of plasmonic metamaterials and has broadened their application area considerably. It is realized by designing a medium whose impedance matches that of free space while being opaque. If such a medium is filled with a lossy material, the resulting structure can absorb light totally over a sharp or broad frequency range. Although several types of metamaterial perfect absorbers have been demonstrated so far, in the current paper we review (and focus on) perfect absorbers based on nanocomposites in which the total thickness is a few tens of nanometers and the absorption band is broad, tunable, and insensitive to the angle of incidence. The nanocomposites consist of metal nanoparticles embedded in a dielectric matrix with a high filling factor close to the percolation threshold. The filling factor can be tailored by vapor-phase co-deposition of the metallic and dielectric components. In addition, novel wet-chemical approaches are discussed which are bio-inspired or involve synthesis within levitating Leidenfrost drops, for instance. Moreover, theoretical considerations, optical properties, and potential applications of perfect absorbers are presented. PMID:28788511
NASA Astrophysics Data System (ADS)
Zheng, Guangdi; Pan, Mingbo; Liu, Wei; Wu, Xuetong
2018-03-01
Target identification on the sea battlefield is a prerequisite for assessing enemy forces in modern naval warfare. In this paper, a collaborative identification method based on convolutional neural networks is proposed to identify typical sea-battlefield targets. Different from the traditional single-input/single-output identification method, the proposed method constructs a multi-input/single-output co-identification architecture based on an optimized convolutional neural network and weighted D-S evidence theory. The simulation results show that
A convolution model for computing the far-field directivity of a parametric loudspeaker array.
Shi, Chuang; Kajikawa, Yoshinobu
2015-02-01
This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity with Westervelt's directivity is suggested, substituting for the past practice of using the product directivity only. The computed directivity of a PLA using the proposed convolution model achieves significantly better agreement with measured directivity at a negligible computational cost.
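The convolution model can be illustrated numerically. The beam shapes below (an array-factor product and a smooth narrow beam standing in for Westervelt's directivity, whose true form depends on absorption and the difference frequency) are assumptions for illustration, not the paper's measured patterns:

```python
import numpy as np

theta = np.linspace(-30, 30, 601)            # angle grid, degrees

# Hypothetical product directivity: array factor of an 8-element phased
# array, squared as a proxy for the product of the two primary beams.
def array_factor(theta_deg, n=8, d_over_lambda=0.5):
    u = np.pi * d_over_lambda * np.sin(np.radians(theta_deg))
    u = np.where(np.abs(u) < 1e-12, 1e-12, u)  # avoid 0/0 on axis
    return np.abs(np.sin(n * u) / (n * np.sin(u)))

D_product = array_factor(theta) ** 2

# Smooth stand-in for Westervelt's directivity (assumed shape).
D_west = 1.0 / (1.0 + (theta / 4.0) ** 2)

# Convolution model: total directivity = product directivity (*) Westervelt
D_total = np.convolve(D_product, D_west, mode="same")
D_total /= D_total.max()                     # normalize to unity on axis
```

The convolution smooths the sharp array-factor lobes by the finite width of the parametric beam, which is the qualitative effect the model captures.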
NASA Astrophysics Data System (ADS)
Liu, Fushun; Liu, Chengcheng; Chen, Jiefeng; Wang, Bin
2017-08-01
The key concept of spectrum response estimation with commercial software, such as the SESAM software tool, typically includes two main steps: finding a suitable loading spectrum and computing the response amplitude operators (RAOs) for a frequency-specified wave component. In this paper, we propose a nontraditional spectrum response estimation method that uses a numerical representation of the retardation functions. Based on estimated added mass and damping matrices of the structure, we decompose and replace the convolution terms with a series of poles and corresponding residues in the Laplace domain. Then, we estimate the power density corresponding to each frequency component using the improved periodogram method. The advantage of this approach is that the frequency-dependent motion equations in the time domain can be transformed into the Laplace domain without requiring Laplace-domain expressions for the added mass and damping. To validate the proposed method, we use a numerical model of a semi-submerged pontoon from SESAM. The numerical results show that the responses of the proposed method match well with those obtained from the traditional method. Furthermore, the estimated spectrum also matches well, which indicates its potential application to deep-water floating structures.
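The periodogram step can be sketched with a Welch-style averaged periodogram, one common "improved periodogram" scheme; the sampling rate, segment length, and test signal below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

fs = 100.0                       # sampling rate, Hz (assumed)
t = np.arange(0, 20, 1 / fs)     # 20 s record
x = np.sin(2 * np.pi * 1.6 * t)  # a single 1.6 Hz "wave component"

# Averaged periodogram: split into segments, window each one, take
# |FFT|^2, and average across segments to reduce estimator variance.
seg = 500                                     # samples per segment
win = np.hanning(seg)
segs = x[: len(x) // seg * seg].reshape(-1, seg)
psd = np.mean(np.abs(np.fft.rfft(segs * win, axis=1)) ** 2, axis=0)
psd /= fs * np.sum(win ** 2)                  # density normalization
freqs = np.fft.rfftfreq(seg, 1 / fs)

peak = freqs[np.argmax(psd)]                  # frequency of the spectral peak
```

With a frequency resolution of fs/seg = 0.2 Hz, the estimated peak lands on the 1.6 Hz component.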
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
A Comparison of Three PML Treatments for CAA (and CFD)
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2008-01-01
In this paper we compare three Perfectly Matched Layer (PML) treatments by means of a series of numerical experiments, using common numerical algorithms, computational grids, and code implementations. These comparisons are with the Linearized Euler Equations, for a uniform base flow. We see that there are two very good PML candidates, both of which can control the introduced error. Furthermore, we also show that corners can be handled with essentially no increase in the introduced error, and that with a good PML, the outer boundary is the most significant source of error
Radiation Boundary Conditions for Maxwell’s Equations: A Review of Accurate Time-Domain Formulations
2007-01-01
conditions have only been constructed for the case ne = 0. Lastly we note that exact reflection formulas have recently been derived by Diaz and Joly [20, 21] … SIAM J. Numer. Anal. 41 (2003), 287-305. 6. E. Bécache and P. Joly, On the analysis of Bérenger's perfectly matched layers for Maxwell's equations … Computational Wave Propagation (M. Ainsworth, P. Davies, D. Duncan, P. Martin, and B. Rynne, eds.), Springer-Verlag, 2003, pp. 43-82. 13. O. Bruno and D. Hoch
Evidence of β-antimonene at the Sb/Bi2Se3 interface.
Flammini, Roberto; Colonna, Stefano; Hogan, Conor; Mahatha, Sanjoy; Papagno, Marco; Barla, Alessandro; Sheverdyaeva, Polina; Moras, Paolo; Aliev, Ziya; Babanly, M B; Chulkov, Evgueni V; Carbone, Carlo; Ronci, Fabio
2017-12-19
We report a study of the interface between antimony and the prototypical topological insulator Bi<sub>2</sub>Se<sub>3</sub>. Scanning tunnelling microscopy measurements show the presence of ordered domains displaying a perfect lattice match with bismuth selenide. Density functional theory calculations of the most stable atomic configurations demonstrate that the ordered domains can be attributed to stacks of β-antimonene. © 2017 IOP Publishing Ltd.
Polyphase Pulse Compression Waveforms
1982-01-05
… errors were due only to the A/D converters and that the matched-filter phases and amplitudes were perfect. The results are shown in Fig. 16 where each … "Electronic System," May 1981, AES-17, pp. 364-372. 6. C. Cook and M. Bernfeld, "Radar Signals: An Introduction to Theory and Applications," New York
Li, Zheng-Wei; Xi, Xiao-Li; Zhang, Jin-Sheng; Liu, Jiang-fan
2015-12-14
The unconditional stable finite-difference time-domain (FDTD) method based on field expansion with weighted Laguerre polynomials (WLPs) is applied to model electromagnetic wave propagation in gyrotropic materials. The conventional Yee cell is modified to have the tightly coupled current density components located at the same spatial position. The perfectly matched layer (PML) is formulated in a stretched-coordinate (SC) system with the complex-frequency-shifted (CFS) factor to achieve good absorption performance. Numerical examples are shown to validate the accuracy and efficiency of the proposed method.
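The stretched-coordinate CFS factor referred to above has the standard Kuzuoglu-Mittra form, written here for reference:

```latex
\[
  s_w(\omega) \;=\; \kappa_w \;+\; \frac{\sigma_w}{\alpha_w + j\omega\varepsilon_0},
  \qquad w \in \{x, y, z\},
\]
```

where κ_w ≥ 1 and σ_w ≥ 0 grade the layer, and the frequency-shift parameter α_w > 0 moves the pole off the imaginary axis, which is what improves the absorption of grazing-incidence and evanescent waves; setting α_w = 0 and κ_w = 1 recovers the ordinary stretched-coordinate PML.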
Kwon, Yea-Hoon; Shin, Sae-Byuk; Kim, Shin-Dug
2018-04-30
The purpose of this study is to improve human emotion classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method to classify emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through a CNN; therefore, we propose a suitable CNN model for feature extraction by tuning the hyperparameters of the convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform, considering time and frequency simultaneously. We use the Database for Emotion Analysis Using Physiological Signals open dataset to verify the proposed process, achieving 73.4% accuracy and showing a significant performance improvement over the current best-practice models.
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M.sup.3N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
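The identity behind the Fourier-domain solve is that circular convolution becomes an elementwise product after an FFT, which is what turns the critical linear-system step from an O(N²) operation per filter into O(N log N). A minimal numpy sketch with arbitrary sizes (not the patent's ADMM solver):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 4                          # signal length, dictionary size
d = rng.standard_normal((M, N))       # dictionary of circular filters
x = rng.standard_normal((M, N))       # coefficient maps

# Direct circular convolution, summed over dictionary elements: O(M N^2)
direct = np.zeros(N)
for m in range(M):
    for n in range(N):
        direct[n] += np.sum(d[m] * x[m][(n - np.arange(N)) % N])

# Frequency domain: each convolution is an elementwise product: O(M N log N)
fast = np.fft.ifft(
    np.sum(np.fft.fft(d, axis=1) * np.fft.fft(x, axis=1), axis=0)
).real
```

Both computations produce the same reconstruction Σ_m d_m ⊛ x_m; the frequency-domain form is the one the ADMM iterations exploit.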
Multithreaded implicitly dealiased convolutions
NASA Astrophysics Data System (ADS)
Roberts, Malcolm; Bowman, John C.
2018-03-01
Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.
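For contrast, the conventional explicit zero-padding that implicit dealiasing avoids looks like this (small illustrative vectors):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.5, -1.0, 2.0, 1.5])
N = len(a)

# Unpadded transforms give a *circular* convolution: the tail of the
# linear convolution wraps around and aliases onto the beginning.
circular = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

# Conventional dealiasing: explicitly zero-pad to length >= 2N-1 so the
# wrapped tail lands in the padding instead of corrupting the result.
L = 2 * N
linear = np.fft.ifft(np.fft.fft(a, L) * np.fft.fft(b, L)).real[: 2 * N - 1]
```

Implicit dealiasing computes the same linear convolution without allocating or transforming the explicit zeros, which is where its memory and speed advantages come from.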
Detecting atrial fibrillation by deep convolutional neural networks.
Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui
2018-02-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we proposed a novel method with high reliability and accuracy for AF detection via deep learning. The short-term Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to STFT output and SWT output were developed. Our new method did not require detection of P or R peaks, nor feature designs for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performances on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% were achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
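The STFT step that turns a 1-D ECG segment into a 2-D time-frequency matrix for the CNN can be sketched as follows; the sampling rate, window and hop sizes, and the synthetic sinusoid standing in for an ECG are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

fs = 300                          # Hz (assumed sampling rate)
t = np.arange(0, 5, 1 / fs)       # a 5 s segment, as in the paper
ecg = np.sin(2 * np.pi * 8 * t)   # synthetic stand-in for an ECG segment

# Short-term Fourier transform: window the signal, FFT each frame,
# and stack magnitudes into a 2-D time-frequency matrix.
win, hop = 128, 64
frames = np.stack([ecg[i:i + win] * np.hanning(win)
                   for i in range(0, len(ecg) - win + 1, hop)])
stft = np.abs(np.fft.rfft(frames, axis=1)).T   # (freq bins, time frames)
```

The resulting matrix (here 65 frequency bins by 22 frames) is the kind of 2-D input a standard image-style convolutional network can consume directly.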
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, M.R.; Phillips, S.A.; Sofianos, D.J.
1994-12-31
The adaptive matched filter was implemented as a spatial detector for amplitude-only or complex images, and applied to an image formed by standard narrow band means from a wide angle, wideband radar. Direct performance comparisons were made between different implementations and various matched and mismatched cases by using a novel approach to generate ROC curves parametrically. For perfectly matched cases, performance using imaged targets was found to be significantly lower than potential performance of artificial targets whose features differed from the background. Incremental gain due to whitening the background was also found to be small, indicating little background spatial correlation. It is conjectured that the relatively featureless behavior in both targets and background is due to the image formation process, since this technique averages together all wide angle, wideband information. For mismatched cases where the signature was unknown, the amplitude detector losses were approximately equal to whatever gain over noncoherent integration that matching provided. However, the complex detector was generally very sensitive to unknown information, especially phase, and produced much larger losses. Whitening under these mismatched conditions produced further losses. Detector choice thus depends primarily on how reproducible target signatures are, especially if phase is used, and the subsequent number of stored signatures necessary to account for various imaging aspect angles.
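The matched-filter principle at the core of this detector can be sketched in 1-D (the paper's detector operates on 2-D images and adds adaptive background whitening); the template, scene size, and noise level below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Known target signature embedded in a noisy background scene.
template = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
scene = 0.05 * rng.standard_normal(200)
pos = 120
scene[pos:pos + len(template)] += template

# Matched filter: correlate the scene with the known template. With
# white background noise this maximizes output SNR at the target
# location; the adaptive version would first whiten the background
# using an estimated covariance before correlating.
mf_out = np.correlate(scene, template, mode="valid")
detected = int(np.argmax(mf_out))   # index of the detection peak
```

With a white background the correlation peak sits at the embedding position; a spatially correlated background is exactly the case where the whitening step of the adaptive matched filter would matter.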
Surface operators, chiral rings and localization in N =2 gauge theories
NASA Astrophysics Data System (ADS)
Ashok, S. K.; Billò, M.; Dell'Aquila, E.; Frau, M.; Gupta, V.; John, R. R.; Lerda, A.
2017-11-01
We study half-BPS surface operators in supersymmetric gauge theories in four and five dimensions following two different approaches. In the first approach we analyze the chiral ring equations for certain quiver theories in two and three dimensions, coupled respectively to four- and five-dimensional gauge theories. The chiral ring equations, which arise from extremizing a twisted chiral superpotential, are solved as power series in the infrared scales of the quiver theories. In the second approach we use equivariant localization and obtain the twisted chiral superpotential as a function of the Coulomb moduli of the four- and five-dimensional gauge theories, and find a perfect match with the results obtained from the chiral ring equations. In the five-dimensional case this match is achieved after solving a number of subtleties in the localization formulas which amounts to choosing a particular residue prescription in the integrals that yield the Nekrasov-like partition functions for ramified instantons. We also comment on the necessity of including Chern-Simons terms in order to match the superpotentials obtained from dual quiver descriptions of a given surface operator.
Off-resonance artifacts correction with convolution in k-space (ORACLE).
Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne
2012-06-01
Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifact correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. The off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance-correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated by data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows self-calibration of the field map from the imaging data when an alternating view-angle ordering scheme is used. An additional advantage of off-resonance artifact correction based on data convolution in k-space is the reusability of convolution kernels for images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.
Urtnasan, Erdenebayar; Park, Jong-Uk; Joo, Eun-Yeon; Lee, Kyoung-Joung
2018-04-23
In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (data from 63 patients with 34,281 events) and testing (data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F1-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
Convolutional virtual electric field for image segmentation using active contours.
Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden
2014-01-01
Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computational load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only desirable properties of those models, such as an enlarged capture range, U-shape concavity convergence, subjective contour convergence and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression with simultaneous weak-edge preservation. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images.
NASA Technical Reports Server (NTRS)
Doland, G. D.
1970-01-01
Convolutional coding, used to upgrade digital data transmission under adverse signal conditions, has been improved by a method which ensures data transitions, permitting bit synchronizer operation at lower signal levels. Method also increases decoding ability by removing ambiguous condition.
Design of convolutional tornado code
NASA Astrophysics Data System (ADS)
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code that improves burst-erasure protection by applying the convolution property to the tTN code, and reduces computational complexity by removing the multi-level structure. The simulation results show that the cTN code provides better packet-loss protection with lower computational complexity than the tTN code.
1992-12-01
The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. …
v = cncd(2,1,6,G64,u,zeros(1,12)); % Convolutional encoding
mm = bm(2,v); % Binary to M-ary conversion
clear v u;
mm = inter(50,200,mm); % Interleaving (50…
save result err
B. CNCD.X (CONVOLUTIONAL ENCODER FUNCTION)
function [v,vr] = cncd(n,k,m,Gr,u,r)
% CONVOLUTIONAL ENCODER
% Paul H. Moose
% Naval
Time history solution program, L225 (TEV126). Volume 1: Engineering and usage
NASA Technical Reports Server (NTRS)
Kroll, R. I.; Tornallyay, A.; Clemmons, R. E.
1979-01-01
Volume 1 of a two-volume document is presented. The usage of the convolution program L225 (TEV 126) is described. The program calculates the time response of a linear system by convolving the impulse response function with the time-dependent excitation function. The convolution is performed as a multiplication in the frequency domain. Fast Fourier transform techniques are used to transform the product back into the time domain to obtain response time histories. A brief description of the analysis used is presented.
Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is found analytically by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m0 binary memory cells and k0 (k0 > m0) inputs, a state diagram of 2^(k0) states was used for the transfer function bound. A reduced state diagram of (2^(m0) + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.
Simulation of ICD-9 to ICD-10-CM Transition for Family Medicine: Simple or Convoluted?
Grief, Samuel N; Patel, Jesal; Kochendorfer, Karl M; Green, Lee A; Lussier, Yves A; Li, Jianrong; Burton, Michael; Boyd, Andrew D
2016-01-01
The objective of this study was to examine the impact of the transition from the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), on family medicine and to identify areas where additional training might be required. Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple transition is defined as 1 ICD-9-CM code mapping to 1 ICD-10-CM code, or 1 ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are nonreciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Of the 1635 diagnosis codes used by family medicine physicians, 70% of the codes were categorized as simple, 27% of codes were convoluted, and 3% had no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and for these, additional resources need to be invested to ensure a successful transition to ICD-10-CM. © Copyright 2016 by the American Board of Family Medicine.
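The simple-versus-convoluted distinction can be sketched as a connected-component test on the bipartite mapping graph: a component containing a single ICD-9 code is simple (one-to-one or one-to-many), while a component in which several ICD-9 codes share ICD-10 targets is convoluted. The code pairs below are hypothetical illustrations, not entries from the GEM files or the study's data set:

```python
from collections import defaultdict

# Hypothetical forward mappings (ICD-9 -> ICD-10) for illustration only.
mapping = {
    "250.00": ["E11.9"],                  # one-to-one: simple
    "493.90": ["J45.901", "J45.909"],     # one-to-many: still simple
    "715.09": ["M15.0", "M15.9"],         # shares a target with 715.98,
    "715.98": ["M15.9", "M19.90"],        # so their definitions intertwine
}

# Reverse index: which ICD-9 codes reach each ICD-10 code.
icd10_to_icd9 = defaultdict(set)
for i9, targets in mapping.items():
    for i10 in targets:
        icd10_to_icd9[i10].add(i9)

def classify(i9):
    """Flood-fill the bipartite component containing an ICD-9 code."""
    component, frontier = {i9}, {i9}
    while frontier:
        nxt = set()
        for code in frontier:
            for i10 in mapping[code]:
                nxt |= icd10_to_icd9[i10]   # ICD-9 codes sharing a target
        frontier = nxt - component
        component |= nxt
    return "simple" if len(component) == 1 else "convoluted"
```

Under this rule the first two hypothetical codes classify as simple, and the two codes sharing an ICD-10 target classify as convoluted.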
Simulation of ICD-9 to ICD-10-CM transition for family medicine: simple or convoluted?
Grief, Samuel N.; Patel, Jesal; Lussier, Yves A.; Li, Jianrong; Burton, Michael; Boyd, Andrew D.
2017-01-01
Objectives The objective of this study was to examine the impact of the transition from the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) on family medicine and to identify areas where additional training might be required. Methods Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple translation is defined as one ICD-9-CM code mapping to one ICD-10-CM code, or one ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are non-reciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Results Of the 1635 diagnosis codes used by the family medicine physicians, 70% of the codes were categorized as simple, 27% of the diagnosis codes were convoluted and 3% were found to have no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. Conclusions The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and additional resources need to be invested to ensure a successful transition to ICD-10-CM. PMID:26769875
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
Brief summaries of research in the following areas are presented: (1) construction of optimum geometrically uniform trellis codes; (2) a statistical approach to constructing convolutional code generators; and (3) calculating the exact performance of a convolutional code.
NASA Astrophysics Data System (ADS)
Hamdi, Mazda; Kenari, Masoumeh Nasiri
2013-06-01
We consider a time-hopping based multiple access scheme introduced in [1] for communication over dispersive infrared links, and evaluate its performance for correlator and matched filter receivers. In the investigated time-hopping code division multiple access (TH-CDMA) method, the transmitter employs a low-rate convolutional encoder. In this method, the bit interval is divided into Nc chips, and the output of the encoder, along with a PN sequence assigned to the user, determines the position of the chip in which the optical pulse is transmitted. We evaluate the multiple access performance of the system for the correlation receiver considering background noise, which is modeled as white Gaussian noise due to its large intensity. For the correlation receiver, the results show that for a fixed processing gain, at high transmit power, where multiple access interference has the dominant effect, the performance improves with the coding gain. But at low transmit power, where an increase in coding gain leads to a decrease in chip time, and consequently to more corruption due to channel dispersion, there exists an optimum value for the coding gain. For the matched filter, however, the performance always improves with the coding gain. The results show that the matched filter receiver outperforms the correlation receiver in the considered cases. Our results also show that, for the same bandwidth and bit rate, the proposed system outperforms other multiple access techniques, such as conventional CDMA and the time-hopping scheme.
A Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data
NASA Astrophysics Data System (ADS)
Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.
2018-04-01
Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing. Joint extraction of this information is one of the most important approaches to hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed, which extracts the spectral-spatial information of hyperspectral images effectively. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Though CNNs have shown robustness to distortion, they cannot extract features of different scales through a traditional pooling layer that has only one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
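The three-dimensional convolution at the heart of such networks slides a small spectral-spatial kernel through the data cube. A naive valid-mode sketch, assuming a (bands, height, width) layout and using the CNN convention (no kernel flip); real networks add multiple channels, biases, and learned weights:

```python
import numpy as np

def conv3d_valid(cube, kernel):
    """cube: (bands, H, W) hyperspectral patch; kernel: (kb, kh, kw) filter.
    Returns the valid-mode 3-D correlation (CNN-style 'convolution')."""
    kb, kh, kw = kernel.shape
    B, H, W = cube.shape
    out = np.zeros((B - kb + 1, H - kh + 1, W - kw + 1))
    for b in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[b, i, j] = np.sum(cube[b:b+kb, i:i+kh, j:j+kw] * kernel)
    return out
```

Because the kernel spans several bands as well as a spatial window, each output value mixes spectral and spatial neighborhoods, which is the "joint extraction" the abstract refers to.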
Detection of prostate cancer on multiparametric MRI
NASA Astrophysics Data System (ADS)
Seah, Jarrel C. Y.; Tang, Jennifer S. N.; Kitchen, Andy
2017-03-01
In this manuscript, we describe our approach and methods to the ProstateX challenge, which achieved an overall AUC of 0.84 and the runner-up position. We train a deep convolutional neural network to classify lesions marked on multiparametric MRI of the prostate as clinically significant or not. We implement a novel addition to the standard convolutional architecture described as auto-windowing which is clinically inspired and designed to overcome some of the difficulties faced in MRI interpretation, where high dynamic ranges and low contrast edges may cause difficulty for traditional convolutional neural networks trained on high contrast natural imagery. We demonstrate that this system can be trained end to end and outperforms a similar architecture without such additions. Although a relatively small training set was provided, we use extensive data augmentation to prevent overfitting and transfer learning to improve convergence speed, showing that deep convolutional neural networks can be feasibly trained on small datasets.
No-reference image quality assessment based on statistics of convolution feature maps
NASA Astrophysics Data System (ADS)
Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo
2018-04-01
We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that Natural Scene Statistics (NSS) features computed on convolutional feature maps are significantly sensitive to the distortion degree of an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer which operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer are effective in describing the distortion type and degree an image has suffered. Finally, a Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict the subjective quality score of a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.
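NSS features in this tradition are typically statistics of mean-subtracted contrast-normalized (MSCN) coefficients; here they are computed on feature maps rather than the image itself. A minimal box-filter sketch of the MSCN transform (the standard formulation uses a Gaussian weighting window, and the paper's exact layer may differ):

```python
import numpy as np

def mscn(img, eps=1.0):
    """Mean-subtracted contrast-normalized coefficients, 3x3 box-filter variant."""
    H, W = img.shape
    pad = np.pad(np.asarray(img, float), 1, mode="edge")
    # Stack the nine shifted views to get per-pixel 3x3 local mean and std.
    stack = np.stack([pad[i:i+H, j:j+W] for i in range(3) for j in range(3)])
    mu = stack.mean(axis=0)
    sigma = stack.std(axis=0)
    return (img - mu) / (sigma + eps)
```

Summary statistics of the MSCN output (variance, kurtosis, fitted generalized-Gaussian parameters) are the kind of quality-aware features an SVR would then regress onto subjective scores.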
Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography
NASA Astrophysics Data System (ADS)
Menke, W. H.
2017-12-01
We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely-spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for sensitivity with respect to density, Lamé parameter and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field, pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolving kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.
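Schematically (our notation, not necessarily the paper's): with observed components u_i = s * g_i for an unknown source wavelet s and Green's functions g_i, and predicted Green's functions h_i, the two-component cross-convolution misfit is

```latex
E \;=\; \int \big[\, u_1(t) \ast h_2(t) \;-\; u_2(t) \ast h_1(t) \,\big]^2 \, dt ,
```

where \(\ast\) denotes convolution. Substituting u_i = s * g_i gives the integrand s * (g_1 * h_2 - g_2 * h_1), which vanishes whenever h_i = g_i regardless of s; this is the sense in which the measure "eliminates the source".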
Identification of the remains of King Richard III.
King, Turi E; Fortes, Gloria Gonzalez; Balaresque, Patricia; Thomas, Mark G; Balding, David; Maisano Delser, Pierpaolo; Neumann, Rita; Parson, Walther; Knapp, Michael; Walsh, Susan; Tonasso, Laure; Holt, John; Kayser, Manfred; Appleby, Jo; Forster, Peter; Ekserdjian, David; Hofreiter, Michael; Schürer, Kevin
2014-12-02
In 2012, a skeleton was excavated at the presumed site of the Grey Friars friary in Leicester, the last-known resting place of King Richard III. Archaeological, osteological and radiocarbon dating data were consistent with these being his remains. Here we report DNA analyses of both the skeletal remains and living relatives of Richard III. We find a perfect mitochondrial DNA match between the sequence obtained from the remains and one living relative, and a single-base substitution when compared with a second relative. Y-chromosome haplotypes from male-line relatives and the remains do not match, which could be attributed to a false-paternity event occurring in any of the intervening generations. DNA-predicted hair and eye colour are consistent with Richard's appearance in an early portrait. We calculate likelihood ratios for the non-genetic and genetic data separately, and combined, and conclude that the evidence for the remains being those of Richard III is overwhelming.
Choi, Kyongsik; Chon, James W; Gu, Min; Lee, Byoungho
2007-08-20
In this paper, a simple confocal laser scanning microscopic (CLSM) image mapping technique based on the finite-difference time domain (FDTD) calculation has been proposed and evaluated for characterization of a subwavelength-scale three-dimensional (3D) void structure fabricated inside a polymer matrix. The FDTD simulation method adopts a focused Gaussian beam incident wave, Berenger's perfectly matched layer absorbing boundary condition, and the angular spectrum analysis method. Through the well-matched simulation and experimental results for the xz-scanned 3D void structure, we first characterize the exact position and topological shape factor of the subwavelength-scale void structure, which was fabricated by a tightly focused ultrashort pulse laser. The proposed FDTD-based CLSM image mapping technique can be widely applied, from 3D near-field microscopic imaging, optical trapping, and evanescent-wave phenomena to state-of-the-art bio- and nanophotonics.
Yuan, Ji; Cheung, Paul K M; Zhang, Huifang M; Chau, David; Yang, Decheng
2005-02-01
Coxsackievirus B3 (CVB3) is the most common causal agent of viral myocarditis, but existing drug therapies are of limited value. Application of small interfering RNA (siRNA) in knockdown of gene expression is an emerging technology in antiviral gene therapy. To investigate whether RNA interference (RNAi) can protect against CVB3 infection, we evaluated the effects of RNAi on viral replication in HeLa cells and murine cardiomyocytes by using five CVB3-specific siRNAs targeting distinct regions of the viral genome. The most effective one is siRNA-4, targeting the viral protease 2A, achieving a 92% inhibition of CVB3 replication. The specific RNAi effects could last at least 48 h, and cell viability assay revealed that 90% of siRNA-4-pretreated cells were still alive and lacked detectable viral protein expression 48 h postinfection. Moreover, administration of siRNAs after viral infection could also effectively inhibit viral replication, indicating its therapeutic potential. Further evaluation by combination found that no enhanced inhibitory effects were observed when siRNA-4 was cotransfected with each of the other four candidates. In mutational analysis of the mechanisms of siRNA action, we found that siRNA functions by targeting the positive strand of virus and requires a perfect sequence match in the central region of the target, but mismatches were more tolerated near the 3' end than the 5' end of the antisense strand. These findings reveal an effective target for CVB3 silencing and provide a new possibility for antiviral intervention.
Using global unique identifiers to link autism collections.
Johnson, Stephen B; Whitney, Glen; McAuliffe, Matthew; Wang, Hailong; McCreedy, Evan; Rozenblit, Leon; Evans, Clark C
2010-01-01
To propose a centralized method for generating global unique identifiers to link collections of research data and specimens. The work is a collaboration between the Simons Foundation Autism Research Initiative and the National Database for Autism Research. The system is implemented as a web service: an investigator inputs identifying information about a participant into a client application and sends encrypted information to a server application, which returns a generated global unique identifier. The authors evaluated the system using a volume test of one million simulated individuals and a field test on 2000 families (over 8000 individual participants) in an autism study. Inverse probability of hash codes; rate of false identity of two individuals; rate of false split of single individual; percentage of subjects for which identifying information could be collected; percentage of hash codes generated successfully. Large-volume simulation generated no false splits or false identity. Field testing in the Simons Foundation Autism Research Initiative Simplex Collection produced identifiers for 96% of children in the study and 77% of parents. On average, four out of five hash codes per subject were generated perfectly (only one perfect hash is required for subsequent matching). The system must achieve balance among the competing goals of distinguishing individuals, collecting accurate information for matching, and protecting confidentiality. Considerable effort is required to obtain approval from institutional review boards, obtain consent from participants, and to achieve compliance from sites during a multicenter study. Generic unique identifiers have the potential to link collections of research data, augment the amount and types of data available for individuals, support detection of overlap between collections, and facilitate replication of research findings.
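The essential idea, that a one-way hash of normalized identifying fields yields the same identifier for the same person at any site, can be sketched as follows. This is illustrative only: the field names, normalization, salt, and truncation below are assumptions, and the actual service adds encryption and a central server rather than a bare hash.

```python
import hashlib

def guid_hash(first, last, dob, salt="site-secret"):
    """One-way hashed identifier from identifying fields (hypothetical scheme).
    Normalizing case and whitespace makes minor data-entry variants collide
    on purpose, so the same participant maps to the same identifier."""
    key = "|".join(s.strip().lower() for s in (first, last, dob)) + "|" + salt
    return hashlib.sha256(key.encode()).hexdigest()[:16]
```

Generating several such hashes from different field combinations, as the abstract describes (four out of five per subject on average), lets matching succeed even when one field is missing or mistyped.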
Akkaynak, Derya; Siemann, Liese A.; Barbosa, Alexandra
2017-01-01
Flounder change colour and pattern for camouflage. We used a spectrometer to measure reflectance spectra and a digital camera to capture body patterns of two flounder species camouflaged on four natural backgrounds of different spatial scale (sand, small gravel, large gravel and rocks). We quantified the degree of spectral match between flounder and background relative to the situation of perfect camouflage in which flounder and background were assumed to have identical spectral distribution. Computations were carried out for three biologically relevant observers: monochromatic squid, dichromatic crab and trichromatic guitarfish. Our computations present a new approach to analysing datasets with multiple spectra that have large variance. Furthermore, to investigate the spatial match between flounder and background, images of flounder patterns were analysed using a custom program originally developed to study cuttlefish camouflage. Our results show that all flounder and background spectra fall within the same colour gamut and that, in terms of different observer visual systems, flounder matched most substrates in luminance and colour contrast. Flounder matched the spatial scales of all substrates except for rocks. We discuss findings in terms of flounder biology; furthermore, we discuss our methodology in light of hyperspectral technologies that combine high-resolution spectral and spatial imaging. PMID:28405370
Perceptual expertise in forensic facial image comparison.
White, David; Phillips, P Jonathon; Hahn, Carina A; Hill, Matthew; O'Toole, Alice J
2015-09-07
Forensic facial identification examiners are required to match the identity of faces in images that vary substantially, owing to changes in viewing conditions and in a person's appearance. These identifications affect the course and outcome of criminal investigations and convictions. Despite calls for research on sources of human error in forensic examination, existing scientific knowledge of face matching accuracy is based, almost exclusively, on people without formal training. Here, we administered three challenging face matching tests to a group of forensic examiners with many years' experience of comparing face images for law enforcement and government agencies. Examiners outperformed untrained participants and computer algorithms, thereby providing the first evidence that these examiners are experts at this task. Notably, computationally fusing responses of multiple experts produced near-perfect performance. Results also revealed qualitative differences between expert and non-expert performance. First, examiners' superiority was greatest at longer exposure durations, suggestive of more entailed comparison in forensic examiners. Second, experts were less impaired by image inversion than non-expert students, contrasting with face memory studies that show larger face inversion effects in high performers. We conclude that expertise in matching identity across unfamiliar face images is supported by processes that differ qualitatively from those supporting memory for individual faces. © 2015 The Author(s).
NASA Astrophysics Data System (ADS)
Liu, Wanjun; Liang, Xuejian; Qu, Haicheng
2017-11-01
Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community. Traditional and deep learning-based classification methods have been proposed constantly in recent years. In order to improve classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) is proposed in this paper. DVCNN is a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN is a set of 3D patches selected from the HSI which contain spectral-spatial joint information. In the subsequent feature extraction process, each patch is transformed into several different 1D vectors by 3D convolution kernels, which are able to extract features from spectral-spatial data. The rest of DVCNN is much the same as a general CNN and processes the 2D matrix constituted by all the 1D data. Thus, DVCNN can not only extract more accurate and richer features than CNN, but also fuse spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands is enhanced in the process of spectral-spatial fusion by 3D convolution, and the calculation is simplified by dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results showed that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and 19.63% on Pavia University scene compared with spectral-only CNN. The maximum accuracy improvement achieved by DVCNN was 13.72% compared with other state-of-the-art HSI classification methods, and the robustness of DVCNN to noise on water-absorption bands was demonstrated.
Convolutional neural networks for prostate cancer recurrence prediction
NASA Astrophysics Data System (ADS)
Kumar, Neeraj; Verma, Ruchika; Arora, Ashish; Kumar, Abhay; Gupta, Sanchit; Sethi, Amit; Gann, Peter H.
2017-03-01
Accurate prediction of the treatment outcome is important for cancer treatment planning. We present an approach to predict prostate cancer (PCa) recurrence after radical prostatectomy using tissue images. We used a cohort whose case vs. control (recurrent vs. non-recurrent) status had been determined using post-treatment follow-up. Further, to aid the development of novel biomarkers of PCa recurrence, cases and controls were paired based on matching of other predictive clinical variables such as Gleason grade, stage, age, and race. For this cohort, a tissue resection microarray with up to four cores per patient was available. The proposed approach is based on deep learning, and its novelty lies in the use of two separate convolutional neural networks (CNNs) - one to detect individual nuclei even in crowded areas, and the other to classify them. To detect nuclear centers in an image, the first CNN predicts the distance transform of the underlying (but unknown) multi-nuclear map from the input H&E image. The second CNN classifies the patches centered at nuclear centers into those belonging to cases or controls. Voting across patches extracted from the image(s) of a patient yields the probability of recurrence for the patient. The proposed approach gave an AUC of 0.81 for a sample of 30 recurrent cases and 30 non-recurrent controls, after being trained on an independent set of 80 case-control pairs. If validated further, such an approach might help in choosing between a combination of treatment options such as active surveillance, radical prostatectomy, radiation, and hormone therapy. It can also generalize to the prediction of treatment outcomes in other cancers.
Zhu, Yanan; Ouyang, Qi; Mao, Youdong
2017-07-21
Single-particle cryo-electron microscopy (cryo-EM) has become a mainstream tool for the structural determination of biological macromolecular complexes. However, high-resolution cryo-EM reconstruction often requires hundreds of thousands of single-particle images. Particle extraction from experimental micrographs thus can be laborious and presents a major practical bottleneck in cryo-EM structural determination. Existing computational methods for particle picking often use low-resolution templates for particle matching, making them susceptible to reference-dependent bias. It is critical to develop a highly efficient template-free method for the automatic recognition of particle images from cryo-EM micrographs. We developed a deep learning-based algorithmic framework, DeepEM, for single-particle recognition from noisy cryo-EM micrographs, enabling automated particle picking, selection and verification in an integrated fashion. The kernel of DeepEM is built upon a convolutional neural network (CNN) composed of eight layers, which can be recursively trained to be highly "knowledgeable". Our approach exhibits an improved performance and accuracy when tested on the standard KLH dataset. Application of DeepEM to several challenging experimental cryo-EM datasets demonstrated its ability to avoid the selection of unwanted particles and non-particles even when true particles contain fewer features. The DeepEM methodology, derived from a deep CNN, allows automated particle extraction from raw cryo-EM micrographs in the absence of a template. It demonstrates an improved performance, objectivity and accuracy. Application of this novel method is expected to free the labor involved in single-particle verification, significantly improving the efficiency of cryo-EM data processing.
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
DRREP: deep ridge regressed epitope predictor.
Sher, Gene; Zhi, Degui; Zhang, Shaojie
2017-10-03
The ability to predict epitopes plays an enormous role in vaccine development in terms of our ability to zero in on where to do a more thorough in-vivo analysis of the protein in question. Though for the past decade there have been numerous advancements and improvements in epitope prediction, on average the best benchmark prediction accuracies are still only around 60%. New machine learning algorithms have arisen within the domains of deep learning, text mining, and convolutional networks. This paper presents a novel analytically trained, string-kernel-based deep neural network tailored for continuous epitope prediction, called the Deep Ridge Regressed Epitope Predictor (DRREP). DRREP was tested on long protein sequences from the following datasets: SARS, Pellequer, HIV, AntiJen, and SEQ194. DRREP was compared to numerous state-of-the-art epitope predictors, including the most recently published predictors, LBtope and DMNLBE. Using area under the ROC curve (AUC), DRREP achieved a performance improvement over the best performing predictors on SARS (13.7%), HIV (8.9%), Pellequer (1.5%), and SEQ194 (3.1%), with its performance being matched only on the AntiJen dataset, by the LBtope predictor, where both DRREP and LBtope achieved an AUC of 0.702. DRREP is an analytically trained deep neural network, thus capable of learning in a single step through regression. By combining the features of deep learning, string kernels, and convolutional networks, the system is able to perform residue-by-residue prediction of continuous epitopes with higher accuracy than the current state-of-the-art predictors.
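"Analytically trained" here means the output weights are solved in closed form rather than by iterative gradient descent. A minimal sketch of that single ridge-regression step (the features X would come from DRREP's string-kernel/convolutional layers, which are omitted):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

With a near-zero regularizer and consistent data, the solver recovers the generating weights exactly, which is what makes one-step "learning through regression" possible.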
Photon spectral characteristics of dissimilar 6 MV linear accelerators.
Hinson, William H; Kearns, William T; deGuzman, Allan F; Bourland, J Daniel
2008-05-01
This work measures and compares the energy spectra of four dosimetrically matched 6 MV beams, generated from four physically different linear accelerators. The goal of this work is twofold. First, this study determines whether the spectra of dosimetrically matched beams are measurably different. This study also demonstrates that the spectra of clinical photon beams can be measured as a part of the beam data collection process for input to a three-dimensional (3D) treatment planning system. The spectra of 6 MV beams that are dosimetrically matched for clinical use were studied to determine if the beam spectra are similarly matched. Each of the four accelerators examined had a standing waveguide, but with different physical designs. The four accelerators were two Varian 2100C/Ds (one 6 MV/18 MV waveguide and one 6 MV/10 MV waveguide), one Varian 600 C with a vertically mounted waveguide and no bending magnet, and one Siemens MD 6740 with a 6 MV/10 MV waveguide. All four accelerators had percent depth dose curves for the 6 MV beam that were matched within 1.3%. Beam spectra were determined from narrow beam transmission measurements through successive thicknesses of pure aluminum along the central axis of the accelerator, made with a graphite Farmer ion chamber with a Lucite buildup cap. An iterative nonlinear fit using a Marquardt algorithm was used to find each spectrum. Reconstructed spectra show that all four beams have similar energy distributions with only subtle differences, despite the differences in accelerator design. The measured spectra of different 6 MV beams are similar regardless of accelerator design. The measured spectra show excellent agreement with those found by the auto-modeling algorithm in a commercial 3D treatment planning system that uses a convolution dose calculation algorithm. Thus, beam spectra can be acquired in a clinical setting at the time of commissioning as a part of the routine beam data collection.
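The unfolding rests on the narrow-beam model T(t) = Σᵢ wᵢ exp(−μᵢ t): once candidate attenuation coefficients μᵢ are fixed per energy bin, the transmission data are linear in the bin weights wᵢ. A toy sketch with made-up values (the paper itself uses an iterative Marquardt fit rather than a plain least-squares solve):

```python
import numpy as np

# Hypothetical attenuation coefficients (per cm of aluminum) for 3 energy bins.
mu = np.array([0.5, 0.2, 0.1])
t = np.linspace(0.0, 10.0, 20)          # absorber thicknesses (cm)
w_true = np.array([0.2, 0.5, 0.3])      # assumed spectral weights, sum to 1

A = np.exp(-np.outer(t, mu))            # design matrix A[j, i] = exp(-mu_i * t_j)
T = A @ w_true                          # simulated noiseless transmission data
w_est, *_ = np.linalg.lstsq(A, T, rcond=None)   # recover the spectrum weights
```

On real data the system is ill-conditioned and noisy, which is why regularized or iterative nonlinear fitting is used in practice; the sketch only shows why transmission-versus-thickness curves encode the spectrum.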
A unitary convolution approximation for the impact-parameter dependent electronic energy loss
NASA Astrophysics Data System (ADS)
Schiwietz, G.; Grande, P. L.
1999-06-01
In this work, we propose a simple method to calculate the impact-parameter dependence of the electronic energy loss of bare ions for all impact parameters. This perturbative convolution approximation (PCA) is based on first-order perturbation theory, and thus, it is only valid for fast particles with low projectile charges. Using Bloch's stopping-power result and a simple scaling, we get rid of the restriction to low charge states and derive the unitary convolution approximation (UCA). Results of the UCA are then compared with full quantum-mechanical coupled-channel calculations for the impact-parameter dependent electronic energy loss.
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.
1976-01-01
The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Space Flight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
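For reference, a convolutional encoder of the kind discussed can be sketched in a few lines. This is the textbook rate-1/2, constraint-length-3 code with octal generators (7, 5), a standard example rather than a code from the report:

```python
def conv_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 convolutional encoder, constraint length 3, generators (7,5)_8.
    Emits two output bits (one per generator) for every input bit."""
    reg = [0, 0, 0]                 # shift register, newest bit first
    out = []
    for b in bits:
        reg = [b] + reg[:2]         # shift the new bit in
        for g in gens:
            out.append(sum(gi * ri for gi, ri in zip(g, reg)) % 2)
    return out
```

Each input bit influences three consecutive output pairs, which is the memory a Viterbi or sequential decoder exploits.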
NASA Technical Reports Server (NTRS)
Truong, T. K.; Lipes, R.; Reed, I. S.; Wu, C.
1980-01-01
A fast algorithm is developed to compute two-dimensional convolutions of an array of d_1 × d_2 complex number points, where d_2 = 2^m and d_1 = 2^(m-r+1) for some 1 ≤ r ≤ m. This algorithm requires fewer multiplications and about the same number of additions as the conventional fast Fourier transform method for computing the two-dimensional convolution. It also has the advantage that the operation of transposing the matrix of data can be avoided.
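For comparison, the conventional FFT route evaluates a 2-D circular convolution as a pointwise product of transforms. A sketch with a brute-force check (assuming both arrays have the same shape):

```python
import numpy as np

def conv2d_fft(a, b):
    """2-D circular convolution via the FFT (pointwise product in Fourier space)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def conv2d_direct(a, b):
    """O(N^4) definition of 2-D circular convolution, used only as a check."""
    M, N = a.shape
    out = np.zeros((M, N))
    for m in range(M):
        for n in range(N):
            for i in range(M):
                for j in range(N):
                    out[m, n] += a[i, j] * b[(m - i) % M, (n - j) % N]
    return out
```

The abstract's algorithm improves on this baseline by reducing multiplications and avoiding the matrix transpose that row-column FFTs normally require.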
Cascaded K-means convolutional feature learner and its application to face recognition
NASA Astrophysics Data System (ADS)
Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu
2017-09-01
Currently, considerable effort has been devoted to devising image representations. However, handcrafted methods need strong domain knowledge and show low generalization ability, while conventional feature learning methods require enormous training data and rich parameter-tuning experience. A lightweight feature learner is presented to solve these problems, with application to face recognition, which shares a similar topology with a convolutional neural network. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture nonlinear features. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on extended Yale B and FERET. In addition, our method achieves the best identification performance on the AR and Labeled Faces in the Wild datasets among the comparative methods.
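The filter-learning stage can be sketched as a few Lloyd iterations of K-means over vectorized image patches, whose centroids then serve as convolution filters (patch whitening and the cascading of layers are omitted here):

```python
import numpy as np

def kmeans_filters(patches, k=4, iters=10, seed=0):
    """patches: (n, patch_dim) array; returns (k, patch_dim) centroids
    usable as convolution filters after reshaping to the patch size."""
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each patch to its nearest centroid, then recompute means.
        d = ((patches[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = patches[labels == j].mean(0)
    return centers
```

Because K-means needs no labels and no backpropagation, this layer trains quickly on modest data, which is the "lightweight" advantage claimed over conventional feature learning.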
NASA Astrophysics Data System (ADS)
Wu, Leyuan
2018-01-01
We present a brief review of gravity forward algorithms in the Cartesian coordinate system, including both space-domain and Fourier-domain approaches, after which we introduce a truly general and efficient algorithm, namely the convolution-type Gauss fast Fourier transform (Conv-Gauss-FFT) algorithm, for 2D and 3D modeling of the gravity potential and its derivatives due to sources with arbitrary geometry and arbitrary density distribution, defined either by discrete or by continuous functions. The Conv-Gauss-FFT algorithm is based on the combined use of a hybrid rectangle-Gaussian grid and the fast Fourier transform (FFT) algorithm. Since the gravity forward problem in the Cartesian coordinate system can be expressed as continuous convolution-type integrals, we first approximate the continuous convolution by a weighted sum of a series of shifted discrete convolutions, and then each shifted discrete convolution, which is essentially a Toeplitz system, is calculated efficiently and accurately by combining circulant embedding with the FFT algorithm. Synthetic and real model tests show that the Conv-Gauss-FFT algorithm obtains high-precision forward results very efficiently for almost any practical model, and it works especially well for complex 3D models when gravity fields on large 3D regular grids are needed.
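The core circulant-embedding trick mentioned above can be illustrated in 1D: a Toeplitz matrix-vector product is embedded in a circulant system of twice the size and evaluated with the FFT in O(n log n). The hybrid Gaussian grid and the 2D/3D block-Toeplitz structure of the Conv-Gauss-FFT algorithm are omitted from this sketch.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix (first column c, first row r) by x
    using circulant embedding and the FFT, in O(n log n) time.

    1D sketch of the core trick; the gravity code applies the same idea
    to 2D/3D block-Toeplitz systems.
    """
    n = len(x)
    # First column of the embedding circulant: c, one zero, then r reversed.
    col = np.concatenate([c, [0.0], r[1:][::-1]])
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
    return y[:n].real

# Compare against a dense Toeplitz multiply on a small example.
rng = np.random.default_rng(2)
n = 6
c = rng.standard_normal(n)                                # first column
r = np.concatenate([[c[0]], rng.standard_normal(n - 1)])  # first row
x = rng.standard_normal(n)
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
assert np.allclose(toeplitz_matvec(c, r, x), T @ x)
```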
A convolutional neural network to filter artifacts in spectroscopic MRI.
Gurbani, Saumya S; Schreibmann, Eduard; Maudsley, Andrew A; Cordova, James Scott; Soher, Brian J; Poptani, Harish; Verma, Gaurav; Barker, Peter B; Shim, Hyunsuk; Cooper, Lee A D
2018-03-09
Proton MRSI is a noninvasive modality capable of generating volumetric maps of in vivo tissue metabolism without the need for ionizing radiation or injected contrast agent. Magnetic resonance spectroscopic imaging has been shown to be a viable imaging modality for studying several neuropathologies. However, a key hurdle in the routine clinical adoption of MRSI is the presence of spectral artifacts that can arise from a number of sources, possibly leading to false information. A deep learning model was developed that was capable of identifying and filtering out poor quality spectra. The core of the model used a tiled convolutional neural network that analyzed frequency-domain spectra to detect artifacts. When compared with a panel of MRS experts, our convolutional neural network achieved high sensitivity and specificity with an area under the curve of 0.95. A visualization scheme was implemented to better understand how the convolutional neural network made its judgement on single-voxel or multivoxel MRSI, and the convolutional neural network was embedded into a pipeline capable of producing whole-brain spectroscopic MRI volumes in real time. The fully automated method for assessment of spectral quality provides a valuable tool to support clinical MRSI or spectroscopic MRI studies for use in fields such as adaptive radiation therapy planning. © 2018 International Society for Magnetic Resonance in Medicine.
Baczewski, Andrew David; Vikram, Melapudi; Shanker, Balasubramaniam; ...
2010-08-27
Diffusion, lossy wave, and Klein–Gordon equations find numerous applications in practical problems across a range of diverse disciplines. The temporal dependence of all three Green's functions is characterized by an infinite tail. This implies that the cost of the spatio-temporal convolutions associated with evaluating the potentials scales as O(Ns^2 Nt^2), where Ns and Nt are the number of spatial and temporal degrees of freedom, respectively. In this paper, we discuss two new methods to rapidly evaluate these spatio-temporal convolutions by exploiting their block-Toeplitz nature within the framework of accelerated Cartesian expansions (ACE). The first scheme identifies a convolution relation in time amongst ACE harmonics, and the fast Fourier transform (FFT) is used for efficient evaluation of these convolutions. The second method exploits the rank deficiency of the ACE translation operators with respect to time and develops a recursive numerical compression scheme for the efficient representation and evaluation of temporal convolutions. It is shown that the cost of both methods scales as O(Ns Nt log^2 Nt). Furthermore, several numerical results are presented for the diffusion equation to validate the accuracy and efficacy of the fast algorithms developed here.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noronha, Jorge; Denicol, Gabriel S.
In this paper we obtain an analytical solution of the relativistic Boltzmann equation under the relaxation time approximation that describes the out-of-equilibrium dynamics of a radially expanding massless gas. This solution is found by mapping this expanding system in flat spacetime to a static flow in the curved spacetime AdS2 ⊗ S2. We further derive explicit analytic expressions for the momentum dependence of the single-particle distribution function as well as for the spatial dependence of its moments. We find that this dissipative system has the ability to flow as a perfect fluid even though its entropy density does not match the equilibrium form. The nonequilibrium contribution to the entropy density is shown to be due to higher-order scalar moments (which possess no hydrodynamical interpretation) of the Boltzmann equation that can remain out of equilibrium but do not couple to the energy-momentum tensor of the system. Furthermore, in this system the slowly moving hydrodynamic degrees of freedom can exhibit true perfect fluidity while being totally decoupled from the fast moving, nonhydrodynamical microscopic degrees of freedom that lead to entropy production.
Enhanced line integral convolution with flow feature detection
DOT National Transportation Integrated Search
1995-01-01
Prepared ca. 1995. The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain [Cabral & Leedom '93]. The method produces a flow texture imag...
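The basic LIC idea from Cabral & Leedom '93 can be sketched as follows: each output pixel averages a white-noise texture along a short streamline of the vector field traced forward and backward. The Euler tracing, box kernel, streamline length, and wrap-around boundary handling are illustrative simplifications, not the enhanced method of this report.

```python
import numpy as np

def lic(vx, vy, noise, length=8, step=0.5):
    """Minimal line integral convolution: average a noise texture along
    short streamlines of the vector field (vx, vy).

    Sketch only: Euler streamline tracing, a box kernel, and periodic
    wrap-around at the borders are illustrative simplifications.
    """
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            acc, cnt = 0.0, 0
            for sgn in (1.0, -1.0):          # trace both directions
                x, y = float(j), float(i)
                for _ in range(length):
                    yi, xi = int(round(y)) % h, int(round(x)) % w
                    acc += noise[yi, xi]
                    cnt += 1
                    u, v = vx[yi, xi], vy[yi, xi]
                    norm = np.hypot(u, v) or 1.0
                    x += sgn * step * u / norm
                    y += sgn * step * v / norm
            out[i, j] = acc / cnt
    return out

# Uniform horizontal flow: the noise is blurred along rows only.
rng = np.random.default_rng(3)
noise = rng.random((16, 16))
img = lic(np.ones((16, 16)), np.zeros((16, 16)), noise)
assert img.shape == noise.shape
assert img.std() < noise.std()  # averaging along streamlines reduces variance
```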
NASA Astrophysics Data System (ADS)
Pereira, Carina; Dighe, Manjiri; Alessio, Adam M.
2018-02-01
Various computer-aided diagnosis (CAD) systems have been developed that characterize thyroid nodules using features extracted from B-mode ultrasound images and shear wave elastography (SWE) images. These features, however, are not perfect predictors of malignancy. In other domains, deep learning techniques such as convolutional neural networks (CNNs) have outperformed conventional feature-extraction-based machine learning approaches. In general, fully trained CNNs require substantial volumes of data, motivating several efforts to use transfer learning with pre-trained CNNs. In this context, we sought to compare the performance of conventional feature extraction, fully trained CNNs, and transfer learning with pre-trained CNNs for the detection of thyroid malignancy from ultrasound images. We compared these approaches applied to a data set of 964 B-mode and SWE images from 165 patients. The data were divided into 80% training/validation and 20% testing data. The highest accuracies achieved on the testing data for conventional feature extraction, the fully trained CNN, and the pre-trained CNN were 0.80, 0.75, and 0.83, respectively. In this application, classification using a pre-trained network yielded the best performance, potentially due to the relatively limited sample size and sub-optimal architecture for the fully trained CNN.
Mobile robots: motor challenges and materials solutions.
Madden, John D
2007-11-16
Bolted-down robots labor in our factories, performing the same task over and over again. Where are the robots that run and jump? Equaling human performance is very difficult for many reasons, including the basic challenge of demonstrating motors and transmissions that efficiently match the power per unit mass of muscle. In order to exceed animal agility, new actuators are needed. Materials that change dimension in response to applied voltage, so-called artificial muscle technologies, outperform muscle in most respects and so provide a promising means of improving robots. In the longer term, robots powered by atomically perfect fibers will outrun us all.
NASA Astrophysics Data System (ADS)
Huang, Jinxia; Wang, Junfa; Yu, Yonghong
This article designs a gripping-belt speed automatic tracking system for a traditional Chinese herbal harvester, built around an AT89C52 single-chip microcomputer combined with a fuzzy PID control algorithm. The system adjusts the gripping-belt speed in accordance with variations in the machine's operation, so that the machine's operating speed and the gripping-belt speed are well matched. The harvesting performance of the machine can thereby be improved greatly. The system design includes both hardware and software.
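A minimal simulation sketch of speed tracking with a PID controller whose proportional gain is adjusted by coarse fuzzy-style rules, in the spirit of the system described above. The setpoint, plant model, gains, and rule thresholds are illustrative assumptions, not the authors' design.

```python
# Illustrative fuzzy-gain PID sketch; all numeric values are assumptions.

def fuzzy_gain(error, base_kp=0.8):
    """Crude 'fuzzy' schedule: larger proportional gain for large error."""
    e = abs(error)
    if e > 1.0:
        return base_kp * 1.5
    if e > 0.3:
        return base_kp
    return base_kp * 0.6

def track(setpoint, steps=200, dt=0.05):
    """Simulate a first-order belt-speed plant under fuzzy-PID control."""
    speed, integ, prev_e = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = setpoint - speed
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = fuzzy_gain(e) * e + 0.5 * integ + 0.02 * deriv
        prev_e = e
        # First-order plant: speed relaxes toward the control input.
        speed += dt * (u - speed)
    return speed

final = track(2.0)
assert abs(final - 2.0) < 0.2  # belt speed converges toward the setpoint
```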
Low Temperature and Neutron Physics Studies: Final Progress Report, March 1, 1986--May 31, 1987
DOE R&D Accomplishments Database
Shull, C.G.
1989-07-27
A search for a novel coupling interaction between the Pendelloesung periodicity which is formed in a diffracting crystal and the Larmor precession of neutrons in a magnetic field has been carried out. This interaction is expected to exhibit a resonant behavior when the two spatial periodicities become matched upon scanning the magnetic field being applied to the crystal. Observations on a diffracting, perfect crystal of silicon with neutrons of wavelength 1 Angstrom show the expected resonant action, but some discrepancy in the observed magnitude of the resonance effects remains to be interpreted.
Building block synthesis using the polymerase chain assembly method.
Marchand, Julie A; Peccoud, Jean
2012-01-01
De novo gene synthesis allows the creation of custom DNA molecules without the typical constraints of traditional cloning assembly: scars, restriction site incompatibility, and the quest to find all the desired parts to name a few. Moreover, with the help of computer-assisted design, the perfect DNA molecule can be created along with its matching sequence ready to download. The challenge is to build the physical DNA molecules that have been designed with the software. Although there are several DNA assembly methods, this section presents and describes a method using the polymerase chain assembly (PCA).
Deep learning in color: towards automated quark/gluon jet discrimination
Komiske, Patrick T.; Metodiev, Eric M.; Schwartz, Matthew D.
2017-01-25
Artificial intelligence offers the potential to automate challenging data-processing tasks in collider physics. Here, to establish its prospects, we explore to what extent deep learning with convolutional neural networks can discriminate quark and gluon jets better than observables designed by physicists. Our approach builds upon the paradigm that a jet can be treated as an image, with intensity given by the local calorimeter deposits. We supplement this construction by adding color to the images, with red, green and blue intensities given by the transverse momentum in charged particles, transverse momentum in neutral particles, and pixel-level charged particle counts. Overall, the deep networks match or outperform traditional jet variables. We also find that, while various simulations produce different quark and gluon jets, the neural networks are surprisingly insensitive to these differences, similar to traditional observables. This suggests that the networks can extract robust physical information from imperfect simulations.
Matching between the light spots and lenslets of an artificial compound eye system
NASA Astrophysics Data System (ADS)
He, Jianzheng; Jian, Huijie; Zhu, Qitao; Ma, Mengchao; Wang, Keyi
2017-10-01
As the visual organ of many arthropods, the compound eye has attracted much attention for its wide field of view, multi-channel imaging ability, and high agility. Extending this concept, a new kind of artificial compound eye device is developed. It has 141 lenslets, distributed evenly on a curved surface, that share one image sensor, so it is difficult to determine which lenslet a light spot belongs to during the calibration and positioning processes. Therefore, a matching algorithm is proposed based on the device structure and the principles of calibration and positioning. Region partition of the lenslet array is performed first. Each lenslet and its adjacent lenslets are defined as cluster eyes and organized into an index table. In the calibration process, a polar coordinate system is established, and matching is accomplished by comparing the rotary table position in the polar coordinate system with the central light spot angle in the image. In the positioning process, each spot is first paired to the correct region according to the spot distribution, and the final result is determined by the dispersion of the distance from the target point to the incident ray during region traversal matching. Finally, the experimental results show that the presented algorithms provide a feasible and efficient way to match spots to lenslets, and meet the needs of practical applications of the compound eye system.
The decoding of majority-multiplexed signals by means of dyadic convolution
NASA Astrophysics Data System (ADS)
Losev, V. V.
1980-09-01
The maximum likelihood method often cannot be used for the decoding of majority-multiplexed signals because of the large number of computations required. This paper describes a fast dyadic convolution transform that can be used to reduce the number of computations.
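Dyadic convolution, c[k] = sum over i of a[i]*b[k XOR i], is diagonalized by the Walsh-Hadamard transform, so a fast Walsh-Hadamard transform reduces the direct O(N^2) sum to O(N log N). The following sketch of that transform is a generic illustration of the speed-up, not the paper's specific decoder.

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform (unnormalized), O(N log N)."""
    a = a.copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def dyadic_convolution(a, b):
    """Dyadic (XOR) convolution c[k] = sum_i a[i]*b[k XOR i] via FWHT."""
    n = len(a)
    return fwht(fwht(a) * fwht(b)) / n

# Check against the direct definition on a length-8 example.
rng = np.random.default_rng(4)
a = rng.standard_normal(8)
b = rng.standard_normal(8)
direct = np.array([sum(a[i] * b[k ^ i] for i in range(8)) for k in range(8)])
assert np.allclose(dyadic_convolution(a, b), direct)
```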
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.
2014-01-01
This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.
[Application of numerical convolution in in vivo/in vitro correlation research].
Yue, Peng
2009-01-01
This paper introduces the concept and principles of in vivo/in vitro correlation (IVIVC) and convolution/deconvolution methods, and elucidates in detail a convolution strategy and method for calculating the in vivo absorption performance of pharmaceutics from their pharmacokinetic data in Excel, then applies the results to IVIVC research. First, the pharmacokinetic data were fitted by mathematical software to fill in missing points. Second, the parameters of the optimal fitted input function were determined by a trial-and-error method according to the convolution principle in Excel, under the hypothesis that all the input functions follow Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. In the examples, not only is the application of this method demonstrated in detail, but its simplicity and effectiveness are also proved by comparison with the compartment model method and the deconvolution method. It is shown to be a powerful tool for IVIVC research.
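The convolution step of IVIVC can be sketched numerically: the predicted plasma profile is the convolution of a Weibull-shaped input (absorption) rate with a unit impulse response. The Weibull parameters, the monoexponential impulse response, and the time grid below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Illustrative IVIVC convolution sketch; all parameter values are assumptions.

dt = 0.1
t = np.arange(0, 24, dt)                     # hours

# Weibull cumulative input F(t) = 1 - exp(-(t/td)^b); input rate = dF/dt.
td, b = 4.0, 1.5
F = 1.0 - np.exp(-(t / td) ** b)
rate = np.gradient(F, dt)

# Unit impulse response: one-compartment elimination, kel = 0.2 per hour.
kel = 0.2
uir = np.exp(-kel * t)

# Numerical convolution, truncated to the observation window.
conc = np.convolve(rate, uir)[: len(t)] * dt

assert abs(F[-1] - 1.0) < 0.05               # input essentially complete
assert conc.argmax() > 0 and conc[-1] < conc.max()  # profile rises then declines
```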
DeepFix: A Fully Convolutional Neural Network for Predicting Human Eye Fixations.
Kruthiventi, Srinivas S S; Ayush, Kumar; Babu, R Venkatesh
2017-09-01
Understanding and predicting the human visual attention mechanism is an active area of research in the fields of neuroscience and computer vision. In this paper, we propose DeepFix, a fully convolutional neural network, which models the bottom-up mechanism of visual attention via saliency prediction. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account, by using network layers with very large receptive fields. Generally, fully convolutional nets are spatially invariant, which prevents them from modeling location-dependent patterns (e.g., centre-bias). Our network handles this by incorporating a novel location-biased convolutional layer. We evaluate our model on multiple challenging saliency data sets and show that it achieves state-of-the-art results.
Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network.
Yoon, Jaehong; Lee, Jungnyun; Whang, Mincheol
2018-01-01
The features of event-related potentials (ERPs) have not been completely understood, and the illiteracy problem remains unsolved. To this end, the P300 peak has been used as the ERP feature in most brain-computer interface applications, but subjects who do not show such a peak are common. Recent developments in convolutional neural networks provide a way to analyze the spatial and temporal features of ERPs. Here, we train a convolutional neural network with two convolutional layers whose feature maps represent the spatial and temporal features of the event-related potential. We found that nonilliterate subjects' ERPs show high correlation between the occipital lobe and the parietal lobe, whereas illiterate subjects only show correlation between neural activities in the frontal lobe and the central lobe. The nonilliterates showed peaks at P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We conclude that the P700 peak may be the key ERP feature, as it appears in both illiterate and nonilliterate subjects.
NASA Astrophysics Data System (ADS)
Liu, Miaofeng
2017-07-01
In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most former methods, which require knowing beforehand the local information for corrupted pixels, we propose a 20-depth fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because images often have huge corruptions, or inpainting must be performed on a low-resolution image, on which existing approaches perform poorly, we also share parameters in local areas of layers to achieve spatial recursion and enlarge the receptive field. To ease the training of this deep network, skip connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and works excellently when realizing super-resolution and image inpainting simultaneously.
Convolutional encoding of self-dual codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1994-01-01
There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w ≡ 0 mod 4. The codes are of length 8m, with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two length-(4m-1) parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay code. In addition, the previously found constraint length K = 9 for the QR (48, 24; 12) code is lowered here to K = 8.
NASA Astrophysics Data System (ADS)
Park, Sun-Youp; Choi, Jin; Roh, Dong-Goo; Park, Maru; Jo, Jung Hyun; Yim, Hong-Suh; Park, Young-Sik; Bae, Young-Ho; Park, Jang-Hyun; Moon, Hong-Kyu; Choi, Young-Jun; Cho, Sungki; Choi, Eun-Jung
2016-09-01
As described in a previous paper (Park et al. 2013), the detector subsystem of the optical wide-field patrol (OWL) provides many observational data points for a single artificial satellite or piece of space debris in the form of small streaks, using a chopper system and a time tagger. The position and the corresponding time data are matched assuming that the length of a streak on the CCD frame is proportional to the duration of the exposure during which the chopper blades do not obscure the CCD window. In the previous study, however, the length was measured using the diagonal of the rectangle of the image area containing the streak; the results were quite ambiguous and inaccurate, allowing possible matching errors between position and time data. Furthermore, because only one (position, time) data point is created from each streak, the efficiency of the observation decreases. To define the length of a streak correctly, it is important to locate the endpoints of the streak. In this paper, a method using a differential convolution mask pattern is tested. This method can be used to obtain the positions where the pixel values change sharply. These endpoints can be regarded as directly detected positional data, and the number of data points is doubled as a result.
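The endpoint-detection idea can be illustrated on a 1D intensity profile taken along the streak direction: a derivative-like mask responds most strongly where the pixel values jump, i.e., at the two streak ends. The synthetic profile and the simple [-1, 0, 1] kernel below are illustrative, not the OWL pipeline's actual mask pattern.

```python
import numpy as np

# Toy endpoint detection with a differential convolution mask.
profile = np.zeros(40)
profile[12:29] = 1.0                     # streak occupies pixels 12..28
profile += 0.02 * np.sin(np.arange(40))  # mild background variation

kernel = np.array([-1.0, 0.0, 1.0])      # differential mask
resp = np.convolve(profile, kernel, mode="same")

# np.convolve flips the kernel, so the rising edge gives a strong
# negative peak and the falling edge a strong positive peak.
start = int(np.argmin(resp))             # streak start (rising edge)
end = int(np.argmax(resp))               # streak end (falling edge)
assert 11 <= start <= 13 and 27 <= end <= 29
```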
Visual Persons Behavior Diary Generation Model based on Trajectories and Pose Estimation
NASA Astrophysics Data System (ADS)
Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li
2018-03-01
Behavior patterns of persons are an important output of surveillance analysis. This paper focuses on a generation model for a visual person-behavior diary. The pipeline includes person detection, tracking, and behavior classification. This paper adopts the deep convolutional neural model YOLO (You Only Look Once) v2 for the person detection module. Multi-person tracking is based on the detection framework. The Hungarian assignment algorithm is used for matching. The person appearance model integrates an HSV color model and a hash code model. The motion of each person object is estimated by a Kalman filter. Multiple objects are matched with existing tracklets through appearance and predicted-location distances using the Hungarian assignment method. A long continuous trajectory for each person is obtained by a spatial-temporal continual linking algorithm, and face recognition information is used to identify the trajectory. The identified trajectories can then be used to generate the visual diary of person behavior based on scene context information and person action estimation. The relevant modules are tested on public datasets and our own captured video sets. The test results show that the method can be used to generate a visual person behavior diary with reasonable accuracy.
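The detection-to-tracklet matching step can be sketched as follows: build a cost matrix from combined appearance and predicted-location distances, then find the assignment with minimum total cost. For clarity this brute-forces all permutations; the Hungarian algorithm finds the same optimum in O(n^3). The cost values are illustrative assumptions.

```python
import itertools
import numpy as np

def best_assignment(cost):
    """Return the minimum-total-cost assignment (tracklet i -> detection
    perm[i]) by exhaustive search over permutations."""
    n = cost.shape[0]
    best, best_c = None, float("inf")
    for perm in itertools.permutations(range(n)):
        c = sum(cost[i, perm[i]] for i in range(n))
        if c < best_c:
            best, best_c = perm, c
    return best, best_c

# 3 tracklets x 3 detections: combined appearance + motion distances
# (illustrative values).
cost = np.array([[0.1, 0.9, 0.8],
                 [0.7, 0.2, 0.9],
                 [0.6, 0.8, 0.3]])
perm, total = best_assignment(cost)
assert perm == (0, 1, 2) and abs(total - 0.6) < 1e-9
```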
Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition
Kheradpisheh, Saeed Reza; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée
2016-01-01
Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to be able to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields and a hierarchy of layers which progressively extract more and more abstracted features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model and compared their results to those of humans with backward masking. Unlike all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to have representations that are consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations. PMID:27601096
Spectral interpolation - Zero fill or convolution. [image processing
NASA Technical Reports Server (NTRS)
Forman, M. L.
1977-01-01
Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
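The zero-fill technique the paper compares against can be sketched as follows: appending zeros to the input record before the FFT samples the same underlying spectrum on a finer frequency grid (it interpolates but adds no new information). The signal, record length, and padding factor are illustrative.

```python
import numpy as np

n, pad = 32, 4
t = np.arange(n)
x = np.cos(2 * np.pi * 5.3 * t / n)       # tone lying between FFT bins

spec = np.abs(np.fft.rfft(x))
spec_zf = np.abs(np.fft.rfft(np.concatenate([x, np.zeros((pad - 1) * n)])))

# The zero-filled spectrum has pad-times finer spacing, so its peak
# lands nearer the true frequency 5.3 (in units of the original bins).
peak = spec.argmax()                      # coarse grid
peak_zf = spec_zf.argmax() / pad          # fine grid, rescaled
assert peak == 5
assert abs(peak_zf - 5.3) < abs(peak - 5.3)
```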
NASA Technical Reports Server (NTRS)
Mccallister, R. D.; Crawford, J. J.
1981-01-01
It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.
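Maximum-likelihood convolutional decoding can be illustrated with a toy Viterbi decoder for a rate-1/2, constraint-length K=3 code with generators (7, 5) in octal. The flight decoder discussed above targets a far more demanding CMOS implementation; this sketch only shows the principle of minimum-distance path selection and single-error correction.

```python
# Toy rate-1/2, K=3 convolutional code; generators (7, 5) octal.
G = [0b111, 0b101]

def encode(bits):
    """Zero-terminated rate-1/2 convolutional encoding."""
    state, out = 0, []
    for b in list(bits) + [0, 0]:            # flush the register
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding (minimum Hamming-distance path)."""
    n_states = 4
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)    # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                 # hypothesize the input bit
                reg = (b << 2) | s
                exp = [bin(reg & g).count("1") & 1 for g in G]
                ns = reg >> 1
                m = metric[s] + (exp[0] != r[0]) + (exp[1] != r[1])
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[0][:-2]                     # drop the two flush bits

msg = [1, 0, 1, 1, 0, 0, 1]
code = encode(msg)
code[3] ^= 1                                 # inject a single bit error
assert viterbi(code) == msg                  # the error is corrected
```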
Langenbucher, Frieder
2003-11-01
Convolution and deconvolution are the classical in vitro-in vivo correlation tools used to describe the relationship between input and weighting/response in a linear system, where input represents the drug release in vitro and weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general surveys or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm on its own, but the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
Acral melanoma detection using a convolutional neural network for dermoscopy images.
Yu, Chanki; Yang, Sejung; Kim, Wonoh; Jung, Jinwoong; Chung, Kee-Yang; Lee, Sang Wook; Oh, Byungho
2018-01-01
Acral melanoma is the most common type of melanoma in Asians, and usually results in a poor prognosis due to late diagnosis. We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. A total of 724 dermoscopy images comprising acral melanoma (350 images from 81 patients) and benign nevi (374 images from 194 patients), confirmed by histopathological examination, were analyzed in this study. To perform 2-fold cross validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the accuracy of diagnosis, comparing it with evaluations by a dermatologist and by a non-expert. The accuracy (percentage of true positives and true negatives among all images) of the convolutional neural network was 83.51% and 80.23%, which was higher than the non-expert's evaluation (67.84%, 62.71%) and close to that of the expert (81.08%, 81.64%). Moreover, the convolutional neural network showed area-under-the-curve values of 0.8 and 0.84 and Youden's index values of 0.6795 and 0.6073, similar to those of the expert. Although further data analysis is necessary to improve accuracy, convolutional neural networks could help detect acral melanoma from dermoscopy images of the hands and feet.
Soft Tissue Phantoms for Realistic Needle Insertion: A Comparative Study.
Leibinger, Alexander; Forte, Antonio E; Tan, Zhengchu; Oldfield, Matthew J; Beyrau, Frank; Dini, Daniele; Rodriguez Y Baena, Ferdinando
2016-08-01
Phantoms are common substitutes for soft tissues in biomechanical research and are usually tuned to match tissue properties using standard testing protocols at small strains. However, the response due to complex tool-tissue interactions can differ depending on the phantom, and no comprehensive comparative study that could aid researchers in selecting suitable materials has been published to date. In this work, gelatin, a common phantom in the literature, and a composite hydrogel developed at Imperial College were matched for mechanical stiffness to porcine brain, and the interactions during needle insertions within them were analyzed. Specifically, we examined insertion forces for brain and the phantoms; we also measured displacements and strains within the phantoms via a laser-based image correlation technique in combination with fluorescent beads. It is shown that the insertion forces for gelatin and brain agree closely, but that the composite hydrogel better mimics the viscous nature of soft tissue. Both materials match different characteristics of brain, but neither of them is a perfect substitute. Thus, when selecting a phantom material, both the soft tissue properties and the complex tool-tissue interactions arising during tissue manipulation should be taken into consideration. These conclusions are presented in tabular form to aid future selection.
NASA Technical Reports Server (NTRS)
hoelzer, H. D.; Fourroux, K. A.; Rickman, D. L.; Schrader, C. M.
2011-01-01
Figures of Merit (FoMs) and the FoM software provide a method for quantitatively evaluating the quality of a regolith simulant by comparing the simulant to a reference material. FoMs may be used for comparing a simulant to actual regolith material, for specification by stating the values a simulant's FoMs must attain to be suitable for a given application, and for comparing simulants from different vendors or production runs. FoMs may even be used to compare different simulants to each other. A single FoM is conceptually an algorithm that computes a single number quantifying the similarity or difference of a single characteristic of a simulant material and a reference material, providing a clear measure of how well the simulant and reference material match. FoMs are constructed to lie between zero and 1, with zero indicating a poor or no match and 1 indicating a perfect match. FoMs are defined for modal composition, particle size distribution, particle shape distribution (aspect ratio and angularity), and density. This TM covers the mathematics, use, installation, and licensing of the existing FoM code in detail.
An Interactive Graphics Program for Assistance in Learning Convolution.
ERIC Educational Resources Information Center
Frederick, Dean K.; Waag, Gary L.
1980-01-01
A program has been written for the interactive computer graphics facility at Rensselaer Polytechnic Institute that is designed to assist the user in learning the mathematical technique of convolving two functions. Because convolution can be represented graphically by a sequence of steps involving folding, shifting, multiplying, and integrating, it…
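The fold-shift-multiply-integrate sequence the program visualizes can also be sketched numerically; a minimal Riemann-sum version, where the function choices and step size are illustrative:

```python
# Graphical convolution in code: y(t) = integral of f(tau) * g(t - tau) dtau.
# g(t - tau) is the folded-and-shifted copy of g; multiplying by f and
# summing approximates the integration step. Step size dt is illustrative.

def conv_at(f, g, t, dt=0.001, t_max=5.0):
    """Approximate (f * g)(t) for causal f, g supported on [0, t_max]."""
    total = 0.0
    tau = 0.0
    while tau <= t_max:
        total += f(tau) * g(t - tau) * dt   # multiply folded, shifted g by f
        tau += dt
    return total

# Classic example: convolving two unit pulses on [0, 1) yields a triangle.
pulse = lambda x: 1.0 if 0.0 <= x < 1.0 else 0.0
y_peak = conv_at(pulse, pulse, 1.0)   # triangle peak, approximately 1.0
y_half = conv_at(pulse, pulse, 0.5)   # rising edge, approximately 0.5
```

Evaluating `conv_at` over a grid of `t` values reproduces the sequence of frames the graphics program steps through.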
Computational Prediction of Alzheimer’s and Parkinson’s Disease MicroRNAs in Domestic Animals
Wang, Hai Yang; Lin, Zi Li; Yu, Xian Feng; Bao, Yuan; Cui, Xiang-Shun; Kim, Nam-Hyung
2016-01-01
As the most common neurodegenerative diseases, Alzheimer’s disease (AD) and Parkinson’s disease (PD) are two of the main health concerns for the elderly population. Recently, microRNAs (miRNAs) have been used as biomarkers of infectious, genetic, and metabolic diseases in humans but they have not been well studied in domestic animals. Here we describe a computational biology study in which human AD- and PD-associated miRNAs (ADM and PDM) were utilized to predict orthologous miRNAs in the following domestic animal species: dog, cow, pig, horse, and chicken. In this study, a total of 121 and 70 published human ADM and PDM were identified, respectively. Thirty-seven miRNAs were co-regulated in AD and PD. We identified a total of 105 unrepeated human ADM and PDM that had at least one 100% identical animal homolog, among which 81 and 54 showed 100% sequence identity with 241 and 161 domestic animal miRNAs, respectively. Over 20% of the total mature horse miRNAs (92) showed perfect matches to AD/PD-associated miRNAs. Pigs, dogs, and cows have similar numbers of AD/PD-associated miRNAs (63, 62, and 59). Chickens had the least number of perfect matches (34). Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses suggested that humans and dogs are relatively similar in the functional pathways of the five selected highly conserved miRNAs. Taken together, our study provides the first evidence for better understanding the miRNA-AD/PD associations in domestic animals, and provides guidance to generate domestic animal models of AD/PD to replace the current rodent models. PMID:26954182
Parallel Ellipsoidal Perfectly Matched Layers for Acoustic Helmholtz Problems on Exterior Domains
Bunting, Gregory; Prakash, Arun; Walsh, Timothy; ...
2018-01-26
Exterior acoustic problems occur in a wide range of applications, making the finite element analysis of such problems a common practice in the engineering community. Various methods for truncating infinite exterior domains have been developed, including absorbing boundary conditions, infinite elements, and more recently, perfectly matched layers (PML). PML are gaining popularity due to their generality, ease of implementation, and effectiveness as an absorbing boundary condition. PML formulations have been developed in Cartesian, cylindrical, and spherical geometries, but not ellipsoidal. In addition, the parallel solution of PML formulations with iterative solvers for the solution of the Helmholtz equation, and how this compares with more traditional strategies such as infinite elements, has not been adequately investigated. In this study, we present a parallel, ellipsoidal PML formulation for acoustic Helmholtz problems. To facilitate the meshing process, the ellipsoidal PML layer is generated with an on-the-fly mesh extrusion. Though the complex stretching is defined along ellipsoidal contours, we modify the Jacobian to include an additional mapping back to Cartesian coordinates in the weak formulation of the finite element equations. This allows the equations to be solved in Cartesian coordinates, which is more compatible with existing finite element software, but without the necessity of dealing with corners in the PML formulation. Herein we also compare the conditioning and performance of the PML Helmholtz problem with an infinite element approach that is based on high order basis functions. On a set of representative exterior acoustic examples, we show that high order infinite element basis functions lead to an increasing number of Helmholtz solver iterations, whereas for PML the number of iterations remains constant for the same level of accuracy. This provides an additional advantage of PML over the infinite element approach.
Automated Detection of Fronts using a Deep Learning Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Biard, J. C.; Kunkel, K.; Racah, E.
2017-12-01
A deeper understanding of climate model simulations and the future effects of global warming on extreme weather can be attained through direct analyses of the phenomena that produce weather. Such analyses require these phenomena to be identified in automatic, unbiased, and comprehensive ways. Atmospheric fronts are centrally important weather phenomena because of the variety of significant weather events, such as thunderstorms, directly associated with them. In current operational meteorology, fronts are identified and drawn visually based on the approximate spatial coincidence of a number of quasi-linear localized features - a trough (relative minimum) in air pressure in combination with gradients in air temperature and/or humidity and a shift in wind - and are categorized as cold, warm, stationary, or occluded, with each type exhibiting somewhat different characteristics. Fronts are extended in space with one dimension much larger than the other (often represented by complex curved lines), which poses a significant challenge for automated approaches. We addressed this challenge by using a Deep Learning Convolutional Neural Network (CNN) to automatically identify and classify fronts. The CNN was trained using a "truth" dataset of front locations identified by National Weather Service meteorologists as part of operational 3-hourly surface analyses. The input to the CNN is a set of 5 gridded fields of surface atmospheric variables, including 2m temperature, 2m specific humidity, surface pressure, and the two components of the 10m horizontal wind velocity vector at 3-hr resolution. The output is a set of feature maps containing the per-grid-cell probabilities for the presence of the 4 front types. The CNN was trained on a subset of the data and then used to produce front probabilities for each 3-hr time snapshot over a 14-year period covering the continental United States and some adjacent areas.
The total frequencies of fronts derived from the CNN outputs match the truth dataset very well. There is a slight underestimate in total numbers in the CNN results, but the spatial pattern is a close match. The categorization of front types by the CNN is best for cold and occluded fronts and worst for warm fronts. These initial results from our ongoing development highlight the great promise of this technology.
Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber
NASA Astrophysics Data System (ADS)
Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.
2017-03-01
We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.
Rock images classification by using deep convolution neural network
NASA Astrophysics Data System (ADS)
Cheng, Guojian; Guo, Wenhui
2017-08-01
Granularity analysis is one of the most essential issues in rock authentication under the microscope. To improve the efficiency and accuracy of traditional manual work, a convolutional neural network based method is proposed for granularity analysis of thin section images, which selects and extracts features from image samples while building a classifier to recognize the granularity of input image samples. 4800 samples from the Ordos basin are used for experiments in the HSV, YCbCr and RGB colour spaces respectively. On the test dataset, the correct rate in the RGB colour space is 98.5%, and the results in the HSV and YCbCr colour spaces are also credible. The results show that the convolutional neural network can classify the rock images with high reliability.
Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akabani, G.; Hawkins, W.G.; Eckblade, M.B.
1999-01-01
The objective of this study was to validate the use of a 3-D discrete Fourier transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations, which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
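The 3D-DFT convolution rests on the DFT convolution theorem: convolving an activity distribution with a dose point kernel equals the inverse DFT of the product of their DFTs. A 1-D sketch of that identity (the 3-D case applies the same identity along each axis); the activity and kernel values are illustrative, not dosimetric data:

```python
# DFT convolution theorem in 1-D: IDFT(DFT(a) * DFT(b)) is the circular
# convolution of a and b. Voxel activities and kernel values are illustrative.
import cmath

def dft(x, inverse=False):
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def circular_convolve(a, b):
    """Circular convolution via pointwise product in the frequency domain."""
    prod = [u * v for u, v in zip(dft(a), dft(b))]
    return [v.real for v in dft(prod, inverse=True)]

activity = [1.0, 0.0, 0.0, 2.0]   # voxel activities (arbitrary units)
kernel = [0.5, 0.25, 0.0, 0.25]   # symmetric dose point kernel (illustrative)
dose = circular_convolve(activity, kernel)
```

In practice an FFT replaces this direct DFT, which is what makes the 3-D voxel-by-voxel dose computation tractable.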
Convolute laminations — a theoretical analysis: example of a Pennsylvanian sandstone
NASA Astrophysics Data System (ADS)
Visher, Glenn S.; Cunningham, Russ D.
1981-03-01
Data from an outcropping laminated interval were collected and analyzed to test the applicability of a theoretical model describing instability of layered systems. Rayleigh—Taylor wave perturbations result at the interface between fluids of contrasting density, viscosity, and thickness. In the special case where reverse density and viscosity interlaminations are developed, the deformation response produces a single wave with predictable amplitudes, wavelengths, and amplification rates. Physical measurements from both the outcropping section and modern sediments suggest the usefulness of the model for the interpretation of convolute laminations. Internal characteristics of the stratigraphic interval, and the developmental sequence of convoluted beds, are used to document the developmental history of these structures.
Detecting of foreign object debris on airfield pavement using convolution neural network
NASA Astrophysics Data System (ADS)
Cao, Xiaoguang; Gu, Yufeng; Bai, Xiangzhi
2017-11-01
It is of great practical significance to detect foreign object debris (FOD) timely and accurately on the airfield pavement, because FOD is a fatal threat to runway safety at airports. In this paper, a new FOD detection framework based on the Single Shot MultiBox Detector (SSD) is proposed. Two strategies, making the detection network lighter and using dilated convolution, are proposed to better solve the FOD detection problem. The advantages mainly include: (i) the network structure becomes lighter to speed up the detection task and enhance detection accuracy; (ii) dilated convolution is applied in the network structure to handle smaller FOD. Thus, we get a faster and more accurate detection system.
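Dilated convolution, the second strategy, inserts gaps between kernel taps so the receptive field grows without adding parameters, which helps with small targets. A minimal 1-D sketch (kernel and signal are illustrative; CNN frameworks apply the same idea in 2-D):

```python
# 1-D dilated convolution (cross-correlation convention, as in CNNs):
# taps are spaced `dilation` samples apart, enlarging the receptive field
# from len(kernel) to (len(kernel)-1)*dilation + 1 with no extra weights.

def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode dilated convolution of a 1-D signal."""
    span = (len(kernel) - 1) * dilation + 1   # effective receptive field
    return [sum(kernel[j] * signal[i + j * dilation]
                for j in range(len(kernel)))
            for i in range(len(signal) - span + 1)]

x = [1, 2, 3, 4, 5, 6]
k = [1, 0, -1]
y1 = dilated_conv1d(x, k, dilation=1)   # receptive field of 3 samples
y2 = dilated_conv1d(x, k, dilation=2)   # receptive field of 5, same 3 weights
```

With `dilation=2` the same three weights see a span of five samples, which is the effect the authors exploit for small FOD.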
Jones, David T; Kandathil, Shaun M
2018-04-26
In addition to substitution frequency data from protein sequence alignments, many state-of-the-art methods for contact prediction rely on additional sources of information, or features, of protein sequences in order to predict residue-residue contacts, such as solvent accessibility, predicted secondary structure, and scores from other contact prediction methods. It is unclear how much of this information is needed to achieve state-of-the-art results. Here, we show that using deep neural network models, simple alignment statistics contain sufficient information to achieve state-of-the-art precision. Our prediction method, DeepCov, uses fully convolutional neural networks operating on amino-acid pair frequency or covariance data derived directly from sequence alignments, without using global statistical methods such as sparse inverse covariance or pseudolikelihood estimation. Comparisons against CCMpred and MetaPSICOV2 show that using pairwise covariance data calculated from raw alignments as input allows us to match or exceed the performance of both of these methods. Almost all of the achieved precision is obtained when considering relatively local windows (around 15 residues) around any member of a given residue pairing; larger window sizes have comparable performance. Assessment on a set of shallow sequence alignments (fewer than 160 effective sequences) indicates that the new method is substantially more precise than CCMpred and MetaPSICOV2 in this regime, suggesting that improved precision is attainable on smaller sequence families. Overall, the performance of DeepCov is competitive with the state of the art, and our results demonstrate that global models, which employ features from all parts of the input alignment when predicting individual contacts, are not strictly needed in order to attain precise contact predictions. DeepCov is freely available at https://github.com/psipred/DeepCov. d.t.jones@ucl.ac.uk.
Spatial Angular Compounding Technique for H-Scan Ultrasound Imaging.
Khairalseed, Mawia; Xiong, Fangyuan; Kim, Jung-Whan; Mattrey, Robert F; Parker, Kevin J; Hoyt, Kenneth
2018-01-01
H-Scan is a new ultrasound imaging technique that relies on matching a model of pulse-echo formation to the mathematics of a class of Gaussian-weighted Hermite polynomials. This technique may be beneficial in the measurement of relative scatterer sizes and in cancer therapy, particularly for early response to drug treatment. Because current H-scan techniques use focused ultrasound data acquisitions, spatial resolution degrades away from the focal region and inherently affects relative scatterer size estimation. Although the resolution of ultrasound plane wave imaging can be inferior to that of traditional focused ultrasound approaches, the former exhibits a homogeneous spatial resolution throughout the image plane. The purpose of this study was to implement H-scan using plane wave imaging and investigate the impact of spatial angular compounding on H-scan image quality. Parallel convolution filters using two different Gaussian-weighted Hermite polynomials that describe ultrasound scattering events are applied to the radiofrequency data. The H-scan processing is done on each radiofrequency image plane before averaging to get the angular compounded image. The relative strength from each convolution is color-coded to represent relative scatterer size. Given results from a series of phantom materials, H-scan imaging with spatial angular compounding more accurately reflects the true scatterer size caused by reductions in the system point spread function and improved signal-to-noise ratio. Preliminary in vivo H-scan imaging of tumor-bearing animals suggests this modality may be useful for monitoring early response to chemotherapeutic treatment. Overall, H-scan imaging using ultrasound plane waves and spatial angular compounding is a promising approach for visualizing the relative size and distribution of acoustic scattering sources. Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
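The parallel convolution filters at the heart of H-scan can be sketched as matched filters built from Gaussian-weighted Hermite polynomials; the orders, sampling grid, and toy echo below are illustrative choices, not the paper's acquisition settings:

```python
# Sketch of the H-scan filtering idea: correlate RF data with two
# Gaussian-weighted Hermite (GH) pulses of different order and compare
# their relative strengths. A low-order echo should excite the low-order
# channel more strongly. All parameters here are illustrative.
import math

def hermite(n, t):
    """Physicists' Hermite polynomial H_n(t) by the standard recurrence."""
    h_prev, h = 1.0, 2.0 * t
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * t * h - 2.0 * k * h_prev
    return h

def gh_pulse(n, num=101, t_max=4.0):
    """Unit-energy Gaussian-weighted Hermite pulse sampled on [-t_max, t_max]."""
    ts = [-t_max + 2.0 * t_max * i / (num - 1) for i in range(num)]
    p = [math.exp(-t * t) * hermite(n, t) for t in ts]
    norm = math.sqrt(sum(v * v for v in p))
    return [v / norm for v in p]

def filter_energy(rf, pulse):
    """Total energy of the valid-mode correlation of rf with the pulse."""
    n_lags = len(rf) - len(pulse) + 1
    return sum(sum(pulse[j] * rf[i + j] for j in range(len(pulse))) ** 2
               for i in range(n_lags))

rf = gh_pulse(2) + [0.0] * 50            # toy RF line: one GH2-shaped echo
e_low = filter_energy(rf, gh_pulse(2))   # low-order channel
e_high = filter_energy(rf, gh_pulse(8))  # high-order channel
```

Color-coding the relative strengths `e_low` versus `e_high` per pixel is the step that encodes relative scatterer size in the H-scan image.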
Dideriksen, Jakob Lund; Feeney, Daniel F; Almuklass, Awad M; Enoka, Roger M
2017-08-01
Force trajectories during force-matching tasks involving isometric contractions vary substantially across individuals. In this study, we investigated whether this variability can be explained by discrete time proportional-integral-derivative (PID) control algorithms with varying model parameters. To this end, we analyzed the pinch force trajectories of 24 subjects performing two rapid force-matching tasks with visual feedback. Both tasks involved isometric contractions to a target force of 10% maximal voluntary contraction. One task involved a single action (pinch) and the other required a double action (concurrent pinch and wrist extension). 50,000 force trajectories were simulated with a computational neuromuscular model whose input was determined by a PID controller with different PID gains and frequencies at which the controller adjusted muscle commands. The goal was to find the best match between each experimental force trajectory and all simulated trajectories. It was possible to identify one realization of the PID controller that matched the experimental force produced during each task for most subjects (average index of similarity: 0.87 ± 0.12; 1 = perfect similarity). The similarities for both tasks were significantly greater than would be expected by chance (single action: p = 0.01; double action: p = 0.04). Furthermore, the identified control frequencies in the simulated PID controller with the greatest similarities decreased as task difficulty increased (single action: 4.0 ± 1.8 Hz; double action: 3.1 ± 1.3 Hz). Overall, the results indicate that discrete time PID controllers are realistic models for the neural control of force in rapid force-matching tasks involving isometric contractions.
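A discrete-time PID controller of the kind fitted here can be sketched against a crude first-order plant standing in for the neuromuscular model; the gains, time step, and plant constant are illustrative, not the parameter values identified in the study:

```python
# Discrete-time PID force controller driving a first-order "muscle" toward
# a target force (think 10% MVC). Gains and plant constant are illustrative.

def simulate_pid(kp, ki, kd, target=10.0, steps=200, dt=0.01, tau=0.1):
    force, integral = 0.0, 0.0
    prev_err = target            # so the first derivative term is zero
    history = []
    for _ in range(steps):
        err = target - force
        integral += err * dt
        derivative = (err - prev_err) / dt
        command = kp * err + ki * integral + kd * derivative
        prev_err = err
        # first-order plant: force relaxes toward the command signal
        force += dt / tau * (command - force)
        history.append(force)
    return history

trajectory = simulate_pid(kp=1.0, ki=5.0, kd=0.05)
```

In the study's framework, gains like these (plus the controller update frequency) are the free parameters swept over 50,000 simulations and matched against each subject's measured trajectory.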
Software for Verifying Image-Correlation Tie Points
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard; Yagi, Gary
2008-01-01
A computer program enables assessment of the quality of tie points in the image-correlation processes of the software described in the immediately preceding article. Tie points are computed in mappings between corresponding pixels in the left and right images of a stereoscopic pair. The mappings are sometimes not perfect because image data can be noisy and parallax can cause some points to appear in one image but not the other. The present computer program relies on the availability of a left-right correlation map in addition to the usual right-left correlation map. The additional map must be generated, which doubles the processing time. Such increased time can now be afforded in the data-processing pipeline, since the time for map generation has been reduced from about 60 to 3 minutes by the parallelization discussed in the previous article. Parallel cluster processing time, therefore, enabled this better science result. The first mapping is typically from a point (denoted by coordinates x,y) in the left image to a point (x',y') in the right image. The second mapping is from (x',y') in the right image to some point (x",y") in the left image. If (x,y) and (x",y") are identical, then the mapping is considered perfect. The perfect-match criterion can be relaxed by introducing an error window that admits round-off error and a small amount of noise. The mapping procedure can be repeated until all points in each image not connected to points in the other image are eliminated, so that what remains are verified correlation data.
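The round-trip verification described above can be sketched as follows; the coordinate maps and the one-pixel error window are illustrative stand-ins for the real correlation maps:

```python
# Left-right consistency check for tie points: keep (x, y) only if mapping
# into the right image and back lands within an error window of the start.
# The maps and window size below are illustrative.

def verify_tie_points(left_to_right, right_to_left, window=1):
    """Return the subset of tie points that survive the round-trip test."""
    verified = {}
    for (x, y), (xp, yp) in left_to_right.items():
        back = right_to_left.get((xp, yp))
        if back is not None and abs(back[0] - x) <= window \
                and abs(back[1] - y) <= window:
            verified[(x, y)] = (xp, yp)
    return verified

l2r = {(10, 10): (14, 10), (20, 5): (23, 6)}
r2l = {(14, 10): (10, 11),   # round-trip error of 1 pixel: kept
       (23, 6): (25, 9)}     # round-trip error exceeds window: rejected
good = verify_tie_points(l2r, r2l, window=1)
```

Setting `window=0` recovers the strict perfect-match criterion; a small positive window admits round-off error and noise, as in the text.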
Coding performance of the Probe-Orbiter-Earth communication link
NASA Technical Reports Server (NTRS)
Divsalar, D.; Dolinar, S.; Pollara, F.
1993-01-01
The coding performance of the Probe-Orbiter-Earth communication link is analyzed and compared for several cases. It is assumed that the coding system consists of a convolutional code at the Probe, a quantizer and another convolutional code at the Orbiter, and two cascaded Viterbi decoders or a combined decoder on the ground.
2012-03-01
… advanced antenna systems; AMC, adaptive modulation and coding; AWGN, additive white Gaussian noise; BPSK, binary phase shift keying; BS, base station; BTC, … QAM-16, and QAM-64, and coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding (BTC), zero-terminating …
Sequential Syndrome Decoding of Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to these codes. These concepts are then used to realize actual sequential decoding by example, using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.
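For context, the encoder side of such a code is a short shift register with modulo-2 taps; a minimal rate-1/2 convolutional encoder with constraint length 3 and octal generators 7 and 5 (the common textbook choice, not necessarily the code used in the report):

```python
# Rate-1/2 convolutional encoder: each input bit shifts into a 3-bit
# register, and two parity bits are emitted per input bit, one per
# generator polynomial. Generators 7 and 5 (octal) are illustrative.

def conv_encode(bits, g1=0b111, g2=0b101):
    """Encode a bit list, emitting two coded bits per input bit."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111           # shift in the new bit
        out.append(bin(state & g1).count("1") % 2)   # parity for generator 1
        out.append(bin(state & g2).count("1") % 2)   # parity for generator 2
    return out

codeword = conv_encode([1, 0, 1, 1])
```

A syndrome decoder works on the received stream's deviation from this encoding rule, searching (e.g. with the stack algorithm and the modified Fano metric) for the minimum-weight error sequence consistent with the syndromes.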
NASA Astrophysics Data System (ADS)
Zeng, X. G.; Liu, J. J.; Zuo, W.; Chen, W. L.; Liu, Y. X.
2018-04-01
Circular structures are widely distributed on the lunar surface. The most typical of these are lunar impact craters, lunar domes, and similar features. In this approach, we use a convolutional neural network to classify lunar circular structures in lunar images.
Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification
Yang, Xinyi
2016-01-01
In recent years, some deep learning methods have been developed and applied to image classification applications, such as convolutional neuron network (CNN) and deep belief network (DBN). However they are suffering from some problems like local minima, slow convergence rate, and intensive human intervention. In this paper, we propose a rapid learning method, namely, deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and fast training of ELM. It uses multiple alternate convolution layers and pooling layers to effectively abstract high level features from input images. Then the abstracted features are fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to reduce dimensionality of features greatly, thus saving much training time and computation resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods. PMID:27610128
A pre-trained convolutional neural network based method for thyroid nodule diagnosis.
Ma, Jinlian; Wu, Fa; Zhu, Jiang; Xu, Dong; Kong, Dexing
2017-01-01
In ultrasound images, most thyroid nodules have heterogeneous appearances with various internal components and vague boundaries, so it is difficult for physicians to discriminate malignant thyroid nodules from benign ones. In this study, we propose a hybrid method for thyroid nodule diagnosis, which is a fusion of two pre-trained convolutional neural networks (CNNs) with different convolutional layers and fully-connected layers. Firstly, the two networks pre-trained with the ImageNet database are separately trained. Secondly, we fuse the feature maps learned by the trained convolutional filters, pooling and normalization operations of the two CNNs. Finally, with the fused feature maps, a softmax classifier is used to diagnose thyroid nodules. The proposed method is validated on 15,000 ultrasound images collected from two local hospitals. Experiment results show that the proposed CNN based methods can accurately and effectively diagnose thyroid nodules. In addition, the fusion of the two CNN based models leads to a significant performance improvement, with an accuracy of 83.02%±0.72%. These results demonstrate the potential clinical applications of this method. Copyright © 2016 Elsevier B.V. All rights reserved.
Alcoholism Detection by Data Augmentation and Convolutional Neural Network with Stochastic Pooling.
Wang, Shui-Hua; Lv, Yi-Ding; Sui, Yuxiu; Liu, Shuai; Wang, Su-Jing; Zhang, Yu-Dong
2017-11-17
Alcohol use disorder (AUD) is an important brain disease that alters the brain structure. Recently, scholars have tended to use computer vision based techniques to detect AUD. We collected 235 subjects, 114 alcoholic and 121 non-alcoholic. Of the 235 images, 100 were used as the training set, with a data augmentation method applied; the remaining 135 images were used as the test set. We chose a powerful recent technique, the convolutional neural network (CNN), built from convolutional layers, rectified linear unit layers, pooling layers, fully connected layers, and a softmax layer. We also compared three different pooling techniques: max pooling, average pooling, and stochastic pooling. The results showed that our method achieved a sensitivity of 96.88%, a specificity of 97.18%, and an accuracy of 97.04%. Our method was better than three state-of-the-art approaches. Stochastic pooling performed better than max pooling and average pooling, and a CNN with five convolution layers and two fully connected layers performed best. The GPU yielded a 149× acceleration in training and a 166× acceleration in test compared to the CPU.
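Stochastic pooling, which performed best here, samples one activation per pooling window with probability proportional to its value, rather than taking the maximum or the mean; a minimal sketch over a single window (the window values and the seed are illustrative):

```python
# Stochastic pooling over one activation window: sample an activation with
# probability proportional to its (non-negative) value. This regularizes
# like dropout while keeping the selected value exact, unlike averaging.
import random

def stochastic_pool(window, rng=random.Random(0)):
    """Return one activation sampled proportionally to its magnitude."""
    total = sum(window)
    if total == 0:
        return 0.0            # all-zero window pools to zero
    r = rng.random() * total
    acc = 0.0
    for a in window:
        acc += a
        if r <= acc:
            return a
    return window[-1]

window = [0.1, 0.0, 0.6, 0.3]
picked = stochastic_pool(window)   # one of the nonzero activations
```

Max pooling would always return 0.6 here and average pooling 0.25; the stochastic variant returns 0.6 with probability 0.6, 0.3 with probability 0.3, and 0.1 with probability 0.1.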
NASA Technical Reports Server (NTRS)
Callier, F. M.; Desoer, C. A.
1973-01-01
A class of multivariable, nonlinear time-varying feedback systems with an unstable convolution subsystem as feedforward and a time-varying nonlinear gain as feedback was considered. The impulse response of the convolution subsystem is the sum of a finite number of increasing exponentials multiplied by nonnegative powers of the time t, a term that is absolutely integrable, and an infinite series of delayed impulses. The main result is a theorem: essentially, if the unstable convolution subsystem can be stabilized by a constant feedback gain F, and if the incremental gain of the difference between the nonlinear gain function and F is sufficiently small, then the nonlinear system is L(p)-stable for any p between one and infinity. Furthermore, the solutions of the nonlinear system depend continuously on the inputs in any L(p)-norm. A fixed point theorem is crucial in deriving this result.
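The small-gain flavor of the theorem can be sketched as follows. The notation here is assumed for illustration, not taken from the paper: G is the unstable convolution subsystem, phi the nonlinear time-varying feedback gain, F the stabilizing constant gain, and gamma denotes incremental L_p gain.

```latex
\text{If } H_F = G\,(I + FG)^{-1} \text{ is } L_p\text{-stable and }
\gamma(\phi - F)\,\gamma(H_F) < 1,
\text{ then the closed loop is } L_p\text{-stable for } 1 \le p \le \infty .
```

Loosely: once F tames the unstable part, the remaining nonlinearity phi - F only has to be a small enough perturbation of the stabilized linear loop.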
Evolutionary image simplification for lung nodule classification with convolutional neural networks.
Lückehe, Daniel; von Voigt, Gabriele
2018-05-29
Understanding the decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the classification results themselves. In this article, we propose a new approach to compute the relevant parts of a medical image; knowing the relevant parts makes decisions easier to understand. In our approach, a convolutional neural network is employed to learn the structures of images of lung nodules. An evolutionary algorithm is then applied to compute a simplified version of an unknown image based on the structures learned by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified, yet the simplified pixels do not change the meaning of the images with respect to the structures learned by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides examples of simplified images, we analyze the development of the run time. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm with a trained convolutional neural network is well suited to the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.
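The core loop can be sketched as a (1+1)-style hill climber: propose blanking a random patch and accept the mutation only if the classifier's decision is preserved. The `predict` function below is a hypothetical stand-in for a trained CNN, and the mutation operator and patch size are assumptions for illustration.

```python
import numpy as np

def simplify(image, predict, rng, iterations=300, patch=4):
    """Evolutionary image simplification: repeatedly propose blanking a
    random patch; keep the mutation only if the classifier's decision on
    the simplified image is unchanged ((1+1)-style hill climbing)."""
    label = predict(image)
    best = image.copy()
    h, w = image.shape
    for _ in range(iterations):
        cand = best.copy()
        i = rng.integers(0, h - patch + 1)
        j = rng.integers(0, w - patch + 1)
        cand[i:i+patch, j:j+patch] = 0.0     # "simplify" = remove pixels
        if predict(cand) == label:
            best = cand                      # decision preserved: accept
    return best
```

Regions that can be blanked without flipping the decision are, by construction, irrelevant to the classifier; whatever survives is what the decision rests on.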
Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.
NASA Astrophysics Data System (ADS)
Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.
2016-12-01
Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlation of seismic signals recorded at a pair of locations. In the case of ambient noise sources, convergence towards the surface wave Green's function is obtained under the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources offers more possibilities for interferometry: the use of controlled sources makes it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is 30 m in both the X and Y directions, an acquisition geometry known as "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from this dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher mode from the fundamental mode of the surface waves. Potential application to surface wave cancellation is also envisaged.
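The correlational approach can be illustrated with a 1-D toy: two receivers record a delayed copy of each source wavelet, and stacking the cross-correlations over sources produces a peak at the inter-receiver travel time. The Ricker wavelet, delays, and source count below are illustrative assumptions, not the survey's values.

```python
import numpy as np

def ricker(t, f0=25.0):
    """Ricker wavelet, a common synthetic seismic source."""
    a = (np.pi * f0 * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

dt = 0.001
t = np.arange(-0.5, 0.5, dt)
lag_true = 40                        # inter-receiver delay in samples
rng = np.random.default_rng(0)

stack = np.zeros(2 * len(t) - 1)
for _ in range(50):                  # 50 "sources" at random positions
    shift = rng.integers(-100, 100)
    rec_a = ricker(t - shift * dt)                # receiver A recording
    rec_b = ricker(t - (shift + lag_true) * dt)   # receiver B, delayed
    stack += np.correlate(rec_b, rec_a, mode="full")

lag = np.argmax(stack) - (len(t) - 1)  # peak lag of the stacked correlation
```

The source position cancels in each cross-correlation, so the stack peaks at the A-to-B delay, which is the essence of recovering the inter-receiver Green's function.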
Characterization of viral siRNA populations in honey bee colony collapse disorder.
Chejanovsky, Nor; Ophir, Ron; Schwager, Michal Sharabi; Slabezki, Yossi; Grossman, Smadar; Cox-Foster, Diana
2014-04-01
Colony Collapse Disorder (CCD), a special case of collapse of honey bee colonies, has resulted in significant losses for beekeepers. CCD colonies show an abundance of pathogens, which suggests a weakened immune system. Since honey bee viruses are major players in colony collapse, and given the important role of viral RNA interference (RNAi) in combating viral infections, we investigated whether CCD colonies elicit an RNAi response. Deep-sequencing analysis of samples from CCD colonies from the US and Israel revealed abundant small interfering RNAs (siRNAs) of 21-22 nucleotides perfectly matching the Israeli acute paralysis virus (IAPV), Kashmir bee virus, and Deformed wing virus genomes. Israeli colonies showed high titers of IAPV and a conserved pattern of siRNAs matching the viral genome, which was also observed in samples from colonies experimentally infected with IAPV. Our results suggest that CCD colonies mount an siRNA response that is specific against the predominant viruses associated with colony losses. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Childs, Dara W.; Alexander, Chis
1994-01-01
This viewgraph presentation presents the following results: (1) The analytical results overpredict the experimental results for the direct stiffness values and incorrectly predict increasing stiffness with decreasing pressure ratios. (2) Theory correctly predicts increasing cross-coupled stiffness, K(sub YX), with increasing eccentricity and inlet preswirl. (3) Direct damping, C(sub XX), underpredicts the experimental results, but the analytical results do correctly show that damping increases with increasing eccentricity. (4) The whirl frequency values predicted by theory are insensitive to changes in the static eccentricity ratio. Although these values match perfectly with the experimental results at 16,000 rpm, the results at the lower speed do not correspond. (5) Theoretical and experimental mass flow rates match at 5000 rpm, but at 16,000 rpm the theoretical results overpredict the experimental mass flow rates. (6) Theory correctly shows the linear pressure profiles and the associated entrance losses with the specified rotor positions.
Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection
Denison, Rachel N.; Driver, Jon; Ruff, Christian C.
2013-01-01
Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067
Self-Report Versus Medical Record for Mammography Screening Among Minority Women.
Nandy, Karabi; Menon, Usha; Szalacha, Laura A; Park, HanJong; Lee, Jongwon; Lee, Eunice E
2016-12-01
Self-report is the most common means of obtaining mammography screening data. The purpose of this study was to assess the accuracy of minority women's self-reported mammography by comparing their self-reported dates of mammograms with those in their medical records from a community-based randomized controlled trial. We found that out of 192 women, 116 signed the Health Insurance Portability and Accountability Act form and, among these, 97 had medical records that could be verified (97/116 = 83.6%). Ninety-two records matched where both sources confirmed a mammogram; 48 of 92 (52.2%) matched perfectly on the self-reported date of mammogram. Complexities in the verification process warrant caution when verifying self-reported mammography screening in minority populations. In spite of some limitations, our findings support the use of self-reported data on mammography as a validated tool for other researchers investigating mammography screening among minority women, who continue to have low screening rates. © The Author(s) 2016.
Comparing the locking threshold for rings and chains of oscillators.
Ottino-Löffler, Bertrand; Strogatz, Steven H
2016-12-01
We present a case study of how topology can affect synchronization. Specifically, we consider arrays of phase oscillators coupled in a ring or a chain topology. Each ring is perfectly matched to a chain with the same initial conditions and the same random natural frequencies. The only difference is their boundary conditions: periodic for a ring and open for a chain. For both topologies, stable phase-locked states exist if and only if the spread or "width" of the natural frequencies is smaller than a critical value called the locking threshold (which depends on the boundary conditions and the particular realization of the frequencies). The central question is whether a ring synchronizes more readily than a chain. We show that it usually does, but not always. Rigorous bounds are derived for the ratio between the locking thresholds of a ring and its matched chain, for a variant of the Kuramoto model that also includes a wider family of models.
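A minimal numerical sketch of the setup above, assuming nearest-neighbour sine (Kuramoto) coupling and testing for locking via the spread of effective frequencies; the frequencies, coupling strength, and tolerance are illustrative.

```python
import numpy as np

def kuramoto_locks(omega, K, ring=True, dt=0.01, steps=20000, tol=1e-3):
    """Integrate nearest-neighbour Kuramoto dynamics and report whether the
    array phase-locks, i.e. all oscillators settle to a common frequency."""
    n = len(omega)
    theta = np.zeros(n)
    for _ in range(steps):
        left, right = np.roll(theta, 1), np.roll(theta, -1)
        coupling = np.sin(left - theta) + np.sin(right - theta)
        if not ring:                 # open chain: endpoints have one neighbour
            coupling[0] = np.sin(theta[1] - theta[0])
            coupling[-1] = np.sin(theta[-2] - theta[-1])
        dtheta = omega + K * coupling
        theta = theta + dt * dtheta
    return float(dtheta.max() - dtheta.min()) < tol  # frequency spread ~ 0
```

The only code difference between the two topologies is the boundary handling, mirroring the paper's matched ring/chain construction: same frequencies, same initial conditions, different boundary conditions.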
Design of a broadband hemispherical wave collimator lens using the ray inserting method.
Taskhiri, Mohammad Mahdi; Amirhosseini, Mohammad Khalaj
2017-07-01
This paper presents a novel inhomogeneous hemispherical dielectric lens designed with the ray inserting method (RIM). Applying this approach, a uniform distribution of the rays' end points over the lens plane aperture is achieved while matching of the lens to the refractive index of the environment is perfectly fulfilled. Antenna features such as sidelobe level and gain can be changed by controlling the end point of each ray propagated through the hemispherical lens. The refractive index profile of the designed inhomogeneous hemispherical lens is derived and validated using COMSOL Multiphysics. The proposed lens is realized using material drilling and multilayer techniques, analyzed in CST Microwave Studio, and fabricated. Simulation and experimental results indicate good performance of the realized lens over a wide frequency bandwidth. Compared with other hemispherical lenses, such as the classical half Maxwell fish-eye lens, the RIM design achieves improvements in gain, sidelobe levels, and input matching.
NASA Astrophysics Data System (ADS)
Guddala, Sriram; Narayana Rao, D.; Ramakrishna, S. Anantha
2016-06-01
A tri-layer metamaterial perfect absorber of light, consisting of (Al/ZnS/Al) films with the top aluminum layer patterned as an array of circular disk nanoantennas, is investigated for resonantly enhancing Raman scattering from C60 fullerene molecules deposited on the metamaterial. The metamaterial is designed to have resonant bands due to plasmonic and electromagnetic resonances at the Raman pump frequency (725 nm) as well as Stokes emission bands. The Raman scattering from C60 on the metamaterial with resonantly matched bands is measured to be enhanced by an order of magnitude more than C60 on metamaterials with off-resonant absorption bands peaking at 1090 nm. The Raman pump is significantly enhanced due to the resonance with a propagating surface plasmon band, while the highly impedance-matched electromagnetic resonance is expected to couple out the Raman emission efficiently. The nature and hybridization of the plasmonic and electromagnetic resonances to form compound resonances are investigated by numerical simulations.
van 't Hag, Leonie; de Campo, Liliana; Garvey, Christopher J; Feast, George C; Leung, Anna E; Yepuri, Nageshwar Rao; Knott, Robert; Greaves, Tamar L; Tran, Nhiem; Gras, Sally L; Drummond, Calum J; Conn, Charlotte E
2016-07-21
An understanding of the location of peptides, proteins, and other biomolecules within the bicontinuous cubic phase is crucial for understanding and evolving biological and biomedical applications of these hybrid biomolecule-lipid materials, including during in meso crystallization and drug delivery. While theoretical modeling has indicated that proteins and additive lipids might phase separate locally and adopt a preferred location in the cubic phase, this has never been experimentally confirmed. We have demonstrated that perfectly contrast-matched cubic phases in D2O can be studied using small-angle neutron scattering by mixing fully deuterated and hydrogenated lipid at an appropriate ratio. The model transmembrane peptide WALP21 showed no preferential location in the membrane of the diamond cubic phase of phytanoyl monoethanolamide and was not incorporated in the gyroid cubic phase. While deuteration had a small effect on the phase behavior of the cubic phase forming lipids, the changes did not significantly affect our results.
Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin
2007-04-01
This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 x 10(-4) on the available data sets.
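The coding idea can be sketched as follows: take the DCT of overlapped patches along the unwrapped iris strip, binarize the sign of coefficient differences between successive patches, and compare codes with a fractional Hamming distance. The patch size, step, and coefficient count below are illustrative assumptions, not the optimized parameters of the paper, and the DCT here is 1-D and unnormalized (the sign-based code is unaffected by scaling).

```python
import numpy as np

def dct_ii(x):
    """Unnormalized DCT-II of a 1-D signal in explicit matrix form."""
    n = len(x)
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    return C @ x

def iris_code(strip, patch=8, step=4, n_coeff=6):
    """Binary code: sign of differences between the low-frequency DCT
    coefficients of successive overlapped patches along the iris strip."""
    coeffs = [dct_ii(strip[i:i + patch])[:n_coeff]
              for i in range(0, len(strip) - patch + 1, step)]
    diffs = np.diff(np.array(coeffs), axis=0)
    return (diffs > 0).astype(np.uint8).ravel()

def hamming(code_a, code_b):
    """Fractional Hamming distance between two binary codes."""
    return float(np.mean(code_a != code_b))
```

For verification, a threshold on this distance separates genuine comparisons (distance near 0) from impostor comparisons (distance near 0.5 for independent codes).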
Push-Pull and Feedback Mechanisms Can Align Signaling System Outputs with Inputs.
Andrews, Steven S; Peria, William J; Yu, Richard C; Colman-Lerner, Alejandro; Brent, Roger
2016-11-23
Many cell signaling systems, including the yeast pheromone response system, exhibit "dose-response alignment" (DoRA), in which output of one or more downstream steps closely matches the fraction of occupied receptors. DoRA can improve the fidelity of transmitted dose information. Here, we searched systematically for biochemical network topologies that produced DoRA. Most networks, including many containing feedback and feedforward loops, could not produce DoRA. However, networks including "push-pull" mechanisms, in which the active form of a signaling species stimulates downstream activity and the nominally inactive form reduces downstream activity, enabled perfect DoRA. Networks containing feedbacks enabled DoRA, but only if they also compared feedback to input and adjusted output to match. Our results establish push-pull as a non-feedback mechanism to align output with variable input and maximize information transfer in signaling systems. They also suggest genetic approaches to determine whether particular signaling systems use feedback or push-pull control. Copyright © 2016 Elsevier Inc. All rights reserved.
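A toy push-pull model illustrates why this mechanism yields perfect DoRA. This is an illustrative one-variable ODE assumed for exposition, not the paper's network models: the active receptor fraction u pushes the output up while the inactive fraction (1 - u) pulls it down, and with matched rate constants the steady-state output equals u exactly.

```python
import numpy as np

def pushpull_output(u, k_push=1.0, k_pull=1.0, dt=0.01, steps=5000):
    """Steady-state output y of dy/dt = k_push*u*(1-y) - k_pull*(1-u)*y.
    With k_push == k_pull the fixed point is y* = u: the output equals the
    fraction of occupied (active) receptors, i.e. dose-response alignment."""
    y = 0.0
    for _ in range(steps):
        y += dt * (k_push * u * (1 - y) - k_pull * (1 - u) * y)
    return y
```

Setting dy/dt = 0 with k_push = k_pull gives u(1 - y) = (1 - u)y, hence y = u for every dose, whereas mismatched push and pull rates distort the alignment.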
Identification of the remains of King Richard III
King, Turi E.; Fortes, Gloria Gonzalez; Balaresque, Patricia; Thomas, Mark G.; Balding, David; Delser, Pierpaolo Maisano; Neumann, Rita; Parson, Walther; Knapp, Michael; Walsh, Susan; Tonasso, Laure; Holt, John; Kayser, Manfred; Appleby, Jo; Forster, Peter; Ekserdjian, David; Hofreiter, Michael; Schürer, Kevin
2014-01-01
In 2012, a skeleton was excavated at the presumed site of the Grey Friars friary in Leicester, the last-known resting place of King Richard III. Archaeological, osteological and radiocarbon dating data were consistent with these being his remains. Here we report DNA analyses of both the skeletal remains and living relatives of Richard III. We find a perfect mitochondrial DNA match between the sequence obtained from the remains and one living relative, and a single-base substitution when compared with a second relative. Y-chromosome haplotypes from male-line relatives and the remains do not match, which could be attributed to a false-paternity event occurring in any of the intervening generations. DNA-predicted hair and eye colour are consistent with Richard’s appearance in an early portrait. We calculate likelihood ratios for the non-genetic and genetic data separately, and combined, and conclude that the evidence for the remains being those of Richard III is overwhelming. PMID:25463651
Abdelraouf, Rasha M; Habib, Nour A
2016-01-01
Objectives. To assess visually the color-matching and blending effect (BE) of a universal shade bulk-fill resin-composite placed in resin-composite models with different shades and cavity sizes and in natural teeth (extracted and patients' teeth). Materials and Methods. Resin-composite discs (10 mm × 1 mm) were prepared of the universal shade composite and of resin-composites of shades A1, A2, A3, A3.5, and A4. Spectrophotometric color measurement was performed to calculate the color difference (ΔE) between the universal shade and shaded resin-composite discs and to determine their translucency parameter (TP). Visual assessment was performed by seven normal-color-vision observers to determine the color-matching between the universal shade and each shade under Illuminant D65. Color-matching visual scoring (VS) values were expressed numerically (1-5): 1: mismatch/totally unacceptable; 2: poor match/hardly acceptable; 3: good match/acceptable; 4: close match/small difference; 5: exact match/no color difference. Occlusal cavities of different sizes were prepared in teeth-like resin-composite models with shades A1, A2, A3, A3.5, and A4. The cavities were filled with the universal shade composite, and the same scale was used to score the color-matching between the fillings and the composite models. BE was calculated as the difference between the mean visual scores in models and those of the discs. Extracted teeth with two different class I cavity sizes, as well as ten patients' lower posterior molars with occlusal caries, were prepared, filled with the universal shade composite, and assessed similarly. Results. In models, the universal shade composite showed close matching in the different cavity sizes and surrounding shades (4 ≤ VS < 5) (BE = 0.6-2.9 in small cavities and 0.5-2.8 in large cavities). In extracted teeth, there was good-to-close color-matching (VS = 3.7-4.4, BE = 2.5-3.2 in small cavities; VS = 3-3.5, BE = 1.8-2.3 in large cavities). In patients' molars, the universal shade composite showed good matching (VS = 3-3.3, BE = -0.9-2.1). Conclusions. Color-matching of the universal shade resin-composite was satisfactory rather than perfect in patients' teeth.
Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method.
Li, Haisen S; Chetty, Indrin J; Solberg, Timothy D
2008-05-01
The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method ("average-based convolution"), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) was used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared against the planned dose evaluated by using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (> 30 fractions) regimen, the discrepancy in total dose due to interplay effect was negligible.
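The PDF-convolution step can be illustrated in 1-D: the static dose profile of a segment is convolved with a normalized motion PDF, which blurs the profile while conserving integral dose. The profile and PDF values below are illustrative numbers, not clinical data, and the per-segment method simply uses a different (segment-specific) PDF in the same operation.

```python
import numpy as np

def blur_dose(static_dose, motion_pdf):
    """Convolve a 1-D static dose profile with a normalized motion PDF,
    as in PDF-based motion blurring of a delivered segment dose."""
    pdf = np.asarray(motion_pdf, dtype=float)
    pdf = pdf / pdf.sum()                    # a PDF must integrate to 1
    return np.convolve(static_dose, pdf, mode="full")

# A flat "segment" dose and a slightly skewed motion PDF (illustrative)
dose = np.zeros(50)
dose[20:30] = 2.0
pdf = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
blurred = blur_dose(dose, pdf)
```

In the average-based method, `pdf` is the motion histogram of the whole fraction; in the segment-based method, each segment's dose is blurred with the PDF of motion observed during that segment only, which is what captures the interplay effect.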
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cates, J; Drzymala, R
2015-06-15
Purpose: The purpose of this study was to develop and use a novel phantom to evaluate the accuracy and usefulness of the Leksell GammaPlan convolution-based dose calculation algorithm compared with the current TMR10 algorithm. Methods: A novel phantom was designed to fit the Leksell Gamma Knife G Frame and to accommodate various materials in the form of one-inch-diameter cylindrical plugs. The plugs were split axially to allow EBT2 film placement. Film measurements were made during two experiments. The first used plans generated on a homogeneous acrylic phantom setup with the TMR10 algorithm, with various materials inserted into the phantom during film irradiation to assess the effect on delivered dose of unplanned heterogeneities upstream in the beam path. The second experiment used plans made on CT scans of different heterogeneous setups, with one plan using the TMR10 dose calculation algorithm and the second using the convolution-based algorithm. Materials used to introduce heterogeneities included air, LDPE, polystyrene, Delrin, Teflon, and aluminum. Results: The data show that, as would be expected, heterogeneities in the beam path induce dose delivery errors when using the TMR10 algorithm, with the largest errors due to the heterogeneities whose electron densities differ most from that of water, i.e., air, Teflon, and aluminum. The convolution algorithm did account for the heterogeneous material and provided a more accurate predicted dose, in extreme cases a 7-12% improvement over the TMR10 algorithm; the convolution algorithm's expected dose was accurate to within 3% in all cases. Conclusion: This study shows that the convolution algorithm is an improvement over the TMR10 algorithm when heterogeneities are present. More work is needed to determine the heterogeneity size/volume limits within which this improvement holds, and in which clinical and/or research cases it is relevant.
Black hole entropy in massive Type IIA
NASA Astrophysics Data System (ADS)
Benini, Francesco; Khachatryan, Hrachya; Milan, Paolo
2018-02-01
We study the entropy of static dyonic BPS black holes in AdS4 in 4d N=2 gauged supergravities with vector and hyper multiplets, and how the entropy can be reproduced with a microscopic counting of states in the AdS/CFT dual field theory. We focus on the particular example of BPS black holes in AdS4 × S6 in massive Type IIA, whose dual three-dimensional boundary description is known and simple. To count the states in field theory we employ a supersymmetric topologically twisted index, which can be computed exactly with localization techniques. We find a perfect match at leading order.
NASA Technical Reports Server (NTRS)
Gadi, Jagannath; Yalamanchili, Raj; Shahid, Mohammad
1995-01-01
The need for high-efficiency components has grown significantly with the expanding role of fiber optic communications in various applications. Integrated optics is in a state of metamorphosis, and many problems await solutions; one of the main problems is the lack of a simple and efficient method of coupling single-mode fibers to thin-film devices for integrated optics. In this paper, optical coupling between a single-mode fiber and uniform and tapered thin-film waveguides is theoretically modeled and analyzed. A novel tapered structure presented in this paper is shown to produce a perfect match for power transfer.
NASA Astrophysics Data System (ADS)
Li, Xinyi; Bao, Jingfu; Huang, Yulin; Zhang, Benfeng; Omori, Tatsuya; Hashimoto, Ken-ya
2018-07-01
In this paper, we propose the use of the hierarchical cascading technique (HCT) for the finite element method (FEM) analysis of bulk acoustic wave (BAW) devices. First, the implementation of this technique is presented for the FEM analysis of BAW devices. It is shown that the traveling-wave excitation sources proposed by the authors are fully compatible with the HCT. Furthermore, a HCT-based absorbing mechanism is also proposed to replace the perfectly matched layer (PML). Finally, it is demonstrated how the technique is much more efficient in terms of memory consumption and execution time than the full FEM analysis.
Through the eyes of young sibling donors: the hematopoietic stem cell donation experience.
D'Auria, Jennifer P; Fitzgerald, Tania M; Presler, Cammie M; Kasow, Kimberly A
2015-01-01
This qualitative study used a grounded theory approach to explore how pediatric sibling donors of a successful hematopoietic stem cell transplantation conceptualized their donation experiences. Saving my sister's (or brother's) life describes the central phenomenon identified by this purposive sample of 8 sibling donors. Five themes captured their memories: being the perfect match, stepping up, worrying about the outcome, the waiting process, and sharing a special bond. Further research surrounding changes in relational issues will provide insight into inter-sibling support and the developmental course of the sibling relationship into adulthood when intensified by a health crisis. Copyright © 2015 Elsevier Inc. All rights reserved.
Efficient multi-mode to single-mode coupling in a photonic lantern.
Noordegraaf, Danny; Skovgaard, Peter M W; Nielsen, Martin D; Bland-Hawthorn, Joss
2009-02-02
We demonstrate the fabrication of a high performance multi-mode (MM) to single-mode (SM) splitter or "photonic lantern", first described by Leon-Saval et al. (2005). Our photonic lantern is a solid all-glass version, and we show experimentally that this device can be used to achieve efficient and reversible coupling between a MM fiber and a number of SM fibers, when perfectly matched launch conditions into the MM fiber are ensured. The fabricated photonic lantern has a coupling loss for a MM to SM tapered transition of only 0.32 dB which proves the feasibility of the technology.
Flow Cytometry and Solid Organ Transplantation: A Perfect Match
Maguire, Orla; Tario, Joseph D.; Shanahan, Thomas C.; Wallace, Paul K.; Minderman, Hans
2015-01-01
In the field of transplantation, flow cytometry serves a well-established role in pre-transplant crossmatching and monitoring immune reconstitution following hematopoietic stem cell transplantation. The capabilities of flow cytometers have continuously expanded and this combined with more detailed knowledge of the constituents of the immune system, their function and interaction and newly developed reagents to study these parameters have led to additional utility of flow cytometry-based analyses, particularly in the post-transplant setting. This review discusses the impact of flow cytometry on managing alloantigen reactions, monitoring opportunistic infections and graft rejection and gauging immunosuppression in the context of solid organ transplantation. PMID:25296232
Modeling of heavy-gas effects on airfoil flows
NASA Technical Reports Server (NTRS)
Drela, Mark
1992-01-01
Thermodynamic models were constructed for a calorically imperfect gas and for a non-ideal gas. These were incorporated into a quasi one dimensional flow solver to develop an understanding of the differences in flow behavior between the new models and the perfect gas model. The models were also incorporated into a two dimensional flow solver to investigate their effects on transonic airfoil flows. Specifically, the calculations simulated airfoil testing in a proposed high Reynolds number heavy gas test facility. The results indicate that the non-idealities caused significant differences in the flow field, but that matching of an appropriate non-dimensional parameter led to flows similar to those in air.
Selective attention in an insect visual neuron.
Wiederman, Steven D; O'Carroll, David C
2013-01-21
Animals need attention to focus on one target amid alternative distracters. Dragonflies, for example, capture flies in swarms comprising prey and conspecifics, a feat that requires neurons to select one moving target from competing alternatives. Diverse evidence, from functional imaging and physiology to psychophysics, highlights the importance of such "competitive selection" in attention for vertebrates. Analogous mechanisms have been proposed in artificial intelligence and even in invertebrates, yet direct neural correlates of attention are scarce from all animal groups. Here, we demonstrate responses from an identified dragonfly visual neuron that perfectly match a model for competitive selection within limits of neuronal variability (r² = 0.83). Responses to individual targets moving at different locations within the receptive field differ in both magnitude and time course. However, responses to two simultaneous targets exclusively track those for one target alone rather than any combination of the pair. Irrespective of target size, contrast, or separation, this neuron selects one target from the pair and perfectly preserves the response, regardless of whether the "winner" is the stronger stimulus if presented alone. This neuron is amenable to electrophysiological recordings, providing neuroscientists with a new model system for studying selective attention. Copyright © 2013 Elsevier Ltd. All rights reserved.
Repulsion-based model for contact angle saturation in electrowetting.
Ali, Hassan Abdelmoumen Abdellah; Mohamed, Hany Ahmed; Abdelgawad, Mohamed
2015-01-01
We introduce a new model for the contact angle saturation phenomenon in electrowetting-on-dielectric systems. This model attributes contact angle saturation to repulsion between trapped charges on the cap and base surfaces of the droplet in the vicinity of the three-phase contact line, which prevents these surfaces from converging during contact angle reduction. This repulsion-based saturation is similar to the repulsion between charges accumulated on the surfaces of conducting droplets that causes the well-known Coulombic fission and Taylor cone formation phenomena. In our model, both the droplet and the dielectric coating were treated as lossy dielectric media (i.e., having finite electrical conductivities and permittivities), contrary to the more common assumption of a perfectly conducting droplet and a perfectly insulating dielectric. We used theoretical analysis and numerical simulations to find the actual charge distribution on the droplet surface, calculate the repulsion energy, and minimize the energy of the total system as a function of droplet contact angle. The resulting saturation curves were in good agreement with previously reported experimental results. We used the proposed model to predict the effect of changing liquid properties, such as electrical conductivity, and system parameters, such as the thickness of the dielectric layer, on the saturation angle; these predictions also matched experimental results.
Experimental characterization of Fresnel-Köhler concentrators
NASA Astrophysics Data System (ADS)
Zamora, Pablo; Benítez, Pablo; Mohedano, Rubén; Cvetković, Aleksandra; Vilaplana, Juan; Li, Yang; Hernández, Maikel; Chaves, Julio; Miñano, Juan C.
2012-01-01
Most cost-effective concentrated photovoltaics (CPV) systems are based on an optical train comprising two stages, the first being a Fresnel lens. Among them, the Fresnel-Köhler (FK) concentrator stands out for both performance and practical reasons. We describe the experimental measurement procedure for FK concentrator modules. This procedure includes three main types of measurements: electrical efficiency, acceptance angle, and irradiance uniformity at the solar cell plane. We have collected here the performance features of two different FK prototypes (covering different f-numbers, concentration ratios, and cell sizes). The electrical efficiencies measured in both prototypes are high and fit well with the models, achieving values up to 32.7% (temperature corrected, and with no antireflective coating on SOE or POE surfaces) in the best case. The measured angular transmission curves show large acceptance angles, again perfectly matching the expected values [measured concentration-acceptance product (CAP) values over 0.56]. The irradiance pattern on the cell (obtained with a digital camera) shows an almost perfectly uniform distribution, as predicted by raytrace simulations. All these excellent on-sun results confirm the FK concentrator as a potentially cost-effective solution for the CPV market.
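The concentration-acceptance product quoted above combines the geometric concentration Cg and the acceptance half-angle α as CAP = √Cg · sin α. The example values below are illustrative only, chosen to land above the CAP > 0.56 reported for the measured prototypes; they are not the prototypes' actual parameters.

```python
import math

def cap(geometric_concentration, acceptance_half_angle_deg):
    """Concentration-acceptance product: CAP = sqrt(Cg) * sin(alpha)."""
    return math.sqrt(geometric_concentration) * math.sin(
        math.radians(acceptance_half_angle_deg))

# e.g. a hypothetical 1024x concentrator with a ±1.05 degree acceptance angle
example = cap(1024, 1.05)
```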
VLSI single-chip (255,223) Reed-Solomon encoder with interleaver
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor)
1990-01-01
The invention relates to a concatenated Reed-Solomon/convolutional encoding system consisting of a Reed-Solomon outer code and a convolutional inner code for downlink telemetry in space missions, and more particularly to a Reed-Solomon encoder with programmable interleaving of the information symbols and code correction symbols to combat error bursts in the Viterbi decoder.
USDA-ARS?s Scientific Manuscript database
It is challenging to achieve rapid and accurate processing of large amounts of hyperspectral image data. This research was aimed to develop a novel classification method by employing deep feature representation with the stacked sparse auto-encoder (SSAE) and the SSAE combined with convolutional neur...
A Real-Time Convolution Algorithm and Architecture with Applications in SAR Processing
1993-10-01
multidimensional formulation of the DFT and convolution. IEEE-ASSP, ASSP-25(3):239-242, June 1977. [6] P. Hoogenboom et al. Definition study PHARUS: final...algorithms and the role of the tensor product. IEEE-ASSP, ASSP-40(12):2921-2930, December 1992. [8] P. Hoogenboom, P. Snoeij, P.J. Koomen, and H
Two-level convolution formula for nuclear structure function
NASA Astrophysics Data System (ADS)
Ma, Boqiang
1990-05-01
A two-level convolution formula for the nuclear structure function is derived by considering the nucleus as a composite system of baryons and mesons, which are in turn composite systems of quarks and gluons. The results show that the European Muon Collaboration (EMC) effect cannot be explained by nuclear effects such as nucleon Fermi motion and nuclear binding contributions.
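A two-level convolution of this kind can be written schematically as follows; the notation is a generic sketch of the nested-convolution structure, not necessarily the paper's own symbols or limits.

```latex
% Level 1: nucleus A as a composite of baryon-meson constituents c,
% with f_{c/A}(y) the momentum-fraction distribution of c in A:
F_2^{A}(x) = \sum_{c} \int_{x}^{A} \mathrm{d}y \; f_{c/A}(y)\, F_2^{c}\!\left(\frac{x}{y}\right)
% Level 2: each constituent c as a composite of quarks and gluons,
% so F_2^{c} is itself a convolution over parton momentum fractions:
F_2^{c}(z) = \sum_{q} \int_{z}^{1} \mathrm{d}\xi \; f_{q/c}(\xi)\, F_2^{q}\!\left(\frac{z}{\xi}\right)
```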
DSN telemetry system performance with convolutionally coded data
NASA Technical Reports Server (NTRS)
Mulhall, B. D. L.; Benjauthrit, B.; Greenhall, C. A.; Kuma, D. M.; Lam, J. K.; Wong, J. S.; Urech, J.; Vit, L. D.
1975-01-01
The results obtained to date and the plans for future experiments for the DSN telemetry system were presented. The performance of the DSN telemetry system in decoding convolutionally coded data by both sequential and maximum likelihood techniques is being determined by testing at various deep space stations. The evaluation of performance models is also an objective of this activity.
Two-dimensional convolute integers for analytical instrumentation
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1982-01-01
As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical to their one-dimensional counterparts: as a weighted nearest-neighbor moving average with zero phase shift, using convolute integer (universal number) weighting coefficients.
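The weighted nearest-neighbor moving average described above can be sketched as follows. The 3x3 integer kernel is an illustrative low-pass example in the Savitzky-Golay spirit, not a coefficient set taken from the paper.

```python
import numpy as np

# Integer weighting coefficients with a common normalizer, so the filter can
# be applied in pure integer arithmetic followed by one division.
weights = np.array([[1, 2, 1],
                    [2, 4, 2],
                    [1, 2, 1]])
norm = weights.sum()                     # 16

def smooth2d(data):
    """Zero-phase 3x3 weighted moving average (interior points only)."""
    out = data.astype(float).copy()
    for i in range(1, data.shape[0] - 1):
        for j in range(1, data.shape[1] - 1):
            window = data[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = (window * weights).sum() / norm
    return out

# A constant field passes through unchanged, confirming unit DC gain and the
# zero phase shift of the symmetric kernel.
flat = np.full((5, 5), 7.0)
smoothed = smooth2d(flat)
```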
A convolutional neural network neutrino event classifier
Aurisano, A.; Radovic, A.; Rocco, D.; ...
2016-09-01
Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.
Airplane detection in remote sensing images using convolutional neural networks
NASA Astrophysics Data System (ADS)
Ouyang, Chao; Chen, Zhong; Zhang, Feng; Zhang, Yifei
2018-03-01
Airplane detection in remote sensing images remains a challenging problem and has attracted great interest from researchers. In this paper, we propose an effective method to detect airplanes in remote sensing images using convolutional neural networks. Deep learning methods show greater advantages than traditional methods in target detection with the rise of deep neural networks, and we explain why this is the case. To improve airplane detection performance, we combine a region proposal algorithm with convolutional neural networks. In the training phase, we divide the background into multiple classes rather than a single class, which reduces false alarms. Our experimental results show that the proposed method is effective and robust in detecting airplanes.
Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber
Acciarri, R.; Adams, C.; An, R.; ...
2017-03-14
Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.
Video-based convolutional neural networks for activity recognition from robot-centric videos
NASA Astrophysics Data System (ADS)
Ryoo, M. S.; Matthies, Larry
2016-05-01
In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.
Gas Classification Using Deep Convolutional Neural Networks.
Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin
2018-01-08
In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. The proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers; a pooling layer; and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method provides higher classification accuracy than comparable Support Vector Machine (SVM) and Multiple Layer Perceptron (MLP) methods.
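The stated depth can be sanity-checked directly from the abstract's breakdown: six convolutional blocks of six layers each, plus one pooling layer and one fully-connected layer, account for the "up to 38 layers". No other architectural details are assumed here.

```python
# Layer count implied by the abstract's description of GasNet.
conv_blocks = 6
layers_per_block = 6
pooling_layers = 1
fully_connected_layers = 1
total_layers = (conv_blocks * layers_per_block
                + pooling_layers + fully_connected_layers)
print(total_layers)  # 38
```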
Applications of deep convolutional neural networks to digitized natural history collections.
Schuettpelz, Eric; Frandsen, Paul B; Dikow, Rebecca B; Brown, Abel; Orli, Sylvia; Peters, Melinda; Metallo, Adam; Funk, Vicki A; Dorr, Laurence J
2017-01-01
Natural history collections contain data that are critical for many scientific endeavors. Recent efforts in mass digitization are generating large datasets from these collections that can provide unprecedented insight. Here, we present examples of how deep convolutional neural networks can be applied in analyses of imaged herbarium specimens. We first demonstrate that a convolutional neural network can detect mercury-stained specimens across a collection with 90% accuracy. We then show that such a network can correctly distinguish two morphologically similar plant families 96% of the time. Discarding the most challenging specimen images increases accuracy to 94% and 99%, respectively. These results highlight the importance of mass digitization and deep learning approaches and reveal how they can together deliver powerful new investigative tools.
Clinicopathologic correlations in Alibert-type mycosis fungoides.
Eng, A M; Blekys, I; Worobec, S M
1981-06-01
Five cases of mycosis fungoides of the Alibert type were studied by taking multiple biopsy specimens at different stages of the disease. Large, hyperchromatic, slightly irregular mononuclear cells were the most frequent cells. Ultrastructurally, these cells were only slightly convoluted, had prominent heterochromatin banding at the nuclear membrane, and unremarkable cytoplasmic organelles. Highly convoluted cerebriform nucleated cells were few. Large, regular, vesicular histiocytes were prominent in the early stages; ultrastructurally, these cells showed evenly distributed euchromatin. Epidermotropism was equally as important as Pautrier's abscesses as a hallmark of the disease. Stereologic techniques comparing the infiltrate with regard to size and convolution of cells in all stages of mycosis fungoides with infiltrates seen in a variety of benign dermatoses showed no statistically significant differences.
Deep Learning with Hierarchical Convolutional Factor Analysis
Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence
2013-01-01
Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature.
Time Domain Version of the Uniform Geometrical Theory of Diffraction
NASA Astrophysics Data System (ADS)
Rousseau, Paul R.
1995-01-01
A time domain (TD) version of the uniform geometrical theory of diffraction which is referred to as the TD-UTD is developed to analyze the transient electromagnetic scattering from perfectly conducting objects that are large in terms of pulse width. In particular, the scattering from a perfectly conducting arbitrary curved wedge and an arbitrary smooth convex surface are treated in detail. Note that the canonical geometries of a circular cylinder and a sphere are special cases of the arbitrary smooth convex surface. These TD -UTD solutions are obtained in the form of relatively simple analytical expressions valid for early to intermediate times. The geometries treated here can be used to build up a transient solution to more complex radiating objects via space-time localization, in exactly the same way as is done by invoking spatial localization properties in the frequency domain UTD. The TD-UTD provides the response due to an excitation of a general astigmatic impulsive wavefront with any polarization. This generalized impulse response may then be convolved with other excitation time pulses, to find even more general solutions due to other excitation pulses. Since the TD-UTD uses the same rays as the frequency domain UTD, it provides a simple picture for transient radiation or scattering and is therefore just as physically appealing as the frequency domain UTD. The formulation of an analytic time transform (ATT), which produces an analytic time signal given a frequency response function, is given here. This ATT is used because it provides a very efficient method of inverting the asymptotic high frequency UTD representations to obtain the corresponding TD-UTD expressions even when there are special UTD transition functions which may not be well behaved at the low frequencies; also, using the ATT avoids the difficulties associated with the inversion of UTD ray fields that traverse line or smooth caustics. 
Another useful aspect of the ATT is the ability to perform an efficient convolution with a broad class of excitation pulse functions, where the frequency response of the excitation function must be expressed as a summation of complex exponential functions.
NASA Technical Reports Server (NTRS)
Desai, S. D.; Yuan, D. -N.
2006-01-01
A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.
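The Munk-Cartwright convolution idea above can be sketched as a discrete convolution: the tide response at time t is a weighted sum of past, present, and future values of the tide-generating potential. The three lag weights below are placeholders, not fitted admittance values.

```python
import numpy as np

# Admittance weights for (future, present, past) samples of the forcing;
# illustrative values only, normalized to unit DC gain.
weights = np.array([0.2, 0.6, 0.2])

def tide_response(generating_potential):
    """Discrete convolution of the forcing with the admittance weights."""
    return np.convolve(generating_potential, weights, mode="same")

# Crude M2-like forcing with a 12.42-hour period, sampled hourly.
t = np.arange(48)
g = np.cos(2 * np.pi * t / 12.42)
h = tide_response(g)
```

Because the weights sum to one, a constant forcing passes through unchanged (away from the ends of the record), which is the smooth-unit-response assumption the formalism builds on.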
Lloyd, Janice; Budge, Claire; La Grow, Steve; Stafford, Kevin
2016-01-01
Matching a person who is blind or visually impaired with a guide dog is a process of finding the most suitable guide dog available for that individual. Not all guide dog partnerships are successful, and the consequences of an unsuccessful partnership may result in reduced mobility and quality of life for the handler (owner), and are costly in time and resources for guide dog training establishments. This study examined 50 people's partnerships with one or more dogs (118 pairings) to ascertain the outcome of the relationship. Forty-three of the 118 dogs were returned to the guide dog training establishment before reaching retirement age, with the majority (n = 40) being categorized as having dog-related issues. Most (n = 26) of these dogs' issues were classified as being behavioral in character, including work-related and non-work-related behavior, and 14 were due to physical causes (mainly poor health). Three dogs were returned due to matters relating to the handlers' behavior. More second dogs were returned than the handlers' first or third dogs, and dogs that had been previously used as a guide could be rematched successfully. Defining matching success is not clear-cut. Not all dogs that were returned were considered by their handlers to have been mismatched, and not all dogs retained until retirement were thought to have been good matches, suggesting that some handlers were retaining what they considered to be a poorly matched dog. Almost all the handlers who regarded a dog as being mismatched conceded that some aspects of the match were good. For example, a dog deemed mismatched for poor working behavior may have shown good home and/or other social behaviors. The same principle was true for successful matches, where few handlers claimed to have had a perfect dog. It is hoped that these results may help the guide dog industry identify important aspects of the matching process, and/or be used to identify areas where a matching problem exists.
Imaging of particles with 3D full parallax mode with two-color digital off-axis holography
NASA Astrophysics Data System (ADS)
Kara-Mohammed, Soumaya; Bouamama, Larbi; Picart, Pascal
2018-05-01
This paper proposes an approach based on two orthogonal views and two wavelengths for recording off-axis two-color holograms. The approach makes it possible to discriminate particles aligned along the sight-view axis. The experimental set-up is based on a double Mach-Zehnder architecture in which two different wavelengths provide the reference and object beams. The digital processing used to obtain images of the particles is based on convolution, so that the reconstructed images have no wavelength dependence. The spatial bandwidth of the angular spectrum transfer function is adapted in order to increase the maximum reconstruction distance, which is generally limited to a few tens of millimeters. In order to obtain the images of particles in the 3D volume, a calibration process is proposed, based on the modulation theorem, to perfectly superimpose the two views in a common XYZ coordinate system. The experimental set-up is applied to two-color hologram recording of moving, non-calibrated opaque particles with an average diameter of about 150 μm. After processing the two-color holograms with image reconstruction and view calibration, the location of particles in the 3D volume can be obtained. In particular, the ambiguity about close particles, which generates hidden particles in a single-view scheme, can be removed to determine the exact number of particles in the region of interest.
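A minimal angular-spectrum propagation sketch, in the spirit of the convolution-based reconstruction discussed above, is shown below. The wavelength, pixel pitch, and grid size are illustrative placeholders, not the paper's parameters, and the paper's bandwidth adaptation is not reproduced here.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a square complex field over distance z via the angular spectrum."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                    # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    # Evanescent components (arg < 0) are clamped to zero propagation phase.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)                         # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
hologram = rng.standard_normal((64, 64)) + 0j       # stand-in hologram field
# Propagating by z = 0 must return the input unchanged (H = 1 everywhere).
back = angular_spectrum_propagate(hologram, 633e-9, 5e-6, 0.0)
```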
The VLSI design of an error-trellis syndrome decoder for certain convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.
1986-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
System Design for FEC in Aeronautical Telemetry
2012-03-12
rate punctured convolutional codes for soft decision Viterbi...below follows that given in [8]. The final coding rate of exactly 2/3 is achieved by puncturing the rate-1/2 code as follows. We begin with the buffer c1...concatenated convolutional code (SCCC). The contributions of this paper are at the system-design level. One major contribution is to design an SCCC code
Convolutional coding results for the MVM '73 X-band telemetry experiment
NASA Technical Reports Server (NTRS)
Layland, J. W.
1978-01-01
Results of simulation of several short-constraint-length convolutional codes using a noisy symbol stream obtained via the turnaround ranging channels of the MVM'73 spacecraft are presented. First operational use of this coding technique is on the Voyager mission. The relative performance of these codes in this environment is as previously predicted from computer-based simulations.