Science.gov

Sample records for convolution superposition calculations

  1. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    SciTech Connect

    Jacques, Robert; Wong, John; Taylor, Russell; McNutt, Todd

    2011-01-15

    Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high-resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material-centric approach, which increased performance by over 80%. Superposition performance was improved by ≈24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation.
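    The material-centric multispectral TERMA pass described above reduces, in essence, to attenuating each spectral bin independently along every ray while depositing energy per unit mass. The following NumPy sketch illustrates that inner loop; the three-bin spectrum, attenuation coefficients, and step size are illustrative placeholders, not the paper's commissioned data.

      import numpy as np

      # Hypothetical three-bin spectrum: energies in MeV, water attenuation in 1/cm.
      energies = np.array([0.5, 2.0, 6.0])
      weights  = np.array([0.3, 0.5, 0.2])          # relative fluence per bin
      mu       = np.array([0.097, 0.049, 0.028])    # approximate values for water

      def terma_along_ray(density, step_cm=0.2):
          """Accumulate polyenergetic TERMA along one ray, material-centric style:
          each spectral bin is attenuated with a density-scaled coefficient."""
          fluence = weights.copy()
          terma = np.empty_like(density)
          for i, rho in enumerate(density):
              # TERMA per unit mass: sum over bins of fluence * E * (mu/rho)_water
              terma[i] = np.sum(fluence * energies * mu)
              # radiological (density-scaled) attenuation across the step
              fluence = fluence * np.exp(-mu * rho * step_cm)
          return terma

      density = np.ones(150)        # water; try density[50:80] = 0.25 for a lung slab
      print(terma_along_ray(density)[:5])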

  2. Performance Evaluation of Algorithms in Lung IMRT: A comparison of Monte Carlo, Pencil Beam, Superposition, Fast Superposition and Convolution Algorithms

    PubMed Central

    Verma, T.; Painuly, N.K.; Mishra, S.P.; Shajahan, M.; Singh, N.; Bhatt, M.L.B.; Jamal, N.; Pant, M.C.

    2016-01-01

    Background: Inclusion of inhomogeneity corrections in intensity-modulated small fields always makes conformal irradiation of lung tumors very complicated for accurate dose delivery. Objective: In the present study, the performance of five algorithms, viz. Monte Carlo, Pencil Beam, Convolution, Fast Superposition and Superposition, was evaluated in lung cancer intensity-modulated radiotherapy planning. Materials and Methods: Treatment plans for ten lung cancer patients previously planned with the Monte Carlo algorithm were re-planned using the same treatment planning indices (gantry angle, rank, power, etc.) in the other four algorithms. Results: The values of the radiotherapy planning parameters were recorded for all ten patients: for the target, mean dose, volume of the 95% isodose line, conformity index and homogeneity index; maximum dose, mean dose and % volume receiving 20 Gy or more for the contralateral lung; % volume receiving 30 Gy or more; % volume receiving 25 Gy or more and mean dose received by the heart; % volume receiving 35 Gy or more and % volume receiving 50 Gy or more; mean dose to the esophagus and % volume receiving 45 Gy or more; maximum dose received by the spinal cord; and total monitor units and volume of the 50% isodose line. Performance of the different algorithms was also evaluated statistically. Conclusion: The MC and PB algorithms were found better as far as tumor coverage, dose distribution homogeneity in the Planning Target Volume and minimal dose to organs at risk are concerned. The Superposition algorithm was found to be better than Convolution and Fast Superposition. In the case of centrally located tumors, it is recommended to use Monte Carlo algorithms for the optimal use of radiotherapy. PMID:27853720

  3. Ultrafast convolution/superposition using tabulated and exponential kernels on GPU

    SciTech Connect

    Chen Quan; Chen Mingli; Lu Weiguo

    2011-03-15

    Purpose: Collapsed-cone convolution/superposition (CCCS) dose calculation is the workhorse for IMRT dose calculation. The authors present a novel algorithm for computing CCCS dose on the modern graphics processing unit (GPU). Methods: The GPU algorithm includes a novel TERMA calculation that has no write conflicts and has linear computational complexity. The CCCS algorithm uses either tabulated or exponential cumulative-cumulative kernels (CCKs) as reported in the literature. The authors demonstrate that the use of exponential kernels can reduce the computational complexity by one dimension while achieving excellent accuracy. Special attention is paid to the unique architecture of the GPU, especially the memory access pattern, which increases performance by more than tenfold. Results: The tabulated kernel implementation on the GPU is two to three times faster than other GPU implementations reported in the literature. The implementation of CCCS showed significant speedup on the GPU over a single-core CPU. With tabulated CCKs, speedups as high as 70 are observed; with exponential CCKs, speedups as high as 90 are observed. Conclusions: Overall, the GPU algorithm using exponential CCKs is 1000-3000 times faster than a highly optimized single-threaded CPU implementation using tabulated CCKs, while the dose differences are within 0.5% and 0.5 mm. This ultrafast CCCS algorithm will allow many time-sensitive applications to use accurate dose calculation.
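    The advantage of exponential kernels is that the energy still to be deposited along a collapsed-cone ray obeys a one-step recursion, so each ray is processed in linear time instead of requiring a cumulative-kernel lookup per voxel pair. A minimal single-ray sketch, with hypothetical kernel parameters A and a:

      import numpy as np

      def cone_ray_dose(terma, A=0.05, a=0.7, step_cm=0.2):
          """Dose along one collapsed-cone ray for an exponential kernel
          k(r) = A*exp(-a*r).  The energy in transit obeys a one-step
          recursion, so the whole ray costs O(N) rather than O(N^2)."""
          decay = np.exp(-a * step_cm)
          dose = np.zeros_like(terma)
          carried = 0.0                        # energy in transit along the ray
          for i, t in enumerate(terma):
              carried += A * t                 # energy released here into this cone
              deposited = carried * (1.0 - decay)
              dose[i] = deposited
              carried -= deposited
          return dose

      terma = np.exp(-0.05 * np.arange(64))    # toy TERMA fall-off
      print(np.round(cone_ray_dose(terma)[:5], 5))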

  4. Fluence-convolution broad-beam (FCBB) dose calculation.

    PubMed

    Lu, Weiguo; Chen, Mingli

    2010-12-07

    IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with a small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into the central axis (CAX) component and the lateral spread function (LSF) and taking the beam's eye view (BEV), we established a non-voxel- and non-beamlet-based dose calculation formula. Both the LSF and CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with the LSF followed by ray tracing based on the CAX lookup table with radiological distance and divergence correction, resulting in a complexity of O(N³) both spatially and temporally. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited for calculating the iteration dose during IMRT optimization.
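    The FCBB pipeline above has two cheap ingredients: a single 2D convolution of the fluence map with the LSF in the BEV, and a multiplicative CAX lookup along depth. A minimal sketch with placeholder LSF and CAX shapes (the paper commissions both against CCCS and also applies a divergence correction, omitted here):

      import numpy as np
      from scipy.signal import fftconvolve

      # Placeholder commissioning data (the paper derives LSF and CAX from CCCS).
      depths = np.linspace(0.0, 30.0, 121)                           # cm
      cax = np.exp(-0.045 * depths) * (1.0 - np.exp(-depths / 1.5))  # toy CAX table

      def lsf_kernel(sigma_cm=0.4, px_cm=0.1, half=25):
          x = np.arange(-half, half + 1) * px_cm
          X, Y = np.meshgrid(x, x)
          k = np.exp(-(X**2 + Y**2) / (2.0 * sigma_cm**2))
          return k / k.sum()

      def fcbb_dose(fluence_bev, rad_depth):
          """fluence_bev: 2D fluence map in the BEV; rad_depth: 3D radiological
          depth array indexed (z, y, x).  Divergence correction is omitted."""
          blurred = fftconvolve(fluence_bev, lsf_kernel(), mode="same")
          return blurred[None, :, :] * np.interp(rad_depth, depths, cax)

      fluence = np.zeros((101, 101)); fluence[40:61, 40:61] = 1.0    # square field
      rad_depth = np.broadcast_to(np.linspace(0, 20, 50)[:, None, None], (50, 101, 101))
      print(fcbb_dose(fluence, rad_depth).shape)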

  5. A convolution/superposition method using primary and scatter dose kernels formed for energy bins of X-ray spectra reconstructed as a function of off-axis distance: a theoretical study on 10-MV X-ray dose calculations in thorax-like phantoms.

    PubMed

    Iwasaki, Akira; Kimura, Shigenobu; Sutoh, Kohji; Kamimura, Kazuo; Sasamori, Makoto; Komai, Fumio; Seino, Morio; Terashima, Singo; Kubota, Mamoru; Hirota, Junichi; Hosokawa, Yoichiro

    2011-07-01

    A convolution/superposition method is proposed for use with primary and scatter dose kernels formed for energy bins of X-ray spectra reconstructed as a function of off-axis distance. It should be noted that the number of energy bins is usually about ten, and that the reconstructed X-ray spectra can reasonably be applied to media with a wide range of effective Z numbers, ranging from water to lead. The study was carried out for 10-MV X-ray doses in water and thorax-like phantoms with the use of open-jaw-collimated fields. The dose calculations were made separately for primary, scatter, and electron contamination dose components, for which we used two extended radiation sources: one on the X-ray target and the other on the flattening filter. To calculate the in-air beam intensities at points on the isocenter plane for a given jaw-collimated field, we introduce an in-air output factor (OPF_in-air) expressed as the product of the off-center jaw-collimator scatter factor (off-center S_c), the source off-center ratio factor (OCR_source), and the jaw-collimator radiation reflection factor (RRF_c). For more accurate dose calculations, we introduce an electron spread fluctuation factor (F_fwd) to take into account the angular and spatial spread fluctuations of electrons traveling through different media.

  6. An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm

    NASA Astrophysics Data System (ADS)

    Jacques, Robert; McNutt, Todd

    2014-03-01

    Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, electron disequilibrium occurs near heterogeneities, leading to a faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte Carlo accuracy benchmark, 23 similar accuracy benchmarks, and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near-Monte Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing the average mean error from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e., < 0.1 g/cm³) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU-based algorithm, reaching the accuracy levels of Monte Carlo based methods with performance of a few tenths of a second per beam. Acknowledgement: Funding for this research was provided by NSF Cooperative Agreement EEC9731748, Elekta/IMPAC Medical Systems, Inc., and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
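    The heart of HCS is the effective density: a position- and direction-sensitive recursive filter applied to the patient density. A one-dimensional sketch of such a first-order recursive filter, with a hypothetical smoothing constant alpha (the paper's filter is multivariate and tuned against Monte Carlo):

      import numpy as np

      def effective_density(rho, alpha=0.6):
          """First-order recursive filter applied along the transport direction,
          forward and backward; alpha is a hypothetical smoothing constant."""
          fwd = np.empty_like(rho)
          acc = rho[0]
          for i, r in enumerate(rho):
              acc = alpha * r + (1.0 - alpha) * acc
              fwd[i] = acc
          bwd = np.empty_like(rho)
          acc = rho[-1]
          for i in range(len(rho) - 1, -1, -1):
              acc = alpha * rho[i] + (1.0 - alpha) * acc
              bwd[i] = acc
          return 0.5 * (fwd + bwd)

      rho = np.ones(60)
      rho[20:40] = 0.25                        # water | lung | water slab
      print(np.round(effective_density(rho)[18:24], 3))

    Near the water-lung interface the filtered density rises and falls gradually rather than jumping, which is what lets the dose rebuild smoothly instead of following plain density scaling.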

  7. FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory

    NASA Astrophysics Data System (ADS)

    McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A.

    2016-09-01

    We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
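    The core trick is the power-law decomposition, which is an FFT in log k: sampling P(k) on a logarithmic grid and Fourier transforming P(k)·k^(-ν) yields coefficients c_m such that P(k) ≈ Σ_m c_m k^(ν+iη_m). A minimal NumPy sketch of this decomposition on a toy spectrum (FAST-PT itself then evaluates the mode-coupling integrals of these power laws analytically):

      import numpy as np

      N = 256
      k = np.logspace(-3, 1, N)                  # log-spaced wavenumbers
      P = k / (1.0 + (k / 0.2)**2.5)             # toy spectrum, not a real cosmology

      nu = -2.0                                  # power-law bias exponent (tunable)
      dlnk = np.log(k[1] / k[0])
      c = np.fft.fft(P * k**(-nu)) / N           # power-law coefficients c_m
      m = np.fft.fftfreq(N) * N                  # integer mode numbers
      eta = 2.0 * np.pi * m / (N * dlnk)         # P(k) ≈ sum_m c_m k^nu e^{i eta_m ln(k/k0)}

      # Reconstruct and check the superposition of power laws.
      lnk = np.log(k / k[0])
      P_rec = np.real(sum(c[j] * np.exp(1j * eta[j] * lnk) for j in range(N)) * k**nu)
      print("max relative residual:", np.max(np.abs(P_rec / P - 1.0)))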

  8. Calculating Interaction Energies Using First Principle Theories: Consideration of Basis Set Superposition Error and Fragment Relaxation

    ERIC Educational Resources Information Center

    Bowen, J. Philip; Sorensen, Jennifer B.; Kirschner, Karl N.

    2007-01-01

    The analysis explains the basis set superposition error (BSSE) and the fragment relaxation involved in calculating interaction energies using various first-principles theories. Correlating the interacting fragments and increasing the size of the basis set can help decrease the BSSE to a great extent.

  9. FAST-PT: Convolution integrals in cosmological perturbation theory calculator

    NASA Astrophysics Data System (ADS)

    McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A.

    2016-03-01

    FAST-PT calculates 1-loop corrections to the matter power spectrum in cosmology. The code utilizes Fourier methods combined with analytic expressions to reduce the computation time to scale as N log N, where N is the number of grid points in the input linear power spectrum. FAST-PT is extremely fast, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation.

  10. Collapsed cone convolution of radiant energy for photon dose calculation in heterogeneous media.

    PubMed

    Ahnesjö, A

    1989-01-01

    A method for photon beam dose calculations is described. The primary photon beam is raytraced through the patient, and the distribution of total radiant energy released into the patient is calculated. Polyenergetic energy deposition kernels are calculated from the spectrum of the beam, using a database of monoenergetic kernels. It is shown that the polyenergetic kernels can be analytically described with high precision by (A exp(-ar) + B exp(-br))/r², where A, a, B, and b depend on the angle with respect to the impinging photons and on the accelerating potential, and r is the radial distance. Numerical values of A, a, B, and b are derived and used to convolve energy deposition kernels with the total energy released per unit mass (TERMA) to yield dose distributions. The convolution is facilitated by the introduction of the collapsed cone approximation. In this approximation, all energy released into coaxial cones of equal solid angle, from volume elements on the cone axis, is rectilinearly transported, attenuated, and deposited in elements on the axis. Scaling of the kernels is implicitly done during the convolution procedure to fully account for inhomogeneities present in the irradiated volume. The number of computational operations needed to compute the dose with the method is proportional to the number of calculation points. The method is tested for five accelerating potentials (4, 6, 10, 15, and 24 MV) and applied to two geometries: one a stack of slabs of tissue media, and the other a mediastinum-like phantom of cork and water. In these geometries, the EGS4 Monte Carlo system has been used to generate reference dose distributions with which the dose computed with the collapsed cone convolution method is compared. Generally, the agreement between the methods is excellent. Deviations are observed in situations of lateral charged-particle disequilibrium in low-density media; even there, however, the result is superior to that of the generalized Batho method.
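    The analytic kernel form makes the collapsed-cone transport a pair of exponential relaxations per cone axis. A single-axis sketch with placeholder values of A, a, B, and b (the paper tabulates them per cone angle and accelerating potential), including the implicit density scaling of the kernels:

      import numpy as np

      def kernel_point(r, A=0.8, a=2.5, B=0.02, b=0.09):
          """Ahnesjo's analytic kernel form (A*exp(-a*r) + B*exp(-b*r)) / r**2.
          The parameter values here are placeholders, not fitted data."""
          return (A * np.exp(-a * r) + B * np.exp(-b * r)) / r**2

      def collapsed_cone_axis(terma, density, step=0.25, A=0.8, a=2.5, B=0.02, b=0.09):
          """Rectilinear transport of released energy along one cone axis; the
          density scaling of the kernels happens implicitly via rho-scaled steps."""
          dose = np.zeros_like(terma)
          prim = scat = 0.0
          for i, (t, rho) in enumerate(zip(terma, density)):
              prim += A * t                     # energy released into this cone
              scat += B * t
              dp = 1.0 - np.exp(-a * rho * step)
              ds = 1.0 - np.exp(-b * rho * step)
              dose[i] = prim * dp + scat * ds   # deposited in this axis element
              prim -= prim * dp
              scat -= scat * ds
          return dose

      print(kernel_point(np.array([0.5, 1.0, 2.0])))
      terma = np.exp(-0.05 * np.arange(80))
      density = np.ones(80)
      density[30:50] = 0.3                      # cork-like slab
      print(np.round(collapsed_cone_axis(terma, density)[:5], 4))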

  11. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning

    SciTech Connect

    Wu, Vincent W.C.; Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y.

    2013-07-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires a relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system and multigrid superposition (MGS) of the XiO treatment planning system are two commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography scans of 6 patients of each cancer type were used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) the soft tissue-bone boundary (Soft/Bone), (5) the soft tissue-air boundary (Soft/Air), and (6) the bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6% ± 2.1%) were significantly lower than those of AAA (3.7% ± 2.5%) over all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both the AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.

  12. Influence of the superposition approximation on calculated effective dose rates from galactic cosmic rays at aerospace-related altitudes

    NASA Astrophysics Data System (ADS)

    Copeland, Kyle

    2015-07-01

    The superposition approximation was commonly employed in atmospheric nuclear transport modeling until recent years and is incorporated into flight dose calculation codes such as CARI-6 and EPCARD. The useful altitude range for this approximation is investigated using Monte Carlo transport techniques. CARI-7A simulates atmospheric radiation transport of elements H-Fe using a database of precalculated galactic cosmic radiation showers calculated with MCNPX 2.7.0 and is employed here to investigate the influence of the superposition approximation on effective dose rates, relative to full nuclear transport of galactic cosmic ray primary ions. Superposition is found to produce results less than 10% different from nuclear transport at current commercial and business aviation altitudes while underestimating dose rates at higher altitudes. The underestimate sometimes exceeds 20% at approximately 23 km and exceeds 40% at 50 km. Thus, programs employing this approximation should not be used to estimate doses or dose rates for high-altitude portions of the commercial space and near-space manned flights that are expected to begin soon.
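    For orientation, the superposition approximation amounts to replacing a primary ion of charge Z and mass number A by Z free protons and (A - Z) free neutrons at the same energy per nucleon and summing their shower doses. A minimal sketch with toy dose-per-primary curves standing in for a precalculated shower database such as the one described above:

      def superposition_dose(Z, A, E_per_nucleon, proton_dose, neutron_dose):
          """Superposition approximation: treat an ion (Z, A) at E MeV/nucleon
          as Z free protons plus (A - Z) free neutrons at the same energy per
          nucleon, and sum their shower doses."""
          return Z * proton_dose(E_per_nucleon) + (A - Z) * neutron_dose(E_per_nucleon)

      # toy dose-per-primary curves, purely illustrative
      proton_dose = lambda E: 1.0e-5 * E**0.8
      neutron_dose = lambda E: 9.0e-6 * E**0.8
      print(superposition_dose(26, 56, 1000.0, proton_dose, neutron_dose))  # Fe at 1 GeV/n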

  13. SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU

    SciTech Connect

    Moriya, S; Sato, M; Tachibana, H

    2015-06-15

    Purpose: Calculation time is the trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on graphics processing units (GPUs). Methods: The calculation was performed on AMD Dual FirePro D700 graphics hardware, and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation process was separated into TERMA and KERMA steps, and the dose deposited at each coordinate (x, y, z) was determined in the process. In the dose calculation running on the central processing unit (CPU), an Intel Xeon E5, the calculation loops were performed over all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and multi-threaded computation was done. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150³ voxels (2 mm calculation grid), and the calculation speed on the GPU relative to that on the CPU and the accuracy of the PDD were compared. Results: The calculation times for the GPU and the CPU were 3.3 s and 4.4 hours, respectively; the GPU was 4800 times faster than the CPU. The PDD curve for the GPU perfectly matched that for the CPU. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in time and may be more accurate in inhomogeneous regions. Intensity-modulated arc therapy needs dose calculations for different gantry angles at many control points. Thus, it would be more practical for the kernel to use a coarse-spacing technique if the calculation is faster while keeping accuracy similar to a current treatment planning system.

  14. Iron-oxygen vacancy defect centers in PbTiO3: Newman superposition model analysis and density functional calculations

    NASA Astrophysics Data System (ADS)

    Meštrić, H.; Eichel, R.-A.; Kloss, T.; Dinse, K.-P.; Laubach, So.; Laubach, St.; Schmidt, P. C.; Schönau, K. A.; Knapp, M.; Ehrenberg, H.

    2005-04-01

    The Fe3+ center in ferroelectric PbTiO3 together with an oxygen vacancy forms a charged defect associate, oriented along the crystallographic c axis. Its microscopic structure has been analyzed in detail comparing results from a semiempirical Newman superposition model analysis based on fine-structure data and from calculations using density functional theory. Both methods give evidence for a substitution of Fe3+ for Ti4+ as an acceptor center. The position of the iron ion in the ferroelectric phase is found to be similar to the B site in the paraelectric phase. Partial charge compensation is locally provided by a directly coordinated oxygen vacancy. Using high-resolution synchrotron powder diffraction, it was verified that lead titanate remains tetragonal down to 12 K, exhibiting a c/a ratio of 1.0721.

  15. Compressed convolution

    NASA Astrophysics Data System (ADS)

    Elsner, Franz; Wandelt, Benjamin D.

    2014-01-01

    We introduce the concept of compressed convolution, a technique to convolve a given data set with a large number of non-orthogonal kernels. In typical applications our technique drastically reduces the effective number of computations. The new method is applicable to convolutions with symmetric and asymmetric kernels and can be easily controlled for an optimal trade-off between speed and accuracy. It is based on linear compression of the collection of kernels into a small number of coefficients in an optimal eigenbasis. The final result can then be decompressed in constant time for each desired convolved output. The method is fully general and suitable for a wide variety of problems. We give explicit examples in the context of simulation challenges for upcoming multi-kilo-detector cosmic microwave background (CMB) missions. For a CMB experiment with detectors with similar beam properties, we demonstrate that the algorithm can decrease the costs of beam convolution by two to three orders of magnitude with negligible loss of accuracy. Likewise, it has the potential to allow the reduction of disk space required to store signal simulations by a similar amount. Applications in other areas of astrophysics and beyond are optimal searches for a large number of templates in noisy data, e.g. from a parametrized family of gravitational wave templates; or calculating convolutions with highly overcomplete wavelet dictionaries, e.g. in methods designed to uncover sparse signal representations.
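    Because convolution is linear in the kernel, compressing the kernel collection with an SVD means the data need only be convolved once per retained eigenkernel; any individual kernel's output is then a short linear combination. A small NumPy/SciPy sketch with a synthetic family of similar Gaussian kernels:

      import numpy as np
      from scipy.signal import fftconvolve

      rng = np.random.default_rng(0)
      n_kernels, ksize = 200, 33
      x = np.linspace(-4, 4, ksize)

      # A family of similar, non-orthogonal kernels (e.g. beams with slightly
      # different centers and widths).
      centers = rng.normal(0.0, 0.2, n_kernels)
      widths = rng.normal(1.0, 0.05, n_kernels)
      kernels = np.stack([np.exp(-(x - c)**2 / (2 * s**2)) for c, s in zip(centers, widths)])

      # Compress the collection into a few eigenkernels.
      U, S, Vt = np.linalg.svd(kernels, full_matrices=False)
      r = 4                                     # retained components
      coeffs, basis = U[:, :r] * S[:r], Vt[:r]  # kernels ≈ coeffs @ basis

      data = rng.standard_normal(4096)
      conv_basis = np.stack([fftconvolve(data, b, mode="same") for b in basis])

      # Decompress: any kernel's convolution is a short linear combination.
      approx = coeffs[17] @ conv_basis
      exact = fftconvolve(data, kernels[17], mode="same")
      print(np.max(np.abs(approx - exact)) / np.max(np.abs(exact)))   # small residual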

  16. Geometrical correction for the inter- and intramolecular basis set superposition error in periodic density functional theory calculations.

    PubMed

    Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan

    2013-09-26

    We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.

  17. A novel algorithm for the calculation of physical and biological irradiation quantities in scanned ion beam therapy: the beamlet superposition approach

    NASA Astrophysics Data System (ADS)

    Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.

    2016-01-01

    The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for ion irradiation outcomes computations, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable for the evaluation of dose, RBE and LET distributions. The weight function for the beamlets superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has successfully been tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmarking plans in simple geometries and clinical plans are shown to demonstrate the model capabilities.

  18. A novel algorithm for the calculation of physical and biological irradiation quantities in scanned ion beam therapy: the beamlet superposition approach.

    PubMed

    Russo, G; Attili, A; Battistoni, G; Bertrand, D; Bourhaleb, F; Cappucci, F; Ciocca, M; Mairani, A; Milian, F M; Molinelli, S; Morone, M C; Muraro, S; Orts, T; Patera, V; Sala, P; Schmitt, E; Vivaldo, G; Marchetto, F

    2016-01-07

    The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for ion irradiation outcomes computations, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable for the evaluation of dose, RBE and LET distributions. The weight function for the beamlets superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has successfully been tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmarking plans in simple geometries and clinical plans are shown to demonstrate the model capabilities.

  19. Theoretical calculation on ICI reduction using digital coherent superposition of optical OFDM subcarrier pairs in the presence of laser phase noise.

    PubMed

    Yi, Xingwen; Xu, Bo; Zhang, Jing; Lin, Yun; Qiu, Kun

    2014-12-15

    Digital coherent superposition (DCS) of optical OFDM subcarrier pairs with Hermitian symmetry can reduce the inter-carrier-interference (ICI) noise resulting from phase noise. In this paper, we show two different implementations of DCS-OFDM that have the same performance in the presence of laser phase noise. We complete the theoretical calculation of the ICI reduction using a pure Wiener phase-noise model. By a Taylor expansion of the ICI, we show that the ICI power is cancelled to the second order by DCS. The fourth-order term is further derived and depends only on the ratio of the laser linewidth to the OFDM subcarrier symbol rate, which can greatly simplify system design. Finally, we verify our theoretical calculations in simulations and use the analytical results to predict system performance. DCS-OFDM is expected to be beneficial to certain optical fiber transmissions.
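    The scheme can be mimicked in a few lines: data are placed on subcarrier pairs (k, N-k) with Hermitian symmetry, and at the receiver the pair is coherently superposed, which cancels the phase-noise ICI to second order (the combined kernel involves cos(phi) rather than exp(j*phi)). A toy simulation under pure Wiener phase noise, with an assumed linewidth-to-symbol-rate ratio; the common phase term is treated as known since the simulator has access to phi:

      import numpy as np

      rng = np.random.default_rng(1)
      N = 256                          # subcarriers
      beta = 1e-3                      # assumed laser linewidth / subcarrier symbol rate
      sigma2 = 2.0 * np.pi * beta / N  # Wiener phase increment variance per sample

      e_plain = e_dcs = 0.0
      trials = 200
      k = np.arange(1, N // 2)         # paired subcarriers (k, N - k)
      for _ in range(trials):
          bits = rng.integers(0, 2, (2, k.size)) * 2 - 1
          X = np.zeros(N, dtype=complex)
          X[k] = bits[0] + 1j * bits[1]        # QPSK data
          X[N - k] = np.conj(X[k])             # Hermitian-symmetric partner

          phi = np.cumsum(rng.normal(0.0, np.sqrt(sigma2), N))
          R = np.fft.fft(np.fft.ifft(X) * np.exp(1j * phi))

          I0 = np.mean(np.exp(1j * phi))       # common phase term (known here)
          e_plain += np.mean(np.abs(R[k] / I0 - X[k])**2)
          Z = 0.5 * (R[k] + np.conj(R[N - k])) # digital coherent superposition
          e_dcs += np.mean(np.abs(Z / np.real(I0) - X[k])**2)

      print("residual ICI power, plain OFDM:", e_plain / trials)
      print("residual ICI power, DCS-OFDM: ", e_dcs / trials)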

  20. A 3D superposition pencil beam dose calculation algorithm for a 60Co therapy unit and its verification by MC simulation

    NASA Astrophysics Data System (ADS)

    Koncek, O.; Krivonoska, J.

    2014-11-01

    The MCNP Monte Carlo code was used to simulate the collimating system of the 60Co therapy unit and to calculate the primary and scattered photon fluences, as well as the electron contamination incident on the isocentric plane, as functions of the irradiation field size. Furthermore, a Monte Carlo simulation for polyenergetic Pencil Beam Kernel (PBK) generation was performed using the calculated photon and electron spectra. The PBK was analytically fitted to speed up the dose calculation using the convolution technique in homogeneous media. The quality of the PBK fit was verified by comparing the calculated and simulated 60Co broad-beam profiles and depth dose curves in a homogeneous water medium. The inhomogeneity correction coefficients were derived from the PBK simulation of an inhomogeneous slab phantom consisting of various materials. The inhomogeneity calculation model is based on the changes in the PBK radial displacement and on the change of the forward and backward electron scattering. The inhomogeneity correction is derived from the electron density values obtained from a complete 3D CT array and considers the different electron densities through which the pencil beam propagates, as well as the electron density values located between the interaction point and the point of dose deposition. Important aspects and details of the algorithm implementation are also described in this study.

  1. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems

    NASA Astrophysics Data System (ADS)

    Kruse, Holger; Grimme, Stefan

    2012-04-01

    A semi-empirical counterpoise-type correction for the basis set superposition error (BSSE) in molecular systems is presented. An atom-pairwise potential corrects for the inter- and intramolecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This scheme, denoted geometrical counterpoise (gCP), depends only on the molecular geometry, i.e., no input from the electronic wave function is required, and hence it is applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys-Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method's targets are small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30%, which proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability to biomolecules as the primary target is tested for the crambin protein, where gCP removes the intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme, estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which a relaxed rotational energy profile is also presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark.

  2. Network Class Superposition Analyses

    PubMed Central

    Pearson, Carl A. B.; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈10^30 for the yeast cell cycle process [1]), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141

  3. Network class superposition analyses.

    PubMed

    Pearson, Carl A B; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
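    Concretely, T averages the deterministic state-transition matrices of every network in the class, so, for example, the expected number of point attractors over the class is just the trace of T. A toy construction for a hypothetical three-gene class whose members differ in one uncertain interaction (a simple threshold update stands in for the paper's Strong Inhibition rule):

      import numpy as np
      from itertools import product

      n = 3                                   # genes; 2**n states
      states = list(product([0, 1], repeat=n))
      idx = {s: i for i, s in enumerate(states)}

      def update(state, W):
          # threshold dynamics: gene i is on next step iff its input sum > 0
          return tuple((W @ np.array(state) > 0).astype(int))

      base = np.array([[0, 1, -1],
                       [1, 0,  1],
                       [-1, 1, 0]])
      members = []
      for w in (-1, 0, 1):                    # one uncertain edge, three candidates
          W = base.copy()
          W[0, 2] = w
          members.append(W)

      # T: transition-by-transition superposition of the class members' dynamics.
      T = np.zeros((2**n, 2**n))
      for W in members:
          for s in states:
              T[idx[s], idx[update(s, W)]] += 1.0 / len(members)

      print("row sums:", T.sum(axis=1))                      # stochastic matrix
      print("expected number of point attractors:", np.trace(T))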

  4. Superpositions of probability distributions

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Kleinert, Hagen

    2008-09-01

    Probability distributions which can be obtained from superpositions of Gaussian distributions of different variances v = σ² play a favored role in quantum theory and financial markets. Such superpositions need not necessarily obey the Chapman-Kolmogorov semigroup relation for Markovian processes because they may introduce memory effects. We derive the general form of the smearing distributions in v which do not destroy the semigroup property. The smearing technique has two immediate applications. It permits simplifying the system of Kramers-Moyal equations for smeared and unsmeared conditional probabilities, and can be conveniently implemented in the path integral calculus. In many cases, the superposition of path integrals can be evaluated much more easily than the initial path integral. Three simple examples are presented, and it is shown how the technique is extended to quantum mechanics.
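    A concrete instance of such a superposition: smearing Gaussian variances with an exponential distribution reproduces the Laplace (double-exponential) law, a standard scale-mixture result. A short sampling check:

      import numpy as np

      rng = np.random.default_rng(0)
      b = 1.0                                   # Laplace scale parameter
      n = 200_000

      v = rng.exponential(2.0 * b**2, n)        # smearing distribution over variances
      x = rng.normal(0.0, np.sqrt(v))           # superposition of Gaussians

      # Compare the histogram to the Laplace density p(x) = exp(-|x|/b) / (2b).
      hist, edges = np.histogram(x, bins=80, range=(-6, 6), density=True)
      centers = 0.5 * (edges[1:] + edges[:-1])
      laplace = np.exp(-np.abs(centers) / b) / (2.0 * b)
      print("max density error:", np.max(np.abs(hist - laplace)))   # sampling noise only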

  5. Superposition rendering: Increased realism for interactive walkthroughs

    NASA Astrophysics Data System (ADS)

    Bastos, Rui M. R. De

    1999-11-01

    The light transport equation, conventionally known in a slightly different form as the rendering equation, is an implicit integral equation which represents the interactions of light with matter and the distribution of light in a scene. This research describes a signals-and-systems approach to light transport and casts the light transport equation in terms of convolution. Additionally, the light transport problem is linearly decomposed into simpler problems with simpler solutions, which are then recombined to approximate the full solution. The central goal is to provide interactive photorealistic rendering of virtual environments. We show how the light transport problem can be cast in terms of signals and systems: the light is the signal and the materials are the systems. The outgoing light from a light transfer at a surface point is given by convolving the incoming light with the material's impulse response (the material's BRDF/BTDF). Even though the theoretical approach is presented in directional space, we present an approximation in screen space, which enables the exploitation of graphics-hardware convolution for approximating the light transport equation. The convolution approach alone is not enough to fully solve the light transport problem at interactive rates on current machines. We therefore decompose the light transport problem into simpler problems, based on distinct characteristics of different parts of the problem: the ideally diffuse, the ideally specular, and the glossy transfers. A technique for interactive rendering of each of these components is presented, as well as a technique for superposing the independent components in a multipass manner in real time. Given the extensive use of the superposition principle in this research, we name our approach superposition rendering to distinguish it from other standard hardware-aided multipass rendering approaches.

  6. Calculations of the ionization potentials of the halogens by the relativistic Hartree-Fock-Dirac method taking account of superposition of configurations

    SciTech Connect

    Tupitsyn, I.I.

    1988-03-01

    The ionization potentials of the halogen group have been calculated. The calculations were carried out using the relativistic Hartree-Fock method taking into account correlation effects. Comparison of theoretical results with experimental data for the elements F, Cl, Br, and I allows an estimation of the accuracy and reliability of the method. The theoretical values of the ionization potential of astatine obtained here may be of definite interest for the chemistry of astatine.

  7. Stereotactic Body Radiotherapy for Primary Lung Cancer at a Dose of 50 Gy Total in Five Fractions to the Periphery of the Planning Target Volume Calculated Using a Superposition Algorithm

    SciTech Connect

    Takeda, Atsuya; Sanuki, Naoko; Kunieda, Etsuo Ohashi, Toshio; Oku, Yohei; Takeda, Toshiaki; Shigematsu, Naoyuki; Kubo, Atsushi

    2009-02-01

    Purpose: To retrospectively analyze the clinical outcomes of stereotactic body radiotherapy (SBRT) for patients with Stages 1A and 1B non-small-cell lung cancer. Methods and Materials: We reviewed the records of patients with non-small-cell lung cancer treated with curative intent between Dec 2001 and May 2007. All patients had histopathologically or cytologically confirmed disease, increased levels of tumor markers, and/or positive findings on fluorodeoxyglucose positron emission tomography. Staging studies identified their disease as Stage 1A or 1B. Performance status was 2 or less according to World Health Organization guidelines in all cases. The prescribed dose of 50 Gy total in five fractions, calculated by using a superposition algorithm, was defined for the periphery of the planning target volume. Results: One hundred twenty-one patients underwent SBRT during the study period, and 63 were eligible for this analysis. Thirty-eight patients had Stage 1A (T1N0M0) and 25 had Stage 1B (T2N0M0). Forty-nine patients were not appropriate candidates for surgery because of chronic pulmonary disease. Median follow-up of these 49 patients was 31 months (range, 10-72 months). The 3-year local control, disease-free, and overall survival rates in patients with Stages 1A and 1B were 93% and 96% (p = 0.86), 76% and 77% (p = 0.83), and 90% and 63% (p = 0.09), respectively. No acute toxicity was observed. Grade 2 or higher radiation pneumonitis was experienced by 3 patients, and 1 of them had fatal bacterial pneumonia. Conclusions: The SBRT at 50 Gy total in five fractions to the periphery of the planning target volume calculated by using a superposition algorithm is feasible. High local control rates were achieved for both T2 and T1 tumors.

  8. Investigation of the Fe3+ centers in perovskite KMgF3 through a combination of ab initio (density functional theory) and semi-empirical (superposition model) calculations

    SciTech Connect

    Emül, Y.; Erbahar, D.; Açıkgöz, M.

    2015-08-14

    Analyses of the local crystal and electronic structure in the vicinity of Fe3+ centers in perovskite KMgF3 crystal have been carried out in a comprehensive manner. A combination of density functional theory (DFT) and a semi-empirical superposition model (SPM) is used for a complete analysis of all Fe3+ centers in this study for the first time. Some quantitative information has been derived from the DFT calculations on both the electronic structure and the local geometry around Fe3+ centers. All of the trigonal (K-vacancy case, K-Li substitution case, and normal trigonal Fe3+ center case), FeF5O cluster, and tetragonal (Mg-vacancy and Mg-Li substitution cases) centers have been taken into account based on previously suggested experimental and theoretical inferences. The collaboration between the experimental data and the results of both DFT and SPM calculations enables us to identify the most probable structural model for Fe3+ centers in KMgF3.

  9. Quantum superpositions of crystalline structures

    SciTech Connect

    Baltrusch, Jens D.; Morigi, Giovanna; Cormick, Cecilia; De Chiara, Gabriele; Calarco, Tommaso

    2011-12-15

    A procedure is discussed for creating coherent superpositions of motional states of ion strings. The motional states lie across the linear-zigzag structural transition, and their coherent superposition is achieved by means of spin-dependent forces, such that a coherent superposition of the electronic states of one ion evolves into an entangled state between the chain's internal and external degrees of freedom. It is shown that the creation of such an entangled state can be revealed by performing Ramsey interferometry with one ion of the chain.

  10. Superposition properties of interacting ion channels.

    PubMed Central

    Keleshian, A M; Yeo, G F; Edeson, R O; Madsen, B W

    1994-01-01

    Quantitative analysis of patch clamp data is widely based on stochastic models of single-channel kinetics. Membrane patches often contain more than one active channel of a given type, and it is usually assumed that these behave independently in order to interpret the record and infer individual channel properties. However, recent studies suggest there are significant channel interactions in some systems. We examine a model of dependence in a system of two identical channels, each modeled by a continuous-time Markov chain in which specified transition rates are dependent on the conductance state of the other channel, changing instantaneously when the other channel opens or closes. Each channel then has, e.g., a closed-time density that is conditional on the other channel being open or closed, these being identical under independence. We relate the two densities by a convolution function that embodies information about, and serves to quantify, dependence in the closed class. Distributions of observable (superposition) sojourn times are given in terms of these conditional densities. The behavior of two-channel systems based on two- and three-state Markov models is examined by simulation. Optimized fitting of simulated data using reasonable parameter values and sample sizes indicates that both positive and negative cooperativity can be distinguished from independence. PMID:7524711

  11. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
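    The conventional SKS baseline that the adaptive variants improve on is a short fixed-point iteration: estimate scatter by convolving the current primary estimate with a scatter kernel, subtract it from the measurement, and repeat. A sketch with a single stationary Gaussian kernel and a hypothetical scatter amplitude (ASKS/fASKS instead adapt the kernel shape to local object thickness):

      import numpy as np
      from scipy.signal import fftconvolve

      def sks_correct(projection, kernel, n_iter=5, amplitude=0.3):
          """Stationary-kernel scatter kernel superposition.  amplitude is a
          hypothetical scatter-to-primary scaling, not a calibrated value."""
          primary = projection.copy()
          for _ in range(n_iter):
              scatter = amplitude * fftconvolve(primary, kernel, mode="same")
              primary = np.clip(projection - scatter, 0.0, None)
          return primary, scatter

      # Toy projection: flat field with an object shadow, plus a broad kernel.
      proj = np.full((128, 128), 1.0)
      proj[40:90, 40:90] = 0.35
      x = np.linspace(-3, 3, 31)
      X, Y = np.meshgrid(x, x)
      kern = np.exp(-(X**2 + Y**2) / 2.0)
      kern /= kern.sum()

      primary, scatter = sks_correct(proj, kern)
      print(primary.min(), scatter.max())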

  12. SU-E-T-91: Accuracy of Dose Calculation Algorithms for Patients Undergoing Stereotactic Ablative Radiotherapy

    SciTech Connect

    Tajaldeen, A; Ramachandran, P; Geso, M

    2015-06-15

    Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed on 21 lung cancer patients who underwent stereotactic ablative body radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume (the "same dose" method) and (ii) the same monitor units in all algorithms (the "same monitor units" method), were used to study the performance of seven different dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven dose calculation algorithms were Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytical Algorithm (AAA), Acuros XB, and Pencil Beam (PB). Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity, and dose fall-off index. In addition, the doses to critical structures like the lungs, heart, oesophagus, and spinal cord were also studied. Statistical analysis was performed using Prism software. Results: The mean ± SD of the conformity index for the Superposition, Fast Superposition, Clarkson, and FFT Convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7, and 2.17±0.59, respectively, whereas for AAA, Pencil Beam, and Acuros XB it was 1.4±0.27, 1.66±0.27, and 1.35±0.24, respectively. Conclusion: Our study showed significant variations among the seven different algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters. The Clarkson, FFT Convolution, and Pencil Beam algorithms showed large differences compared to the Superposition algorithm. Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice.

  13. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  14. Student ability to distinguish between superposition states and mixed states in quantum mechanics

    NASA Astrophysics Data System (ADS)

    Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.

    2015-12-01

    Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the experimental implications of a superposition state. In particular, they fail to recognize how a superposition state and a mixed state (sometimes called a "lack of knowledge" state) can produce different experimental results. We present data that suggest that superposition in quantum mechanics is a difficult concept for students enrolled in sophomore-, junior-, and graduate-level quantum mechanics courses. We illustrate how an interactive lecture tutorial can improve student understanding of quantum mechanical superposition. A longitudinal study suggests that the impact persists after an additional quarter of quantum mechanics instruction that does not specifically address these ideas.
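    The distinction the students struggle with is directly visible in the density-matrix formalism: an equal superposition and a 50/50 mixture give identical statistics in the measurement basis but differ in any rotated basis. A small NumPy check:

      import numpy as np

      ket0 = np.array([1.0, 0.0])
      ket1 = np.array([0.0, 1.0])
      plus = (ket0 + ket1) / np.sqrt(2.0)

      rho_sup = np.outer(plus, plus)                                 # superposition (pure)
      rho_mix = 0.5 * (np.outer(ket0, ket0) + np.outer(ket1, ket1))  # mixed state

      def probs(rho, basis):
          return [float(np.real(v @ rho @ v)) for v in basis]

      z_basis = [ket0, ket1]
      x_basis = [(ket0 + ket1) / np.sqrt(2.0), (ket0 - ket1) / np.sqrt(2.0)]

      print(probs(rho_sup, z_basis), probs(rho_mix, z_basis))  # [0.5, 0.5] for both
      print(probs(rho_sup, x_basis), probs(rho_mix, x_basis))  # [1.0, 0.0] vs [0.5, 0.5]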

  15. Student Ability to Distinguish between Superposition States and Mixed States in Quantum Mechanics

    ERIC Educational Resources Information Center

    Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.

    2015-01-01

    Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the…

  16. Creating a Superposition of Unknown Quantum States.

    PubMed

    Oszmaniec, Michał; Grudka, Andrzej; Horodecki, Michał; Wójcik, Antoni

    2016-03-18

    The superposition principle is one of the landmarks of quantum mechanics. The importance of quantum superpositions provokes questions about the limitations that quantum mechanics itself imposes on the possibility of their generation. In this work, we systematically study the problem of the creation of superpositions of unknown quantum states. First, we prove a no-go theorem that forbids the existence of a universal probabilistic quantum protocol producing a superposition of two unknown quantum states. Second, we provide an explicit probabilistic protocol generating a superposition of two unknown states, each having a fixed overlap with the known referential pure state. The protocol can be applied to generate coherent superposition of results of independent runs of subroutines in a quantum computer. Moreover, in the context of quantum optics it can be used to efficiently generate highly nonclassical states or non-Gaussian states.

  17. Mesoscopic Superposition States in Relativistic Landau Levels

    SciTech Connect

    Bermudez, A.; Martin-Delgado, M. A.; Solano, E.

    2007-09-21

    We show that a linear superposition of mesoscopic states in relativistic Landau levels can be built when an external magnetic field couples to a relativistic spin 1/2 charged particle. Under suitable initial conditions, the associated Dirac equation unitarily produces superpositions of coherent states involving the particle orbital quanta in a well-defined mesoscopic regime. We demonstrate that these mesoscopic superpositions have a purely relativistic origin and disappear in the nonrelativistic limit.

  18. Determinate-state convolutional codes

    NASA Technical Reports Server (NTRS)

    Collins, O.; Hizlan, M.

    1991-01-01

    A determinate state convolutional code is formed from a conventional convolutional code by pruning away some of the possible state transitions in the decoding trellis. The type of staged power transfer used in determinate state convolutional codes proves to be an extremely efficient way of enhancing the performance of a concatenated coding system. The decoder complexity is analyzed along with the free distances of these new codes, and extensive simulation results are provided on their performance at the low signal-to-noise ratios where a real communication system would operate. Concise, practical examples are provided.

  19. Communication: Two measures of isochronal superposition

    NASA Astrophysics Data System (ADS)

    Roed, Lisa Anita; Gundermann, Ditte; Dyre, Jeppe C.; Niss, Kristine

    2013-09-01

    A liquid obeys isochronal superposition if its dynamics is invariant along the isochrones in the thermodynamic phase diagram (the curves of constant relaxation time). This paper introduces two quantitative measures of isochronal superposition. The measures are used to test the following six liquids for isochronal superposition: 1,2,6 hexanetriol, glycerol, polyphenyl ether, diethyl phthalate, tetramethyl tetraphenyl trisiloxane, and dibutyl phthalate. The latter four van der Waals liquids obey isochronal superposition to a higher degree than the two hydrogen-bonded liquids. This is a prediction of the isomorph theory, and it confirms findings by other groups.

  20. A quantum algorithm for Viterbi decoding of classical convolutional codes

    NASA Astrophysics Data System (ADS)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better-than-classical performance under certain conditions, for instance, large constraint length and short decode frames. In this paper, the proposed algorithm is applied to decoding classical convolutional codes. Other applications of the classical Viterbi algorithm where the number of states is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (the number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the total number of states. The QVA constructs a superposition of states corresponding to all legal paths through the decoding lattice, with phase a function of the probability of each path given the received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition in which the most probable path has a high probability of being measured.
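
    For reference, the classical algorithm being accelerated admits a compact implementation. The sketch below (Python) is a hard-decision Viterbi decoder for a toy rate-1/2, constraint-length-3 convolutional code with the common (7,5) octal generators; it is illustrative only and unrelated to the authors' quantum construction.

        # Rate-1/2 convolutional code, constraint length K=3, generators (7,5) octal
        G = [0b111, 0b101]
        K = 3
        NSTATES = 1 << (K - 1)

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << (K - 1)) | state      # shift the new bit into the register
                out += [bin(reg & g).count("1") & 1 for g in G]
                state = reg >> 1
            return out

        def viterbi(rx):
            INF = float("inf")
            metric = [0.0] + [INF] * (NSTATES - 1)     # start in the zero state
            paths = [[] for _ in range(NSTATES)]
            for i in range(0, len(rx), 2):
                new_metric = [INF] * NSTATES
                new_paths = [None] * NSTATES
                for s in range(NSTATES):
                    if metric[s] == INF:
                        continue
                    for b in (0, 1):                   # fanout: 2 branches per state
                        reg = (b << (K - 1)) | s
                        expect = [bin(reg & g).count("1") & 1 for g in G]
                        m = metric[s] + sum(x != y for x, y in zip(rx[i:i + 2], expect))
                        ns = reg >> 1
                        if m < new_metric[ns]:
                            new_metric[ns], new_paths[ns] = m, paths[s] + [b]
                metric, paths = new_metric, new_paths
            return paths[min(range(NSTATES), key=metric.__getitem__)]

        msg = [1, 0, 1, 1, 0, 0]        # two trailing zeros flush the encoder
        rx = encode(msg)
        rx[3] ^= 1                      # inject one channel bit error
        assert viterbi(rx) == msg       # corrected (the free distance is 5)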

  1. QCDNUM: Fast QCD evolution and convolution

    NASA Astrophysics Data System (ADS)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme.

    Program summary
    Program title: QCDNUM, version 17.00
    Catalogue identifier: AEHV_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Public Licence
    No. of lines in distributed program, including test data, etc.: 45 736
    No. of bytes in distributed program, including test data, etc.: 911 569
    Distribution format: tar.gz
    Programming language: Fortran-77
    Computer: All
    Operating system: All
    RAM: Typically 3 Mbytes
    Classification: 11.5
    Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
    Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
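
    The convolution engine's basic operation can be illustrated numerically. The sketch below (Python/SciPy) evaluates a generic Mellin convolution (C ⊗ f)(x) = ∫_x^1 (dz/z) C(z) f(x/z) with toy stand-ins for a coefficient function and a parton density; it mimics the mathematics only and is not the QCDNUM interface.

        from scipy.integrate import quad

        def C(z):                                  # toy coefficient function
            return z * (1.0 - z)

        def f(x):                                  # toy parton density
            return x ** -0.5 * (1.0 - x) ** 3

        def mellin_conv(x):                        # (C (x) f)(x)
            val, _ = quad(lambda z: C(z) * f(x / z) / z, x, 1.0)
            return val

        for x in (0.01, 0.1, 0.5):
            print(x, mellin_conv(x))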

  2. Inhibitor Discovery by Convolution ABPP.

    PubMed

    Chandrasekar, Balakumaran; Hong, Tram Ngoc; van der Hoorn, Renier A L

    2017-01-01

    Activity-based protein profiling (ABPP) has emerged as a powerful proteomic approach to study the active proteins in their native environment by using chemical probes that label active site residues in proteins. Traditionally, ABPP is classified as either comparative or competitive ABPP. In this protocol, we describe a simple method called convolution ABPP, which combines the benefits of both competitive and comparative ABPP. Convolution ABPP allows one to detect whether a reduced signal observed during comparative ABPP could be due to the presence of inhibitors. In convolution ABPP, the proteomes are analyzed by comparing labeling intensities in two mixed proteomes that were labeled either before or after mixing. A reduction of labeling in the mix-and-label sample when compared to the label-and-mix sample indicates the presence of an excess of inhibitor in one of the proteomes. The method is broadly applicable for detecting inhibitors in any proteome containing protein activities of interest. As a proof of concept, we applied convolution ABPP to secreted proteomes from Pseudomonas syringae-infected Nicotiana benthamiana leaves, revealing the presence of a beta-galactosidase inhibitor.

  3. Two dimensional convolute integers for machine vision and image recognition

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach to machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two-dimensional data sets. The operators are regression-generated, integer-valued, zero-phase-shifting, convoluting, frequency-sensitive, two-dimensional low-pass, high-pass and band-pass filters that are mathematically equivalent to surface-fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing-value calculators (compensation for dead array element values). These operators exhibit frequency-sensitive, scale-invariant feature-selection properties. Such tasks as boundary/edge enhancement and removal of noise or small pixel disturbances can readily be accomplished. For feature selection, tight band-pass operators are essential. Results from test cases are given.
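
    The regression-generated operators described here are closely related to two-dimensional Savitzky-Golay filtering. The sketch below (Python/NumPy/SciPy) builds a least-squares bivariate-polynomial smoothing kernel and applies it as a classical convolution; unlike the paper's operators it is not integer-valued, so treat it as an illustration of the construction rather than the authors' exact technology.

        import numpy as np
        from scipy.signal import convolve2d

        def lsq_smoothing_kernel(window=5, order=2):
            # fit a bivariate polynomial over the window; the kernel returns
            # the fitted value at the centre point (a surface-fitted filter)
            half = window // 2
            exps = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
            pts = [(x, y) for x in range(-half, half + 1)
                          for y in range(-half, half + 1)]
            A = np.array([[x ** i * y ** j for (i, j) in exps] for (x, y) in pts])
            return np.linalg.pinv(A)[0].reshape(window, window)  # constant term

        img = np.random.rand(64, 64)
        smoothed = convolve2d(img, lsq_smoothing_kernel(), mode="same", boundary="symm")

    Rows of the pseudoinverse other than the first give the corresponding surface-fitted partial-derivative operators mentioned in the abstract.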

  4. Fugacity superposition: a new approach to dynamic multimedia fate modeling.

    PubMed

    Hertwich, E G

    2001-08-01

    The fugacities, concentrations, or inventories of pollutants in environmental compartments as determined by multimedia environmental fate models of the Mackay type can be superimposed on each other. This is true for both steady-state (level III) and dynamic (level IV) models. Any problem in multimedia fate models with linear, time-invariant transfer and transformation coefficients can be solved through a superposition of a set of n independent solutions to a set of coupled, homogeneous first-order differential equations, where n is the number of compartments in the model. For initial condition problems in dynamic models, the initial inventories can be separated, e.g. by a compartment. The solution is obtained by adding the single-compartment solutions. For time-varying emissions, a convolution integral is used to superimpose solutions. The advantage of this approach is that the differential equations have to be solved only once. No numeric integration is required. Alternatively, the dynamic model can be simplified to algebraic equations using the Laplace transform. For time-varying emissions, the Laplace transform of the model equations is simply multiplied with the Laplace transform of the emission profile. It is also shown that the time-integrated inventories of the initial conditions problems are the same as the inventories in the steady-state problem. This implies that important properties of pollutants such as potential dose, persistence, and characteristic travel distance can be derived from the steady state.
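
    A minimal sketch (Python/SciPy) of the superposition idea for a linear, time-invariant compartment model dx/dt = A x + e(t): the propagator exp(A dt) is computed once, and the response to any emission profile follows by discrete convolution (Duhamel superposition). The three-compartment matrix below is invented for illustration.

        import numpy as np
        from scipy.linalg import expm

        A = np.array([[-1.0,  0.2,  0.1],      # toy air/water/soil transfer
                      [ 0.5, -0.8,  0.0],      # coefficients (1/time)
                      [ 0.3,  0.1, -0.4]])
        dt, n = 0.01, 5000
        t = np.arange(n) * dt
        Phi = expm(A * dt)                     # one-step propagator, solved once

        def response(emission):                # emission: (n, 3) source term
            x, out = np.zeros(3), np.empty((n, 3))
            for k in range(n):
                x = Phi @ x + emission[k] * dt # convolution of the inputs with
                out[k] = x                     # the impulse response (superposition)
            return out

        e = np.zeros((n, 3))
        e[:, 0] = np.exp(-t)                   # decaying emission into "air"
        inventories = response(e)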

  5. Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity

    SciTech Connect

    Sales, J. S.; Silva, L. F. da; Almeida, N. G. de

    2011-03-15

    We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.

  6. Reconstruction of nonstationary sound fields based on the time domain plane wave superposition method.

    PubMed

    Zhang, Xiao-Zheng; Thomas, Jean-Hugh; Bi, Chuan-Xing; Pascal, Jean-Claude

    2012-10-01

    A time-domain plane wave superposition method is proposed to reconstruct nonstationary sound fields. In this method, the sound field is expressed as a superposition of time convolutions between the estimated time-wavenumber spectrum of the sound pressure on a virtual source plane and the time-domain propagation kernel at each wavenumber. By discretizing the time convolutions directly, the reconstruction can be carried out iteratively in the time domain, thus providing the advantage of continuously reconstructing time-dependent pressure signals. In the reconstruction process, Tikhonov regularization is introduced at each time step to obtain a relevant estimate of the time-wavenumber spectrum on the virtual source plane. Because the double infinite integral of the two-dimensional spatial Fourier transform is discretized directly in the wavenumber domain, the proposed method does not need to perform the two-dimensional spatial fast Fourier transform that is generally used in time domain holography and real-time near-field acoustic holography; it therefore avoids some errors associated with the two-dimensional spatial fast Fourier transform in theory and makes it possible to use an irregular microphone array. The feasibility of the proposed method is demonstrated by numerical simulations and an experiment with two speakers.
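
    A minimal sketch (Python/NumPy) of the regularised inversion performed at each time step: given a propagation matrix relating the wavenumber spectrum on the virtual source plane to the measured pressures, the Tikhonov estimate solves a damped least-squares problem. The matrix G below is random stand-in data, not an acoustic propagator.

        import numpy as np

        def tikhonov_step(G, p, lam):
            # minimise ||G q - p||^2 + lam ||q||^2 for the spectrum q
            n = G.shape[1]
            return np.linalg.solve(G.conj().T @ G + lam * np.eye(n),
                                   G.conj().T @ p)

        rng = np.random.default_rng(0)
        G = rng.standard_normal((32, 64)) + 1j * rng.standard_normal((32, 64))
        q_true = rng.standard_normal(64)
        p = G @ q_true + 0.01 * rng.standard_normal(32)
        q_est = tikhonov_step(G, p, lam=1e-2)   # repeated at every time step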

  7. Experimental superposition of orders of quantum gates.

    PubMed

    Procopio, Lorenzo M; Moqanaki, Amir; Araújo, Mateus; Costa, Fabio; Alonso Calafell, Irati; Dowd, Emma G; Hamel, Deny R; Rozema, Lee A; Brukner, Časlav; Walther, Philip

    2015-08-07

    Quantum computers achieve a speed-up by placing quantum bits (qubits) in superpositions of different states. However, it has recently been appreciated that quantum mechanics also allows one to 'superimpose different operations'. Furthermore, it has been shown that using a qubit to coherently control the gate order allows one to accomplish a task--determining if two gates commute or anti-commute--with fewer gate uses than any known quantum algorithm. Here we experimentally demonstrate this advantage, in a photonic context, using a second qubit to control the order in which two gates are applied to a first qubit. We create the required superposition of gate orders by using additional degrees of freedom of the photons encoding our qubits. The new resource we exploit can be interpreted as a superposition of causal orders, and could allow quantum algorithms to be implemented with an efficiency unlikely to be achieved on a fixed-gate-order quantum computer.

  8. Experimental superposition of orders of quantum gates

    PubMed Central

    Procopio, Lorenzo M.; Moqanaki, Amir; Araújo, Mateus; Costa, Fabio; Alonso Calafell, Irati; Dowd, Emma G.; Hamel, Deny R.; Rozema, Lee A.; Brukner, Časlav; Walther, Philip

    2015-01-01

    Quantum computers achieve a speed-up by placing quantum bits (qubits) in superpositions of different states. However, it has recently been appreciated that quantum mechanics also allows one to ‘superimpose different operations'. Furthermore, it has been shown that using a qubit to coherently control the gate order allows one to accomplish a task—determining if two gates commute or anti-commute—with fewer gate uses than any known quantum algorithm. Here we experimentally demonstrate this advantage, in a photonic context, using a second qubit to control the order in which two gates are applied to a first qubit. We create the required superposition of gate orders by using additional degrees of freedom of the photons encoding our qubits. The new resource we exploit can be interpreted as a superposition of causal orders, and could allow quantum algorithms to be implemented with an efficiency unlikely to be achieved on a fixed-gate-order quantum computer. PMID:26250107

  9. a Logical Account of Quantum Superpositions

    NASA Astrophysics Data System (ADS)

    Krause, Décio; Arenhart, Jonas R. Becker

    In this paper we consider the phenomenon of superpositions in quantum mechanics and suggest a way to deal with the idea in a logical setting from a syntactical point of view, that is, as subsumed in the language of the formalism, and not semantically. We restrict the discussion to the propositional level only. Then, after presenting the motivations and a possible world semantics, the formalism is outlined and we also consider within this scheme the claim that superpositions may involve contradictions, as in the case of the Schrödinger's cat, which (it is usually said) is both alive and dead. We argue that this claim is a misreading of the quantum case. Finally, we sketch a new form of quantum logic that involves three kinds of negations and present the relationships among them. The paper is a first approach to the subject, introducing some main guidelines to be developed by a `syntactical' logical approach to quantum superpositions.

  10. Large energy superpositions via Rydberg dressing

    NASA Astrophysics Data System (ADS)

    Khazali, Mohammadsadegh; Lau, Hon Wai; Humeniuk, Adam; Simon, Christoph

    2016-08-01

    We propose to create superposition states of over 100 strontium atoms in a ground state or metastable optical clock state using the Kerr-type interaction due to Rydberg state dressing in an optical lattice. The two components of the superposition can differ by an order of 300 eV in energy, allowing tests of energy decoherence models with greatly improved sensitivity. We take into account the effects of higher-order nonlinearities, spatial inhomogeneity of the interaction, decay from the Rydberg state, collective many-body decoherence, atomic motion, molecular formation, and diminishing Rydberg level separation for increasing principal number.

  11. Dose Calculation Accuracy of the Monte Carlo Algorithm for CyberKnife Compared with Other Commercially Available Dose Calculation Algorithms

    SciTech Connect

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.

  12. Dose calculation accuracy of the Monte Carlo algorithm for CyberKnife compared with other commercially available dose calculation algorithms.

    PubMed

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.

  13. On the Use of Material-Dependent Damping in ANSYS for Mode Superposition Transient Analysis

    SciTech Connect

    Nie, J.; Wei, X.

    2011-07-17

    The mode superposition method is often used for dynamic analysis of complex structures, such as the seismic Category I structures in nuclear power plants, in place of the less efficient full method, which uses the full system matrices for calculation of the transient responses. In such applications, specification of material-dependent damping is usually desirable because complex structures can consist of multiple types of materials that may have different energy dissipation capabilities. A recent review of the ANSYS manual for several releases found that the use of material-dependent damping is not clearly explained for performing a mode superposition transient dynamic analysis. This paper includes several mode superposition transient dynamic analyses using different ways to specify damping in ANSYS, in order to determine how material-dependent damping can be specified conveniently in a mode superposition transient dynamic analysis.
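
    The mapping from material-dependent damping to modal quantities can be sketched independently of any particular ANSYS input (Python/SciPy below, with an invented two-material chain): assemble C as a sum of stiffness-proportional contributions per material, then project onto the mass-normalised modes to obtain effective modal damping ratios for the mode superposition solution.

        import numpy as np
        from scipy.linalg import eigh

        M = np.eye(4)                                 # toy lumped masses
        K1 = 100.0 * np.array([[ 2., -1.,  0.,  0.],  # material-1 stiffness block
                               [-1.,  1.,  0.,  0.],
                               [ 0.,  0.,  0.,  0.],
                               [ 0.,  0.,  0.,  0.]])
        K2 = 400.0 * np.array([[ 0.,  0.,  0.,  0.],  # material-2 stiffness block
                               [ 0.,  1., -1.,  0.],
                               [ 0., -1.,  2., -1.],
                               [ 0.,  0., -1.,  1.]])
        beta = (0.002, 0.0005)                 # per-material damping coefficients
        C = beta[0] * K1 + beta[1] * K2        # material-dependent damping matrix

        w2, phi = eigh(K1 + K2, M)             # mass-normalised mode shapes
        omega = np.sqrt(w2)
        zeta = np.diag(phi.T @ C @ phi) / (2.0 * omega)
        print(zeta)                            # ratios for the modal equations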

  14. Transfer of arbitrary quantum emitter states to near-field photon superpositions in nanocavities.

    PubMed

    Thijssen, Arthur C T; Cryan, Martin J; Rarity, John G; Oulton, Ruth

    2012-09-24

    We present a method to analyze the suitability of particular photonic cavity designs for information exchange between arbitrary superposition states of a quantum emitter and the near-field photonic cavity mode. As an illustrative example, we consider whether quantum dot emitters embedded in "L3" and "H1" photonic crystal cavities are able to transfer a spin superposition state to a confined photonic superposition state for use in quantum information transfer. Using an established dyadic Green's function (DGF) analysis, we describe methods to calculate coupling to arbitrary quantum emitter positions and orientations using the modified local density of states (LDOS) calculated using numerical finite-difference time-domain (FDTD) simulations. We find that while superposition states are not supported in L3 cavities, the double degeneracy of the H1 cavities supports superposition states of the two orthogonal modes that may be described as states on a Poincaré-like sphere. Methods are developed to comprehensively analyze the confined superposition state generated from an arbitrary emitter position and emitter dipole orientation.

  15. The Evolution and Development of Neural Superposition

    PubMed Central

    Agi, Egemen; Langen, Marion; Altschuler, Steven J.; Wu, Lani F.; Zimmermann, Timo

    2014-01-01

    Visual systems have a rich history as model systems for the discovery and understanding of basic principles underlying neuronal connectivity. The compound eyes of insects consist of up to thousands of small unit eyes that are connected by photoreceptor axons to set up a visual map in the brain. The photoreceptor axon terminals thereby represent neighboring points seen in the environment in neighboring synaptic units in the brain. Neural superposition is a special case of such a wiring principle, where photoreceptors from different unit eyes that receive the same input converge upon the same synaptic units in the brain. This wiring principle is remarkable, because each photoreceptor in a single unit eye receives different input and each individual axon, among thousands others in the brain, must be sorted together with those few axons that have the same input. Key aspects of neural superposition have been described as early as 1907. Since then neuroscientists, evolutionary and developmental biologists have been fascinated by how such a complicated wiring principle could evolve, how it is genetically encoded, and how it is developmentally realized. In this review article, we will discuss current ideas about the evolutionary origin and developmental program of neural superposition. Our goal is to identify in what way the special case of neural superposition can help us answer more general questions about the evolution and development of genetically “hard-wired” synaptic connectivity in the brain. PMID:24912630

  16. The principle of superposition in human prehension

    PubMed Central

    Zatsiorsky, Vladimir M.; Latash, Mark L.; Gao, Fan; Shim, Jae Kun

    2010-01-01

    SUMMARY The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: “Grasp the object stronger/weaker to prevent slipping” and “Maintain the rotational equilibrium of the object”. The effects of the two commands are summed up. PMID:20186284

  17. The principle of superposition in human prehension.

    PubMed

    Zatsiorsky, Vladimir M; Latash, Mark L; Gao, Fan; Shim, Jae Kun

    2004-03-01

    The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: "Grasp the object stronger/weaker to prevent slipping" and "Maintain the rotational equilibrium of the object". The effects of the two commands are summed up.

  18. The evolution and development of neural superposition.

    PubMed

    Agi, Egemen; Langen, Marion; Altschuler, Steven J; Wu, Lani F; Zimmermann, Timo; Hiesinger, Peter Robin

    2014-01-01

    Visual systems have a rich history as model systems for the discovery and understanding of basic principles underlying neuronal connectivity. The compound eyes of insects consist of up to thousands of small unit eyes that are connected by photoreceptor axons to set up a visual map in the brain. The photoreceptor axon terminals thereby represent neighboring points seen in the environment in neighboring synaptic units in the brain. Neural superposition is a special case of such a wiring principle, where photoreceptors from different unit eyes that receive the same input converge upon the same synaptic units in the brain. This wiring principle is remarkable, because each photoreceptor in a single unit eye receives different input and each individual axon, among thousands others in the brain, must be sorted together with those few axons that have the same input. Key aspects of neural superposition have been described as early as 1907. Since then neuroscientists, evolutionary and developmental biologists have been fascinated by how such a complicated wiring principle could evolve, how it is genetically encoded, and how it is developmentally realized. In this review article, we will discuss current ideas about the evolutionary origin and developmental program of neural superposition. Our goal is to identify in what way the special case of neural superposition can help us answer more general questions about the evolution and development of genetically "hard-wired" synaptic connectivity in the brain.

  19. SUPERPOSITION OF POLYTROPES IN THE INNER HELIOSHEATH

    SciTech Connect

    Livadiotis, G.

    2016-03-15

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density–temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log–log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.

  20. Simplified Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.

    1986-01-01

    Some complicated intermediate steps are shortened or eliminated. Decoding of convolutional error-correcting digital codes is simplified by a new error-trellis syndrome technique. In the new technique, the syndrome vector is not computed. Instead, advantage is taken of newly derived mathematical identities that simplify the decision tree, folding it back on itself into a form called an "error trellis." This trellis is a graph of all path solutions of the syndrome equations. Each path through the trellis corresponds to a specific set of decisions as to the received digits. Existing decoding algorithms combined with the new mathematical identities reduce the number of combinations of errors considered and enable computation of the correction vector directly from the data and check bits as received.

  1. Macroscopic Quantum Superposition in Cavity Optomechanics

    NASA Astrophysics Data System (ADS)

    Liao, Jie-Qiao; Tian, Lin

    Quantum superposition in mechanical systems is not only a key evidence of macroscopic quantum coherence, but can also be utilized in modern quantum technology. Here we propose an efficient approach for creating macroscopically distinct mechanical superposition states in a two-mode optomechanical system. Photon hopping between the two cavity-modes is modulated sinusoidally. The modulated photon tunneling enables an ultrastrong radiation-pressure force acting on the mechanical resonator, and hence significantly increases the mechanical displacement induced by a single photon. We present systematic studies on the generation of the Yurke-Stoler-like states in the presence of system dissipations. The state generation method is general and it can be implemented with either optomechanical or electromechanical systems. The authors are supported by the National Science Foundation under Award No. NSF-DMR-0956064 and the DARPA ORCHID program through AFOSR.

  2. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

  3. Convolution-deconvolution in DIGES

    SciTech Connect

    Philippacopoulos, A.J.; Simos, N.

    1995-05-01

    Convolution and deconvolution operations are by all means a very important aspect of SSI analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES and SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention, since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, the corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in deterministic as well as probabilistic terms. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities.
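
    A minimal sketch (Python/NumPy) of frequency-domain convolution and deconvolution through a transfer function, with a water-level floor to stabilise the spectral division. The one-pole transfer function below is a stand-in for the closed-form layered-halfspace solutions and is not the DIGES implementation.

        import numpy as np

        def convolve_tf(x, H):
            return np.fft.irfft(np.fft.rfft(x) * H, len(x))

        def deconvolve_tf(y, H, water=1e-3):
            # water-level regularisation: floor |H| before dividing
            Hs = np.where(np.abs(H) < water,
                          water * np.exp(1j * np.angle(H)), H)
            return np.fft.irfft(np.fft.rfft(y) / Hs, len(y))

        n, dt = 1024, 0.01
        t = np.arange(n) * dt
        rock = np.exp(-((t - 2.0) / 0.1) ** 2)      # motion at the rock level
        f = np.fft.rfftfreq(n, dt)
        H = 1.0 / (1.0 + 1j * f / 5.0)              # toy layer transfer function
        surface = convolve_tf(rock, H)              # convolution to the surface
        recovered = deconvolve_tf(surface, H)       # deconvolution back to rock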

  4. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  5. Toward quantum superposition of living organisms

    NASA Astrophysics Data System (ADS)

    Romero-Isart, Oriol; Juan, Mathieu L.; Quidant, Romain; Cirac, J. Ignacio

    2010-03-01

    The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deléglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6 Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schrödinger's cat 'gedanken' paradigm (Schrödinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.

  6. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  7. X-ray optics simulation using Gaussian superposition technique.

    PubMed

    Idir, Mourad; Cywiak, Moisés; Morales, Arquímedes; Modi, Mohammed H

    2011-09-26

    We present an efficient method to perform x-ray optics simulation with high or partially coherent x-ray sources using a Gaussian superposition technique. In a previous paper, we demonstrated that full characterization of optical systems, diffractive and geometric, is possible by using the Fresnel Gaussian Shape Invariant (FGSI) previously reported in the literature. The complex amplitude distribution in the object plane is represented by a linear superposition of complex Gaussian wavelets and then propagated through the optical system by means of the referred Gaussian invariant. This allows ray tracing through the optical system and at the same time allows calculating with high precision the complex wave-amplitude distribution at any plane of observation. This technique can be applied in a wide spectral range where the Fresnel diffraction integral applies, including visible light, x-rays, acoustic waves, etc. We describe the technique and include some computer simulations as illustrative examples for x-ray optical components. We also show that this method can be used to study partially or totally coherent illumination problems.

  8. X-ray optics simulation using Gaussian superposition technique

    SciTech Connect

    Idir, M.; Cywiak, M.; Morales, A.; Modi, M.H.

    2011-09-15

    We present an efficient method to perform x-ray optics simulation with high or partially coherent x-ray sources using a Gaussian superposition technique. In a previous paper, we demonstrated that full characterization of optical systems, diffractive and geometric, is possible by using the Fresnel Gaussian Shape Invariant (FGSI) previously reported in the literature. The complex amplitude distribution in the object plane is represented by a linear superposition of complex Gaussian wavelets and then propagated through the optical system by means of the referred Gaussian invariant. This allows ray tracing through the optical system and at the same time allows calculating with high precision the complex wave-amplitude distribution at any plane of observation. This technique can be applied in a wide spectral range where the Fresnel diffraction integral applies, including visible light, x-rays, acoustic waves, etc. We describe the technique and include some computer simulations as illustrative examples for x-ray optical components. We also show that this method can be used to study partially or totally coherent illumination problems.
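
    The core idea admits a compact numerical illustration (Python/NumPy): represent the object-plane field as a sum of Gaussian wavelets, each of which propagates in closed form under Fresnel diffraction via the complex beam parameter q(z) = z + i z_R. This is a generic paraxial sketch under stated assumptions, not the FGSI formalism itself.

        import numpy as np

        lam = 1e-10                         # 0.1 nm x-rays
        k = 2.0 * np.pi / lam

        def gaussian_wavelet(x, z, x0, w0, amp):
            # closed-form Fresnel propagation of one Gaussian component
            zr = np.pi * w0 ** 2 / lam      # Rayleigh range
            q = z + 1j * zr                 # complex beam parameter
            return amp * np.sqrt(1j * zr / q) * np.exp(1j * k * (x - x0) ** 2 / (2.0 * q))

        x = np.linspace(-50e-6, 50e-6, 2001)
        centres = np.linspace(-20e-6, 20e-6, 41)
        field = sum(gaussian_wavelet(x, 1.0, c, 2e-6, 1.0) for c in centres)
        intensity = np.abs(field) ** 2      # superposed amplitude at z = 1 m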

  9. Laser superposition in multi-pass amplification process

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Liu, Lan-Qin; Wang, Wen-Yi; Huang, Wan-Qing; Geng, Yuan-Chao

    2015-02-01

    A physical model was established to describe pulse superposition in the multi-pass amplification process, which occurs when the pulse reflected from the cavity mirror overlaps with itself, so that the front and the end of the pulse meet. Theoretical analysis indicates that pulse superposition consumes more inversion population than is consumed without superposition. A standing wave field is formed when the front and the end of the pulse overlap coherently. The inversion population density is spatially hole-burned by the standing wave field. The pulse gain and the pulse itself are affected by superposition. Based on this physical model, three conditions, without superposition, coherent superposition and incoherent superposition, were compared. This study gives guidance for high-power solid-state laser design.

  10. On Kolmogorov's superpositions and Boolean functions

    SciTech Connect

    Beiu, V.

    1998-12-31

    The paper overviews results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on an explicit numerical (i.e., constructive) algorithm for Kolmogorov's superpositions, the authors show that, for obtaining minimum-size neural networks implementing any Boolean function, the activation function of the neurons is the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size in the worst case, it follows that size-optimal solutions for implementing arbitrary Boolean functions require analog circuitry. Several comments on the required precision conclude the paper.

  11. Maximum predictive power and the superposition principle

    NASA Technical Reports Server (NTRS)

    Summhammer, Johann

    1994-01-01

    In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept, the accuracy of a prediction must depend only on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that, for a probability which is a function of two other probabilities, maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.

  12. Design of artificial spherical superposition compound eye

    NASA Astrophysics Data System (ADS)

    Cao, Zhaolou; Zhai, Chunjie; Wang, Keyi

    2015-12-01

    In this research, the design of an artificial spherical superposition compound eye is presented. The imaging system consists of three layers of lens arrays. In each channel, two lenses are designed to control the angular magnification, and a field lens is added to improve the image quality and extend the field of view. Aspherical surfaces are introduced to improve the image quality. Ray-tracing results demonstrate that light from the same object point is focused at the same image point through different channels. The system therefore has much higher energy efficiency than a conventional spherical apposition compound eye.

  13. About simple nonlinear and linear superpositions of special exact solutions of Veselov-Novikov equation

    SciTech Connect

    Dubrovsky, V. G.; Topovsky, A. V.

    2013-03-15

    New exact solutions of the Veselov-Novikov (VN) equation, nonstationary and stationary, in the form of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, …, N, are constructed via the Zakharov-Manakov ∂̄-dressing (d-bar dressing) method. Simple nonlinear superpositions are represented, up to a constant, by the sums of the solutions u^(n) and are calculated by ∂̄-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schroedinger equation. It is remarkable that in the zero-energy limit the simple nonlinear superpositions convert to linear ones in the form of sums of the special solutions u^(n). It is shown that the sums u = u^(k_1) + … + u^(k_m), 1 ≤ k_1 < k_2 < … < k_m ≤ N, over arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction, these exact solutions also represent new exact transparent potentials of the 2D stationary Schroedinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  14. A linear algebraic nonlinear superposition formula

    NASA Astrophysics Data System (ADS)

    Gordoa, Pilar R.; Conde, Juan M.

    2002-04-01

    The Darboux transformation provides an iterative approach to the generation of exact solutions for an integrable system. This process can be simplified using the Bäcklund transformation and Bianchi's theorem of permutability; in this way we construct a nonlinear superposition formula, that is, an equation relating a new solution to three previous solutions. In general this equation will be a differential equation; for some examples, such as the Korteweg-de Vries equation, it is a linear algebraic equation. This last is what happens also in the case of the system discussed in this Letter. The linear algebraic nonlinear superposition formula obtained here is a new result. As an example, we use it to construct the two soliton solution, as well as special cases of this last which give rise to solutions exhibiting combinations of fission and fusion. Solutions exhibiting repeated processes of fission and fusion are new phenomena within the area of soliton equations. We also consider obtaining solutions using a symmetry approach; in this way we obtain rational solutions and also the one soliton solution.
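
    The best-known instance of such a linear-algebraic formula is the KdV case the authors mention. With u = w_x, a seed potential w_0, and Bäcklund transforms w_1, w_2 generated with parameters k_1, k_2, Bianchi permutability gives (up to sign and scaling conventions tied to the chosen normalisation of KdV):

        \[
          w_{12} \;=\; w_{0} \;-\; \frac{4\,(k_{1}^{2}-k_{2}^{2})}{w_{1}-w_{2}},
        \]

    so the new solution w_12 follows by pure algebra, with no further integration.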

  15. Simulating images captured by superposition lens cameras

    NASA Astrophysics Data System (ADS)

    Thangarajan, Ashok Samraj; Kakarala, Ramakrishna

    2011-03-01

    As the demand for reduction in the thickness of cameras rises, so too does the interest in thinner lens designs. One such radical approach toward developing a thin lens is obtained from nature's superposition principle as used in the eyes of many insects. But generally the images obtained from those lenses are fuzzy, and require reconstruction algorithms to complete the imaging process. A hurdle to developing such algorithms is that the existing literature does not provide realistic test images, aside from using commercial ray-tracing software which is costly. A solution for that problem is presented in this paper. Here a Gabor Super Lens (GSL), which is based on the superposition principle, is simulated using the public-domain ray-tracing software POV-Ray. The image obtained is of a grating surface as viewed through an actual GSL, which can be used to test reconstruction algorithms. The large computational time in rendering such images requires further optimization, and methods to do so are discussed.

  16. Transient Response of Shells of Revolution by Direct Integration and Modal Superposition Methods

    NASA Technical Reports Server (NTRS)

    Stephens, W. B.; Adelman, H. M.

    1974-01-01

    The results of an analytical effort to obtain and evaluate transient response data for a cylindrical and a conical shell by use of two different approaches, direct integration and modal superposition, are described. The inclusion of nonlinear terms is more important than the inclusion of secondary linear effects (transverse shear deformation and rotary inertia), although there are thin-shell structures where these secondary effects are important. The advantages of the direct integration approach are that geometric nonlinear and secondary effects are easy to include and high-frequency response may be calculated; in comparison to the modal superposition technique, the computer storage requirements are smaller. The advantages of the modal superposition approach are that the solution is independent of the previous time history and that, once the modal data are obtained, the response for repeated cases may be efficiently computed. Also, any admissible set of initial conditions can be applied.
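
    A minimal sketch (Python/SciPy) of the modal superposition approach for a toy three-degree-of-freedom system: project onto mass-normalised modes, integrate the decoupled modal equations, and sum the modes back to physical coordinates. The matrices and loading are invented for illustration; the geometric nonlinearity stressed in the abstract is outside this linear sketch.

        import numpy as np
        from scipy.linalg import eigh
        from scipy.integrate import solve_ivp

        M = np.diag([2.0, 1.0, 1.0])
        K = np.array([[ 400., -200.,    0.],
                      [-200.,  400., -200.],
                      [   0., -200.,  200.]])
        F = lambda t: np.array([0.0, 0.0, 100.0 * np.sin(30.0 * t)])

        w2, Phi = eigh(K, M)                 # mass-normalised modes, omega^2

        def rhs(t, y):                       # decoupled modal equations
            q, qd = y[:3], y[3:]
            return np.concatenate([qd, Phi.T @ F(t) - w2 * q])

        sol = solve_ivp(rhs, (0.0, 1.0), np.zeros(6), max_step=1e-3)
        u = Phi @ sol.y[:3]                  # superpose modes -> physical response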

  17. slate: A method for the superposition of flexible ligands

    NASA Astrophysics Data System (ADS)

    Mills, J. E. J.; de Esch, I. J. P.; Perkins, T. D. J.; Dean, P. M.

    2001-01-01

    A novel program for the superposition of flexible molecules, slate, is presented. It uses simulated annealing to minimise the difference between the distance matrices calculated from the hydrogen-bonding and aromatic-ring properties of two ligands. A method for generating a molecular stack using multiple pairwise matches is illustrated. These stacks are used by the program doh to predict the relative positions of receptor atoms that could form hydrogen bonds to two or more ligands in the dataset. The methodology has been applied to ligands binding to dihydrofolate reductase, thermolysin, H3 histamine receptors, α2 adrenoceptors and 5-HT1D receptors. When there are sufficient numbers and diversity of molecules in the dataset, the prediction of receptor-atom positions is applicable to compound design.
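
    A toy sketch (Python/NumPy) of the underlying objective: minimise the difference between the two intra-molecular distance matrices over a correspondence between feature points, here with a plain annealing loop over permutations. slate itself anneals over conformations and matches hydrogen-bonding and aromatic-ring properties; everything below is a simplified stand-in.

        import numpy as np

        rng = np.random.default_rng(1)

        def dmat(pts):
            return np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

        def cost(perm, D1, D2):              # distance-matrix mismatch
            return np.abs(D1 - D2[np.ix_(perm, perm)]).sum()

        A = rng.random((8, 3))               # toy feature points of ligand 1
        B = A[rng.permutation(8)] @ np.linalg.qr(rng.standard_normal((3, 3)))[0]
        D1, D2 = dmat(A), dmat(B)            # rotation leaves dmat unchanged

        perm, T = np.arange(8), 1.0
        cur = cost(perm, D1, D2)
        for _ in range(20000):               # simulated annealing over matchings
            i, j = rng.integers(8, size=2)
            cand = perm.copy()
            cand[i], cand[j] = cand[j], cand[i]
            c = cost(cand, D1, D2)
            if c < cur or rng.random() < np.exp((cur - c) / T):
                perm, cur = cand, c
            T *= 0.9997
        print(cur)                           # approaches 0 at the true matching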

  18. Subfemtosecond steering of hydrocarbon deprotonation through superposition of vibrational modes.

    PubMed

    Alnaser, A S; Kübel, M; Siemering, R; Bergues, B; Kling, Nora G; Betsch, K J; Deng, Y; Schmidt, J; Alahmed, Z A; Azzeer, A M; Ullrich, J; Ben-Itzhak, I; Moshammer, R; Kleineberg, U; Krausz, F; de Vivie-Riedle, R; Kling, M F

    2014-05-08

    Subfemtosecond control of the breaking and making of chemical bonds in polyatomic molecules is poised to open new pathways for the laser-driven synthesis of chemical products. The break-up of the C-H bond in hydrocarbons is a ubiquitous process during laser-induced dissociation. While the yield of the deprotonation of hydrocarbons has been successfully manipulated in recent studies, full control of the reaction would also require a directional control (that is, which C-H bond is broken). Here, we demonstrate steering of deprotonation from symmetric acetylene molecules on subfemtosecond timescales before the break-up of the molecular dication. On the basis of quantum mechanical calculations, the experimental results are interpreted in terms of a novel subfemtosecond control mechanism involving non-resonant excitation and superposition of vibrational degrees of freedom. This mechanism permits control over the directionality of chemical reactions via vibrational excitation on timescales defined by the subcycle evolution of the laser waveform.

  19. Parallel architectures for computing cyclic convolutions

    NASA Technical Reports Server (NTRS)

    Yeh, C.-S.; Reed, I. S.; Truong, T. K.

    1983-01-01

    In the paper two parallel architectural structures are developed to compute one-dimensional cyclic convolutions. The first structure is based on the Chinese remainder theorem and Kung's pipelined array. The second structure is a direct mapping from the mathematical definition of a cyclic convolution to a computational architecture. To compute a d-point cyclic convolution the first structure needs d/2 inner product cells, while the second structure and Kung's linear array require d cells. However, to compute a cyclic convolution, the second structure requires less time than both the first structure and Kung's linear array. Another application of the second structure is to multiply a Toeplitz matrix by a vector. A table is listed to compare these two structures and Kung's linear array. Both structures are simple and regular and are therefore suitable for VLSI implementation.
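
    The two structures compute the same arithmetic; for concreteness, the sketch below (Python/NumPy) evaluates a d-point cyclic convolution both directly from the definition (the mapping used by the second structure) and via the FFT, which is how one would check such hardware in software.

        import numpy as np

        def cyclic_conv_direct(a, b):
            # y[n] = sum_k a[k] * b[(n - k) mod d]
            d = len(a)
            return np.array([sum(a[k] * b[(n - k) % d] for k in range(d))
                             for n in range(d)])

        def cyclic_conv_fft(a, b):
            return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

        a = np.array([1.0, 2.0, 3.0, 4.0])
        b = np.array([0.5, 0.0, -1.0, 2.0])
        print(cyclic_conv_direct(a, b))
        print(cyclic_conv_fft(a, b))        # identical up to rounding

    Multiplying a circulant (and, with suitable embedding, a Toeplitz) matrix by a vector reduces to exactly this operation, which is the second application the abstract mentions.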

  20. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  1. Multipartite cellular automata and the superposition principle

    NASA Astrophysics Data System (ADS)

    Elze, Hans-Thomas

    2016-05-01

    Cellular automata (CA) can show well known features of quantum mechanics (QM), such as a linear updating rule that resembles a discretized form of the Schrödinger equation together with its conservation laws. Surprisingly, a whole class of “natural” Hamiltonian CA, which are based entirely on integer-valued variables and couplings and derived from an action principle, can be mapped reversibly to continuum models with the help of sampling theory. This results in “deformed” quantum mechanical models with a finite discreteness scale l, which for l→0 reproduce the familiar continuum limit. Presently, we show, in particular, how such automata can form “multipartite” systems consistently with the tensor product structures of non-relativistic many-body QM, while maintaining the linearity of dynamics. Consequently, the superposition principle is fully operative already on the level of these primordial discrete deterministic automata, including the essential quantum effects of interference and entanglement.

  2. Authentication Protocol using Quantum Superposition States

    SciTech Connect

    Kanamori, Yoshito; Yoo, Seong-Moo; Gregory, Don A.; Sheldon, Frederick T

    2009-01-01

    When it became known that quantum computers could break the RSA (named for its creators - Rivest, Shamir, and Adleman) encryption algorithm in polynomial time, quantum cryptography began to be actively studied. Other classical cryptographic algorithms are only secure when malicious users do not have sufficient computational power to break security within a practical amount of time. Recently, many quantum authentication protocols sharing quantum entangled particles between communicators have been proposed, providing unconditional security. An issue caused by sharing quantum entangled particles is that it may not be simple to apply these protocols to authenticate a specific user in a group of many users. An authentication protocol using quantum superposition states instead of quantum entangled particles is proposed. The random number shared between a sender and a receiver can be used for classical encryption after the authentication has succeeded. The proposed protocol can be implemented with the current technologies we introduce in this paper.

  3. On the superposition principle in interference experiments

    PubMed Central

    Sinha, Aninda; H. Vijay, Aravind; Sinha, Urbasi

    2015-01-01

    The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations as well as the non-relativistic Schrödinger equation. PMID:25973948
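
    As a complement to the abstract above, a minimal numeric illustration of the Sorkin parameter is sketched below: for a triple-slit experiment it vanishes identically whenever detection probabilities arise as squared moduli of summed path amplitudes. The amplitudes here are arbitrary made-up numbers, and none of the paper's FDTD or path-integral corrections are modelled.

    ```python
    import numpy as np

    # Hypothetical complex path amplitudes at one detector point for slits A, B, C.
    rng = np.random.default_rng(0)
    a, b, c = rng.normal(size=3) + 1j * rng.normal(size=3)

    def P(*amps):
        # Detection probability for the coherent sum of the given path amplitudes.
        return abs(sum(amps)) ** 2

    # Sorkin parameter for a three-slit experiment: identically zero whenever
    # the Born rule (squared sum of amplitudes) holds exactly.
    epsilon = P(a, b, c) - P(a, b) - P(a, c) - P(b, c) + P(a) + P(b) + P(c)
    print(epsilon)  # ~0, up to floating-point rounding
    ```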

  4. Superposition and alignment of labeled point clouds.

    PubMed

    Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke

    2011-01-01

    Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional Euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled point clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.
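
    The paper couples an optimal superposition with a label-aware, fuzzy alignment. The sketch below covers only the superposition step, for point sets whose one-to-one correspondence is already known, using the standard SVD-based (Kabsch) solution rather than the authors' evolutionary algorithm; `kabsch_superpose` is a name invented for this illustration.

    ```python
    import numpy as np

    def kabsch_superpose(P, Q):
        # Optimal rigid superposition of P onto Q (both n x 3, with rows
        # already in one-to-one correspondence); returns moved P and the RMSD.
        pc, qc = P.mean(axis=0), Q.mean(axis=0)
        H = (P - pc).T @ (Q - qc)                      # 3 x 3 covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        P_fit = (P - pc) @ R.T + qc
        return P_fit, np.sqrt(((P_fit - Q) ** 2).sum() / len(P))

    # Toy check: recover a random rigid motion of ten points.
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(10, 3))
    th = 0.7
    Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
    P = Q @ Rz.T + np.array([1.0, -2.0, 0.5])
    print(kabsch_superpose(P, Q)[1])                   # RMSD ~ 0
    ```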

  5. Multichannel Polarization-Controllable Superpositions of Orbital Angular Momentum States.

    PubMed

    Yue, Fuyong; Wen, Dandan; Zhang, Chunmei; Gerardot, Brian D; Wang, Wei; Zhang, Shuang; Chen, Xianzhong

    2017-04-01

    A facile metasurface approach is shown to realize polarization-controllable multichannel superpositions of orbital angular momentum (OAM) states with various topological charges. By manipulating the polarization state of the incident light, four kinds of superpositions of OAM states are realized using a single metasurface consisting of space-variant arrays of gold nanoantennas.

  6. Macroscopic superpositions and gravimetry with quantum magnetomechanics

    NASA Astrophysics Data System (ADS)

    Johnsson, Mattias T.; Brennen, Gavin K.; Twamley, Jason

    2016-11-01

    Precision measurements of gravity can provide tests of fundamental physics and are of broad practical interest for metrology. We propose a scheme for absolute gravimetry using a quantum magnetomechanical system consisting of a magnetically trapped superconducting resonator whose motion is controlled and measured by a nearby RF-SQUID or flux qubit. By driving the mechanical massive resonator into a macroscopic superposition of two different heights, we predict that our interferometry protocol could, subject to systematic errors, achieve a gravimetric sensitivity of Δg/g ~ 2.2 × 10^-10 Hz^-1/2, with a spatial resolution of a few nanometres. This sensitivity and spatial resolution exceed the precision of current state-of-the-art atom-interferometric and corner-cube gravimeters by more than an order of magnitude, and unlike classical superconducting interferometers the scheme produces an absolute rather than relative measurement of gravity. In addition, our scheme takes measurements at ~10 kHz, a region where the ambient vibrational noise spectrum is heavily suppressed compared to the ~10 Hz region relevant for current cold atom gravimeters.

  7. Macroscopic superpositions and gravimetry with quantum magnetomechanics

    PubMed Central

    Johnsson, Mattias T.; Brennen, Gavin K.; Twamley, Jason

    2016-01-01

    Precision measurements of gravity can provide tests of fundamental physics and are of broad practical interest for metrology. We propose a scheme for absolute gravimetry using a quantum magnetomechanical system consisting of a magnetically trapped superconducting resonator whose motion is controlled and measured by a nearby RF-SQUID or flux qubit. By driving the mechanical massive resonator into a macroscopic superposition of two different heights, we predict that our interferometry protocol could, subject to systematic errors, achieve a gravimetric sensitivity of Δg/g ~ 2.2 × 10^-10 Hz^-1/2, with a spatial resolution of a few nanometres. This sensitivity and spatial resolution exceed the precision of current state-of-the-art atom-interferometric and corner-cube gravimeters by more than an order of magnitude, and unlike classical superconducting interferometers the scheme produces an absolute rather than relative measurement of gravity. In addition, our scheme takes measurements at ~10 kHz, a region where the ambient vibrational noise spectrum is heavily suppressed compared to the ~10 Hz region relevant for current cold atom gravimeters. PMID:27869142

  8. Controlling coherent state superpositions with superconducting circuits

    NASA Astrophysics Data System (ADS)

    Vlastakis, Brian Michael

    Quantum computation requires a large yet controllable Hilbert space. While many implementations use discrete quantum variables such as the energy states of a two-level system to encode quantum information, continuous variables could allow access to a larger computational space while minimizing the amount of required hardware. With a toolset of conditional qubit-photon logic, we encode quantum information into the amplitude and phase of coherent state superpositions in a resonator, also known as Schrödinger cat states. We achieve this using a superconducting transmon qubit with a strong off-resonant coupling to a waveguide cavity. This dispersive interaction is much greater than decoherence rates and higher-order nonlinearities and therefore allows for simultaneous control of over one hundred photons. Furthermore, we combine this experiment with fast, high-fidelity qubit state readout to perform composite qubit-cavity state tomography and detect entanglement between a physical qubit and a cat-state encoded qubit. These results have promising applications for redundant encoding in a cavity state and ultimately quantum error correction with superconducting circuits.

  9. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  10. Cyclic Cocycles on Twisted Convolution Algebras

    NASA Astrophysics Data System (ADS)

    Angel, Eitan

    2013-01-01

    We give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. For proper étale groupoids, Tu and Xu (Adv Math 207(2):455-483, 2006) provide a map between the periodic cyclic cohomology of a gerbe-twisted convolution algebra and twisted cohomology groups which is similar to the construction of Mathai and Stevenson (Adv Math 200(2):303-335, 2006). When the groupoid is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial techniques to construct a simplicial curvature 3-form representing the class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial curvature 3-form to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.

  11. Molecular graph convolutions: moving beyond fingerprints

    NASA Astrophysics Data System (ADS)

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  12. Astronomical Image Subtraction by Cross-Convolution

    NASA Astrophysics Data System (ADS)

    Yuan, Fang; Akerlof, Carl W.

    2008-04-01

    In recent years, there has been a proliferation of wide-field sky surveys to search for a variety of transient objects. Using relatively short focal lengths, the optics of these systems produce undersampled stellar images often marred by a variety of aberrations. As participants in such activities, we have developed a new algorithm for image subtraction that no longer requires high-quality reference images for comparison. The computational efficiency is comparable with similar procedures currently in use. The general technique is cross-convolution: two convolution kernels are generated to make a test image and a reference image separately transform to match as closely as possible. In analogy to the optimization technique for generating smoothing splines, the inclusion of an rms width penalty term constrains the diffusion of stellar images. In addition, by evaluating the convolution kernels on uniformly spaced subimages across the total area, these routines can accommodate point-spread functions that vary considerably across the focal plane.
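
    The identity behind cross-convolution is that convolution commutes: a test and a reference image of the same scene, blurred by different PSFs, agree exactly once each is convolved with the other's kernel. The sketch below demonstrates that identity with known, made-up PSFs; the paper's algorithm additionally solves for the two kernels from the data, which this sketch does not attempt.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(2)
    sky = rng.random((64, 64))                     # hypothetical "true" scene
    psf_test = np.outer(np.hanning(7), np.hanning(7))
    psf_ref = np.outer(np.hanning(11), np.hanning(11))
    psf_test /= psf_test.sum()
    psf_ref /= psf_ref.sum()

    test = fftconvolve(sky, psf_test, mode="same")     # test image
    ref = fftconvolve(sky, psf_ref, mode="same")       # reference image

    # Cross-convolution: blurring each image with the *other* image's kernel
    # brings both to the same effective PSF, so their difference vanishes.
    diff = (fftconvolve(test, psf_ref, mode="same")
            - fftconvolve(ref, psf_test, mode="same"))
    print(np.abs(diff[16:-16, 16:-16]).max())          # ~1e-15 in the interior
    ```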

  13. Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution

    SciTech Connect

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K.

    1999-01-01

    The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
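
    A minimal sketch of the convolution step named in the title, assuming a homogeneous medium: a voxelized cumulated-activity map is convolved with an isotropic dose-point kernel via FFT. Both the activity map and the radially decaying kernel below are toy stand-ins, not I-131 data.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Hypothetical cumulated-activity map on a 64^3 voxel grid.
    activity = np.zeros((64, 64, 64))
    activity[24:40, 24:40, 24:40] = 1.0

    # Hypothetical isotropic dose-point kernel, softened at the origin.
    z, y, x = np.mgrid[-8:9, -8:9, -8:9]
    r = np.sqrt(x**2 + y**2 + z**2) + 0.5
    kernel = 1.0 / r**2
    kernel /= kernel.sum()

    # 3D-DFT convolution: valid when the medium is approximately homogeneous.
    dose = fftconvolve(activity, kernel, mode="same")
    print(dose.max())
    ```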

  14. Fast computation algorithm for the Rayleigh-Sommerfeld diffraction formula using a type of scaled convolution.

    PubMed

    Nascov, Victor; Logofătu, Petre Cătălin

    2009-08-01

    We describe a fast computational algorithm able to evaluate the Rayleigh-Sommerfeld diffraction formula, based on a special formulation of the convolution theorem and the fast Fourier transform. What is new in our approach compared to other algorithms is the use of a more general type of convolution with a scale parameter, which allows for independent sampling intervals in the input and output computation windows. Comparison between the calculations made using our algorithm and direct numeric integration show a very good agreement, while the computation speed is increased by orders of magnitude.

  15. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed, and sequential syndrome decoding is applied to these codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum weight error sequence.

  16. [Application of numerical convolution in in vivo/in vitro correlation research].

    PubMed

    Yue, Peng

    2009-01-01

    This paper introduces the concept and principle of in vivo/in vitro correlation (IVIVC) and of convolution/deconvolution methods, and elucidates in detail a convolution strategy and method for calculating the in vivo absorption performance of pharmaceutics from their pharmacokinetic data in Excel, then applies the results to IVIVC research. Firstly, the pharmacokinetic data were fitted with mathematical software to supplement missing data points. Secondly, the parameters of the optimal fitted input function were determined by a trial-and-error method according to the convolution principle in Excel, under the hypothesis that all input functions follow Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. In the examples, not only is the application of this method demonstrated in detail, but its simplicity and effectiveness are also shown by comparison with the compartment model method and the deconvolution method. It proved to be a powerful tool for IVIVC research.
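
    The convolution strategy described above translates directly from Excel to a few lines of Python: a Weibull-shaped input (absorption-rate) function is convolved with a unit impulse response to predict the concentration-time profile. All parameter values below are hypothetical.

    ```python
    import numpy as np

    dt = 0.05                                  # h, sampling interval
    t = np.arange(0, 24, dt)

    # Hypothetical Weibull absorption input (fraction of dose per hour).
    td, beta = 2.0, 1.5                        # Weibull scale and shape
    F = 1 - np.exp(-(t / td) ** beta)          # cumulative fraction absorbed
    r_in = np.gradient(F, dt)                  # input rate

    # Hypothetical unit impulse response: one-compartment disposition with
    # elimination rate kel = 0.3 /h and volume V = 10 L (unit IV bolus).
    kel, V = 0.3, 10.0
    uir = np.exp(-kel * t) / V

    # Convolution predicts the oral concentration-time profile.
    C = np.convolve(r_in, uir)[: len(t)] * dt
    print(t[C.argmax()], C.max())              # Tmax and Cmax of the prediction
    ```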

  17. Nonclassical properties and quantum resources of hierarchical photonic superposition states

    SciTech Connect

    Volkoff, T. J.

    2015-11-15

    We motivate and introduce a class of “hierarchical” quantum superposition states of N coupled quantum oscillators. Unlike other well-known multimode photonic Schrödinger-cat states such as entangled coherent states, the hierarchical superposition states are characterized as two-branch superpositions of tensor products of single-mode Schrödinger-cat states. In addition to analyzing the photon statistics and quasiprobability distributions of prominent examples of these nonclassical states, we consider their usefulness for high-precision quantum metrology of nonlinear optical Hamiltonians and quantify their mode entanglement. We propose two methods for generating hierarchical superpositions in N = 2 coupled microwave cavities, exploiting currently existing quantum optical technology for generating entanglement between spatially separated electromagnetic field modes.

  18. Quantum State Engineering Via Coherent-State Superpositions

    NASA Technical Reports Server (NTRS)

    Janszky, Jozsef; Adam, P.; Szabo, S.; Domokos, P.

    1996-01-01

    The quantum interference between the two parts of the optical Schrödinger-cat state makes it possible to construct a wide class of quantum states via discrete superpositions of coherent states. Even a small number of coherent states can approximate a given quantum state to high accuracy when the distance between the coherent states is optimized; e.g., a nearly perfect Fock state can be constructed by discrete superpositions of n + 1 coherent states lying in the vicinity of the vacuum state.

  19. Effectiveness of Convolutional Code in Multipath Underwater Acoustic Channel

    NASA Astrophysics Data System (ADS)

    Park, Jihyun; Seo, Chulwon; Park, Kyu-Chil; Yoon, Jong Rak

    2013-07-01

    Forward error correction (FEC) is achieved by increasing the redundancy of information. Convolutional coding with Viterbi decoding is a typical FEC technique for channels corrupted by additive white Gaussian noise, but the effectiveness of convolutional coding is questioned in multipath frequency-selective fading channels. In this paper, the performance of a convolutional code in an underwater multipath channel is examined. Bit error rates (BER) with and without a rate-1/2 convolutional code are analyzed as a function of channel bandwidth, which serves as a frequency-selectivity parameter. It is found that the convolutional code performs well in a non-selective channel and is also effective in a selective channel.
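
    For readers who want to reproduce the kind of BER comparison described above, the sketch below encodes with the standard constraint-length-3, rate-1/2 code (generators 7 and 5, octal) and decodes with hard-decision Viterbi over a binary symmetric channel. This is a stand-in channel model only; the paper's multipath, frequency-selective underwater channel is not simulated.

    ```python
    import numpy as np

    G = (0b111, 0b101)      # generators of the standard K = 3, rate-1/2 code (7, 5)

    def parity(x):
        return bin(x).count("1") & 1

    def conv_encode(bits):
        state, out = 0, []
        for b in bits:
            state = ((state << 1) | b) & 0b111      # shift register of last 3 bits
            out += [parity(state & g) for g in G]
        return out

    def viterbi_decode(rx, n_bits):
        INF = float("inf")
        metric = [0.0, INF, INF, INF]               # 4 states = last two input bits
        paths = [[], [], [], []]
        for i in range(n_bits):
            r = rx[2 * i: 2 * i + 2]
            new_metric, new_paths = [INF] * 4, [None] * 4
            for s in range(4):
                if metric[s] == INF:
                    continue
                for b in (0, 1):
                    reg = (s << 1) | b              # 3-bit register contents
                    expected = [parity(reg & g) for g in G]
                    m = metric[s] + sum(e != o for e, o in zip(expected, r))
                    ns = reg & 0b11                 # next state drops oldest bit
                    if m < new_metric[ns]:
                        new_metric[ns], new_paths[ns] = m, paths[s] + [b]
            metric, paths = new_metric, new_paths
        return paths[min(range(4), key=lambda s: metric[s])]

    rng = np.random.default_rng(3)
    bits = rng.integers(0, 2, 1000).tolist()
    rx = [b ^ int(rng.random() < 0.05) for b in conv_encode(bits)]   # BSC, p = 0.05
    decoded = viterbi_decode(rx, len(bits))
    print("coded BER:", np.mean(np.array(decoded) != np.array(bits)))  # well below 0.05
    ```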

  20. A Mathematical Motivation for Complex-Valued Convolutional Networks.

    PubMed

    Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur

    2016-05-01

    A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
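
    The three operations listed above are easy to state in code. The sketch below applies them in one dimension with windowed complex exponentials as filters, so the output approximates a windowed absolute spectrum, as the abstract describes; the filter frequencies and sizes are arbitrary choices.

    ```python
    import numpy as np

    def complex_convnet_layer(x, filters, pool=4):
        # One layer of a complex-valued convnet: (1) complex convolution,
        # (2) entrywise modulus, (3) non-overlapping local averaging.
        feats = []
        for h in filters:
            y = np.abs(np.convolve(x, h, mode="valid"))      # steps (1) and (2)
            n = len(y) // pool * pool
            feats.append(y[:n].reshape(-1, pool).mean(axis=1))  # step (3)
        return np.stack(feats)

    # Windowed complex exponentials as filters -> windowed absolute spectra.
    L = 32
    win = np.hanning(L)
    freqs = [0.05, 0.1, 0.2, 0.4]                            # cycles per sample
    filters = [win * np.exp(2j * np.pi * f * np.arange(L)) for f in freqs]

    x = np.random.default_rng(4).normal(size=512)            # real-valued input
    print(complex_convnet_layer(x, filters).shape)           # (4, 120)
    ```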

  1. A Superposition Technique for Deriving Photon Scattering Statistics in Plane-Parallel Cloudy Atmospheres

    NASA Technical Reports Server (NTRS)

    Platnick, S.

    1999-01-01

    Photon transport in a multiple scattering medium is critically dependent on scattering statistics, in particular the average number of scatterings. A superposition technique is derived to accurately determine the average number of scatterings encountered by reflected and transmitted photons within arbitrary layers in plane-parallel, vertically inhomogeneous clouds. As expected, the resulting scattering number profiles are highly dependent on cloud particle absorption and solar/viewing geometry. The technique uses efficient adding and doubling radiative transfer procedures, avoiding traditional time-intensive Monte Carlo methods. Derived superposition formulae are applied to a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Cloud remote sensing techniques that use solar reflectance or transmittance measurements generally assume a homogeneous plane-parallel cloud structure. The scales over which this assumption is relevant, in both the vertical and horizontal, can be obtained from the superposition calculations. Though the emphasis is on photon transport in clouds, the derived technique is applicable to any scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers in the atmosphere.

  2. Digital Correlation By Optical Convolution/Correlation

    NASA Astrophysics Data System (ADS)

    Trimble, Joel; Casasent, David; Psaltis, Demetri; Caimi, Frank; Carlotto, Mark; Neft, Deborah

    1980-12-01

    Attention is given to various methods by which the accuracy achievable and the dynamic range requirements of an optical computer can be enhanced. A new time position coding acousto-optic technique for optical residue arithmetic processing is presented and an experimental demonstration is included. Major attention is given to the implementation of a correlator operating on digital or decimal encoded signals. Using a convolution description of multiplication, we realize such a correlator by optical convolution in one dimension and optical correlation in the other dimension of an optical system. A coherent matched spatial filter system operating on digital encoded signals, a noncoherent processor operating on complex-valued digital-encoded data, and a real-time multi-channel acousto-optic system for such operations are described, and experimental verifications are included.

  3. Performance of convolutionally coded unbalanced QPSK systems

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Yuen, J. H.

    1980-01-01

    An evaluation is presented of the performance of three representative convolutionally coded unbalanced quadri-phase-shift-keying (UQPSK) systems in the presence of noisy carrier reference and crosstalk. The use of a coded UQPSK system for transmitting two telemetry data streams with different rates and different powers has been proposed for the Venus Orbiting Imaging Radar mission. Analytical expressions for bit error rates in the presence of a noisy carrier phase reference are derived for three representative cases: (1) I and Q channels are coded independently; (2) I channel is coded, Q channel is uncoded; and (3) I and Q channels are coded by a common rate 1/2 code. For rate 1/2 convolutional codes, QPSK modulation can be used to reduce the bandwidth requirement.

  4. Convoluted accommodation structures in folded rocks

    NASA Astrophysics Data System (ADS)

    Dodwell, T. J.; Hunt, G. W.

    2012-10-01

    A simplified variational model for the formation of convoluted accommodation structures, as seen in the hinge zones of larger-scale geological folds, is presented. The model encapsulates some important and intriguing nonlinear features, notably: infinite critical loads, formation of plastic hinges, and buckling on different length-scales. An inextensible elastic beam is forced by uniform overburden pressure and axial load into a V-shaped geometry dictated by formation of a plastic hinge. Using variational methods developed by Dodwell et al., upon which this paper leans heavily, energy minimisation leads to representation as a fourth-order nonlinear differential equation with free boundary conditions. Equilibrium solutions are found using numerical shooting techniques. Under the Maxwell stability criterion, it is recognised that global energy minimisers can exist with convoluted physical shapes. For such solutions, parallels can be drawn with some of the accommodation structures seen in exposed escarpments of real geological folds.

  5. A convolutional neural network neutrino event classifier

    DOE PAGES

    Aurisano, A.; Radovic, A.; Rocco, D.; ...

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  6. A convolutional neural network neutrino event classifier

    SciTech Connect

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.
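
    As a rough, non-authoritative illustration of the approach (not the actual CVN, which uses paired detector views and a GoogLeNet-inspired architecture), a minimal PyTorch classifier over single-channel calorimeter "images" might look like this; the class name, sizes, and number of classes are invented for the sketch.

    ```python
    import torch
    import torch.nn as nn

    class TinyEventCNN(nn.Module):
        # Toy stand-in for a CVN-style classifier: maps a single-channel
        # hit map to interaction-class scores.
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # One fake 64 x 64 hit map in, class scores out.
    scores = TinyEventCNN()(torch.randn(1, 1, 64, 64))
    print(scores.shape)  # torch.Size([1, 3])
    ```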

  7. A Construction of MDS Quantum Convolutional Codes

    NASA Astrophysics Data System (ADS)

    Zhang, Guanghui; Chen, Bocong; Li, Liangchen

    2015-09-01

    In this paper, two new families of MDS quantum convolutional codes are constructed. The first one can be regarded as a generalization of [36, Theorem 6.5], in the sense that we do not assume that q ≡ 1 (mod 4). More specifically, we obtain two classes of MDS quantum convolutional codes with parameters: (i) [(q^2 + 1, q^2 - 4i + 3, 1; 2, 2i + 2)]_q, where q ≥ 5 is an odd prime power and 2 ≤ i ≤ (q - 1)/2; (ii) , where q is an odd prime power of the form q = 10m + 3 or 10m + 7 (m ≥ 2), and 2 ≤ i ≤ 2m - 1.

  8. A convolutional neural network neutrino event classifier

    NASA Astrophysics Data System (ADS)

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  9. Deep Learning with Hierarchical Convolutional Factor Analysis

    PubMed Central

    Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence

    2013-01-01

    Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342

  10. Quantum superposition at the half-metre scale.

    PubMed

    Kovachy, T; Asenbaum, P; Overstreet, C; Donnelly, C A; Dickerson, S M; Sugarbaker, A; Hogan, J M; Kasevich, M A

    2015-12-24

    The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger's cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov-Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity.

  11. Modifying real convolutional codes for protecting digital filtering systems

    NASA Technical Reports Server (NTRS)

    Redinbo, G. R.; Zagar, Bernhard

    1993-01-01

    A novel method is proposed for protecting digital filters from temporary and permanent failures that are not easily detected by conventional fault-tolerant computer design principles, on the basis of the error-detecting properties of real convolutional codes. Erroneous behavior is detected by externally comparing the calculated and regenerated parity samples. Great simplifications are obtainable by modifying the code structure to yield simplified parity channels with finite impulse response structures. A matrix equation involving the original parity values of the code and the polynomial of the digital filter's transfer function is formed, and row manipulations separate this equation into a set of homogeneous equations constraining the modifying scaling coefficients and another set which defines the code parity values' implementation.

  12. Robust mesoscopic superposition of strongly correlated ultracold atoms

    SciTech Connect

    Hallwood, David W.; Ernst, Thomas; Brand, Joachim

    2010-12-15

    We propose a scheme to create coherent superpositions of annular flow of strongly interacting bosonic atoms in a one-dimensional ring trap. The nonrotating ground state is coupled to a vortex state with mesoscopic angular momentum by means of a narrow potential barrier and an applied phase that originates from either rotation or a synthetic magnetic field. We show that superposition states in the Tonks-Girardeau regime are robust against single-particle loss due to the effects of strong correlations. The coupling between the mesoscopically distinct states scales much more favorably with particle number than in schemes relying on weak interactions, thus making particle numbers of hundreds or thousands feasible. Coherent oscillations induced by time variation of parameters may serve as a 'smoking gun' signature for detecting superposition states.

  13. Optimal control of quantum superpositions in a bosonic Josephson junction

    NASA Astrophysics Data System (ADS)

    Lapert, M.; Ferrini, G.; Sugny, D.

    2012-02-01

    We show how to optimally control the creation of quantum superpositions in a bosonic Josephson junction within the two-site Bose-Hubbard-model framework. Both geometric and purely numerical optimal-control approaches are used, the former providing a generalization of the proposal of Micheli et al. [Phys. Rev. A 67, 013607 (2003)]. While this method is shown not to lead to significant improvements in terms of time of formation and fidelity of the superposition, a numerical optimal-control approach appears more promising, as it allows creation of an almost perfect superposition within a time short compared to other existing protocols. We analyze the robustness of the optimal solution against atom-number variations. Finally, we discuss the extent to which these optimal solutions could be implemented with state-of-the-art technology.

  14. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  15. Dimensional limits for arthropod eyes with superposition optics.

    PubMed

    Meyer-Rochow, Victor Benno; Gál, József

    2004-01-01

    An essential feature of the superposition type of compound eye is the presence of a wide zone, which is transparent and devoid of pigment and interposed between the distal array of dioptric elements and the proximally placed photoreceptive layer. Parallel rays, collected by many lenses, must (through reflection or refraction) cross this transparent clear-zone in such a way that they become focused on one receptor. Superposition depends mostly on diameter and curvature of the cornea, size and shape of the crystalline cone, lens cylinder properties of cornea and cone, dimensions of the receptor cells, and width of the clear-zone. We examined the role of the latter by geometrical, geometric-optical, and anatomical measurements and concluded that a minimal size exists, below which effective superposition can no longer occur. For an eye of a given size, it is not possible to increase the width of the clear-zone cz = d_cz/R_1 and decrease R_2 (i.e., the radius of curvature of the distal retinal surface) and/or c = d_c/R_1 without reaching a limit. In these equations, cz is the width of the clear-zone d_cz relative to the radius R_1 of the eye, and c is the length of the cornea-cone unit relative to R_1. Our results provide one explanation as to why apposition eyes exist in very small scarabaeid beetles, even though the taxon Scarabaeoidea is generally characterized by the presence of superposition eyes. The results may also answer the puzzle of why juveniles or the young of species whose adults possess superposition (clear-zone) eyes frequently bear eyes that contain no clear zone and instead resemble apposition eyes. The eyes of young and immature specimens may simply be too small to permit superposition to occur.

  16. Superposition of helical beams by using a Michelson interferometer.

    PubMed

    Gao, Chunqing; Qi, Xiaoqing; Liu, Yidong; Weber, Horst

    2010-01-04

    Orbital angular momentum (OAM) of a helical beam is of great interest in high-density optical communication due to its infinite number of eigen-states. In this paper, an experimental setup is realized for encoding and decoding information on OAM eigen-states. A hologram designed by the iterative method is used to generate the helical beams, and a Michelson interferometer with two Porro prisms is used for the superposition of two helical beams. The experimental results of the collinear superposition of helical beams and the detection of their OAM eigen-states are presented.
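
    The superposition itself is easy to visualize numerically: adding two helical phase factors exp(i·l1·φ) and exp(i·l2·φ) produces |l1 - l2| azimuthal petals in the intensity. The envelope below is an arbitrary ring-shaped stand-in, not a model of the hologram or interferometer used in the paper.

    ```python
    import numpy as np

    # Superpose two helical (OAM) beams with topological charges l1 and l2.
    N, l1, l2 = 256, 2, -2
    y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
    r, phi = np.hypot(x, y), np.arctan2(y, x)
    envelope = r * np.exp(-r**2 / 0.2)               # simple ring-shaped envelope
    field = envelope * (np.exp(1j * l1 * phi) + np.exp(1j * l2 * phi))
    intensity = np.abs(field) ** 2                   # 4 petals for l1=2, l2=-2
    print(intensity.shape)
    ```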

  17. A fast complex integer convolution using a hybrid transform

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; K Truong, T.

    1978-01-01

    It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q-squared) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.

  18. Human Parsing with Contextualized Convolutional Neural Network.

    PubMed

    Liang, Xiaodan; Xu, Chunyan; Shen, Xiaohui; Yang, Jianchao; Tang, Jinhui; Lin, Liang; Yan, Shuicheng

    2016-03-02

    In this work, we address the human parsing task with a novel Contextualized Convolutional Neural Network (Co-CNN) architecture, which well integrates the cross-layer context, global image-level context, semantic edge context, within-super-pixel context and cross-super-pixel neighborhood context into a unified network. Given an input human image, Co-CNN produces the pixel-wise categorization in an end-to-end way. First, the cross-layer context is captured by our basic local-to-global-to-local structure, which hierarchically combines the global semantic information and the local fine details across different convolutional layers. Second, the global image-level label prediction is used as an auxiliary objective in the intermediate layer of the Co-CNN, and its outputs are further used for guiding the feature learning in subsequent convolutional layers to leverage the global image-level context. Third, semantic edge context is further incorporated into Co-CNN, where the high-level semantic boundaries are leveraged to guide pixel-wise labeling. Finally, to further utilize the local super-pixel contexts, the within-super-pixel smoothing and cross-super-pixel neighbourhood voting are formulated as natural sub-components of the Co-CNN to achieve the local label consistency in both training and testing process. Comprehensive evaluations on two public datasets well demonstrate the significant superiority of our Co-CNN over other state-of-the-art methods for human parsing. In particular, the F-1 score on the large dataset [1] reaches 81.72% by Co-CNN, significantly higher than 62.81% and 64.38% by the state-of-the-art algorithms, MCNN [2] and ATR [1], respectively. By utilizing our newly collected large dataset for training, our Co-CNN can achieve 85.36% in F-1 score.

  19. Applications of convolution voltammetry in electroanalytical chemistry.

    PubMed

    Bentley, Cameron L; Bond, Alan M; Hollenkamp, Anthony F; Mahon, Peter J; Zhang, Jie

    2014-02-18

    The robustness of convolution voltammetry for determining accurate values of the diffusivity (D), bulk concentration (C^b), and stoichiometric number of electrons (n) has been demonstrated by applying the technique to a series of electrode reactions in molecular solvents and room temperature ionic liquids (RTILs). In acetonitrile, the relatively minor contribution of nonfaradaic current facilitates analysis with macrodisk electrodes, thus moderate scan rates can be used without the need to perform background subtraction to quantify the diffusivity of iodide [D = 1.75 (±0.02) × 10^-5 cm^2 s^-1] in this solvent. In the RTIL 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide, background subtraction is necessary at a macrodisk electrode but can be avoided at a microdisk electrode, thereby simplifying the analytical procedure and allowing the diffusivity of iodide [D = 2.70 (±0.03) × 10^-7 cm^2 s^-1] to be quantified. Use of a convolutive procedure which simultaneously allows D and nC^b values to be determined is also demonstrated. Three conditions under which a technique of this kind may be applied are explored and are related to electroactive species which display slow dissolution kinetics, undergo a single multielectron transfer step, or contain multiple noninteracting redox centers using ferrocene in an RTIL, 1,4-dinitro-2,3,5,6-tetramethylbenzene, and an alkynylruthenium trimer, respectively, as examples. The results highlight the advantages of convolution voltammetry over steady-state techniques such as rotating disk electrode voltammetry and microdisk electrode voltammetry, as it is not restricted by the mode of diffusion (planar or radial), hence removing limitations on solvent viscosity, electrode geometry, and voltammetric scan rate.
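
    Convolution voltammetry rests on the semi-integral M(t) = (1/√π) ∫₀ᵗ I(u)(t - u)^(-1/2) du, whose limiting plateau encodes nC^b√D. Below is a minimal midpoint-rule sketch of that transform, checked against the analytic result for a Cottrell-type current; `semi_integrate` is an invented name and no electrochemical constants are modelled.

    ```python
    import numpy as np

    def semi_integrate(current, dt):
        # Discrete convolution of I(t) with 1/sqrt(pi*t), i.e. the
        # semi-integral M(t); midpoint times avoid the t = 0 singularity.
        tk = (np.arange(len(current)) + 0.5) * dt
        kernel = 1.0 / np.sqrt(np.pi * tk)
        return np.convolve(current, kernel)[: len(current)] * dt

    # Toy Cottrell-type current I(t) = k/sqrt(t); its exact semi-integral is
    # the constant k*sqrt(pi), mimicking the limiting plateau used in analysis.
    dt, k = 1e-3, 2.0
    t = (np.arange(2000) + 0.5) * dt
    M = semi_integrate(k / np.sqrt(t), dt)
    print(M[-1], k * np.sqrt(np.pi))   # both close to 3.545
    ```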

  20. Bacterial colony counting by Convolutional Neural Networks.

    PubMed

    Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto

    2015-01-01

    Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, nevertheless fundamental task in microbiology. Computer vision based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging, due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNN) for counting the number of colonies contained in confluent agglomerates, which scored an overall accuracy of 92.8% on a large challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches, making it a promising approach for many related applications.

  1. Zebrafish tracking using convolutional neural networks

    PubMed Central

    XU, Zhiping; Cheng, Xi En

    2017-01-01

    Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy for our classification task, achieving surprisingly accurate trajectories for zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable. PMID:28211462

  2. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.

  3. Zebrafish tracking using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Xu, Zhiping; Cheng, Xi En

    2017-02-01

    Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy for our classification task, achieving surprisingly accurate trajectories for zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable.

  4. Measuring orbital angular momentum superpositions of light by mode transformation.

    PubMed

    Berkhout, Gregorius C G; Lavery, Martin P J; Padgett, Miles J; Beijersbergen, Marco W

    2011-05-15

    We recently reported on a method for measuring orbital angular momentum (OAM) states of light based on the transformation of helically phased beams to tilted plane waves [Phys. Rev. Lett. 105, 153601 (2010)]. Here we consider the performance of such a system for superpositions of OAM states by measuring the modal content of noninteger OAM states and beams produced by a Heaviside phase plate.

  5. Real-time feedback control of a mesoscopic superposition

    SciTech Connect

    Jacobs, Kurt; Finn, Justin; Vinjanampathy, Sai

    2011-04-15

    We show that continuous real-time feedback can be used to track, control, and protect a mesoscopic superposition of two spatially separated wave packets. The feedback protocol is enabled by an approximate state estimator and requires two continuous measurements, performed simultaneously. For nanomechanical and superconducting resonators, both measurements can be implemented by coupling the resonators to superconducting qubits.

  6. Convolutional fountain distribution over fading wireless channels

    NASA Astrophysics Data System (ADS)

    Usman, Mohammed

    2012-08-01

    Mobile broadband has opened the possibility of a rich variety of services to end users. Broadcast/multicast of multimedia data is one such service which can be used to deliver multimedia to multiple users economically. However, the radio channel poses serious challenges due to its time-varying properties, resulting in each user experiencing different channel characteristics, independent of other users. Conventional methods of achieving reliability in communication, such as automatic repeat request and forward error correction do not scale well in a broadcast/multicast scenario over radio channels. Fountain codes, being rateless and information additive, overcome these problems. Although the design of fountain codes makes it possible to generate an infinite sequence of encoded symbols, the erroneous nature of radio channels mandates the need for protecting the fountain-encoded symbols, so that the transmission is feasible. In this article, the performance of fountain codes in combination with convolutional codes, when used over radio channels, is presented. An investigation of various parameters, such as goodput, delay and buffer size requirements, pertaining to the performance of fountain codes in a multimedia broadcast/multicast environment is presented. Finally, a strategy for the use of 'convolutional fountain' over radio channels is also presented.

  7. NUCLEI SEGMENTATION VIA SPARSITY CONSTRAINED CONVOLUTIONAL REGRESSION

    PubMed Central

    Zhou, Yin; Chang, Hang; Barner, Kenneth E.; Parvin, Bahram

    2017-01-01

    Automated profiling of nuclear architecture, in histology sections, can potentially help predict the clinical outcomes. However, the task is challenging as a result of nuclear pleomorphism and cellular states (e.g., cell fate, cell cycle), which are compounded by the batch effect (e.g., variations in fixation and staining). Present methods, for nuclear segmentation, are based on human-designed features that may not effectively capture intrinsic nuclear architecture. In this paper, we propose a novel approach, called sparsity constrained convolutional regression (SCCR), for nuclei segmentation. Specifically, given raw image patches and the corresponding annotated binary masks, our algorithm jointly learns a bank of convolutional filters and a sparse linear regressor, where the former is used for feature extraction, and the latter aims to produce a likelihood for each pixel being nuclear region or background. During classification, the pixel label is simply determined by a thresholding operation applied on the likelihood map. The method has been evaluated using the benchmark dataset collected from The Cancer Genome Atlas (TCGA). Experimental results demonstrate that our method outperforms traditional nuclei segmentation algorithms and is able to achieve competitive performance compared to the state-of-the-art algorithm built upon human-designed features with biological prior knowledge. PMID:28101301

  8. Convolution Inequalities for the Boltzmann Collision Operator

    NASA Astrophysics Data System (ADS)

    Alonso, Ricardo J.; Carneiro, Emanuel; Gamba, Irene M.

    2010-09-01

    We study integrability properties of a general version of the Boltzmann collision operator for hard and soft potentials in n dimensions. A reformulation of the collisional integrals allows us to write the weak form of the collision operator as a weighted convolution, where the weight is given by an operator invariant under rotations. Using a symmetrization technique in L^p we prove a Young's inequality for hard potentials, which is sharp for Maxwell molecules in the L^2 case. Further, we find a new Hardy-Littlewood-Sobolev type of inequality for Boltzmann collision integrals with soft potentials. The same method extends to radially symmetric, non-increasing potentials that lie in some weak L^s or L^s space. The method we use resembles a Brascamp, Lieb and Luttinger approach for multilinear weighted convolution inequalities and follows a weak formulation setting. Consequently, it is closely connected to the classical analysis of Young and Hardy-Littlewood-Sobolev inequalities. In all cases, the inequality constants are explicitly given by formulas depending on integrability conditions of the angular cross section (in the spirit of Grad cut-off). As an additional application of the technique we also obtain estimates with exponential weights for hard potentials in both conservative and dissipative interactions.
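
    The classical inequality underlying the paper is easy to check numerically in its discrete form: on the integers, ||f * g||_r <= ||f||_p ||g||_q whenever 1/p + 1/q = 1 + 1/r. The sketch below verifies one instance; it does not touch the weighted, collisional version that is the paper's actual subject.

    ```python
    import numpy as np

    # Discrete analogue of Young's convolution inequality on the integers.
    rng = np.random.default_rng(5)
    f, g = rng.random(200), rng.random(300)
    p, q = 1.5, 1.5
    r = 1.0 / (1.0 / p + 1.0 / q - 1.0)                  # here r = 3
    lhs = (np.abs(np.convolve(f, g)) ** r).sum() ** (1 / r)
    rhs = (f ** p).sum() ** (1 / p) * (g ** q).sum() ** (1 / q)
    print(lhs <= rhs, round(lhs, 2), round(rhs, 2))      # True, lhs < rhs
    ```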

  9. Enhanced interference-pattern visibility using multislit optical superposition method for imaging-type two-dimensional Fourier spectroscopy.

    PubMed

    Qi, Wei; Suzuki, Yo; Sato, Shun; Fujiwara, Masaru; Kawashima, Natsumi; Suzuki, Satoru; Abeygunawardhana, Pradeep; Wada, Kenji; Nishiyama, Akira; Ishimaru, Ichiro

    2015-07-10

    A solution is found for the problem of phase cancellation between adjacent bright points in wavefront-division phase-shift interferometry. To this end, a design is proposed that optimizes the visibility of the interference pattern from multiple slits. The method is explained in terms of Fraunhofer diffraction and convolution imaging. Optical simulations verify the technique. The final design can be calculated using a simple equation.

  10. Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Hunter, Craig A.

    1999-01-01

    An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature, and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios (NPRs) and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring depends on convolution location, Mach number, boattail angle, and NPR. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was only minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.

  11. New quantum MDS-convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Li, Fengwei; Yue, Qin

    2015-12-01

    In this paper, we utilize a family of Hermitian dual-containing constacyclic codes to construct classical and quantum MDS convolutional codes. Our classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound.

  12. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons.

    PubMed

    Sanchez-Parcerisa, D; Cortés-Giraldo, M A; Dolney, D; Kondrla, M; Fager, M; Carabe, A

    2016-02-21

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm^-1) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.
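
    Once per-beamlet dose and LET distributions are available, the standard dose-averaged combination is a one-liner: LET_d = Σ_i D_i·L_i / Σ_i D_i. The sketch below applies it to two invented 1D pencil-beam profiles; it is the generic combination step only, not the FoCa kernel algorithm itself.

    ```python
    import numpy as np

    def dose_averaged_let(doses, lets):
        # Dose-averaged LET per voxel from per-beamlet dose and LET maps:
        # LET_d = sum_i(D_i * L_i) / sum_i(D_i), with zero where no dose.
        doses, lets = np.asarray(doses), np.asarray(lets)
        total = doses.sum(axis=0)
        with np.errstate(invalid="ignore", divide="ignore"):
            let_d = (doses * lets).sum(axis=0) / total
        return np.where(total > 0, let_d, 0.0)

    # Two hypothetical pencil beams on a 1D depth axis (toy numbers).
    depth = np.linspace(0, 10, 101)
    d1 = np.exp(-0.5 * ((depth - 6.0) / 1.0) ** 2)   # toy depth-dose curves
    d2 = np.exp(-0.5 * ((depth - 7.0) / 0.8) ** 2)
    l1 = 1.0 + 0.5 * depth                           # toy LET rising with depth
    l2 = 1.2 + 0.6 * depth
    print(dose_averaged_let([d1, d2], [l1, l2]).max())
    ```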

  13. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons

    NASA Astrophysics Data System (ADS)

    Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.

    2016-02-01

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm^-1) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  14. A filtering approach based on Gaussian-powerlaw convolutions for local PET verification of proton radiotherapy.

    PubMed

    Parodi, Katia; Bortfeld, Thomas

    2006-04-21

    Because proton beams activate positron emitters in patients, positron emission tomography (PET) has the potential to play a unique role in the in vivo verification of proton radiotherapy. Unfortunately, the PET image is not directly proportional to the delivered radiation dose distribution. Current treatment verification strategies using PET therefore compare the actual PET image with full-blown Monte Carlo simulations of the PET signal. In this paper, we describe a simpler and more direct way to reconstruct the expected PET signal from the local radiation dose distribution near the distal fall-off region, which is calculated by the treatment planning programme. Under reasonable assumptions, the PET image can be described as a convolution of the dose distribution with a filter function. We develop a formalism to derive the filter function analytically. The main concept is the introduction of 'Q' functions defined as the convolution of a Gaussian with a powerlaw function. Special Q functions are the Gaussian itself and the error function. The convolution of two Q functions is another Q function. By fitting elementary dose distributions and their corresponding PET signals with Q functions, we derive the Q function approximation of the filter. The new filtering method has been validated through comparisons with Monte Carlo calculations and, in one case, with measured data. While the basic concept is developed under idealized conditions assuming that the absorbing medium is homogeneous near the distal fall-off region, a generalization to inhomogeneous situations is also described. As a result, the method can determine the distal fall-off region of the PET signal, and consequently the range of the proton beam, with millimetre accuracy. Quantification of the produced activity is possible. In conclusion, the PET activity resulting from a proton beam treatment can be determined by locally filtering the dose distribution as obtained from the treatment planning system.
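
    A minimal numerical sketch of the filtering idea: near the distal fall-off, the expected PET profile is modeled as the planned depth-dose convolved with a filter function. The toy depth-dose and the Gaussian filter below are placeholders; the paper derives the true filter analytically from Q functions (convolutions of a Gaussian with power laws):

    ```python
    import numpy as np

    z = np.linspace(0.0, 20.0, 400)                   # depth [cm]
    dose = np.where(z < 15.0, 1.0 + 0.03 * z, 0.0)    # crude plateau with a sharp fall-off
    dz = z[1] - z[0]

    sigma = 0.4                                       # placeholder filter width [cm]
    t = np.arange(-5 * sigma, 5 * sigma + dz, dz)
    filt = np.exp(-0.5 * (t / sigma) ** 2)
    filt /= filt.sum()                                # normalized filter function

    pet = np.convolve(dose, filt, mode="same")        # expected PET-like profile
    ```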

  15. relline: Relativistic line profiles calculation

    NASA Astrophysics Data System (ADS)

    Dauser, Thomas

    2015-05-01

    relline calculates relativistic line profiles; it is compatible with the common X-ray data analysis software XSPEC (ascl:9910.005) and ISIS (ascl:1302.002). The two basic forms are an additive line model (RELLINE) and a convolution model to calculate relativistic smearing (RELCONV).

  16. Convolutional neural network for pottery retrieval

    NASA Astrophysics Data System (ADS)

    Benhabiles, Halim; Tabia, Hedi

    2017-01-01

    The effectiveness of the convolutional neural network (CNN) has already been demonstrated in many challenging tasks of computer vision, such as image retrieval, action recognition, and object classification. This paper specifically exploits CNN to design local descriptors for content-based retrieval of complete or nearly complete three-dimensional (3-D) vessel replicas. Based on vector quantization, the designed descriptors are clustered to form a shape vocabulary. Then, each 3-D object is associated with a set of clusters (words) in that vocabulary. Finally, a weighted vector counting the occurrences of every word is computed. The reported experimental results on the 3-D pottery benchmark show the superior performance of the proposed method.
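
    For illustration, the vocabulary-building step described above amounts to vector-quantizing local descriptors and counting word occurrences. A hedged sketch with stand-in random descriptors; in the real pipeline the descriptors would come from the CNN, and all names and sizes here are assumptions:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    descriptors = rng.normal(size=(5000, 128))    # stand-in for CNN local descriptors
    vocab = KMeans(n_clusters=64, n_init=4, random_state=0).fit(descriptors)

    def object_signature(obj_descriptors, idf=None):
        # Quantize each descriptor to its nearest vocabulary word,
        # then build a (optionally idf-weighted) occurrence histogram.
        words = vocab.predict(obj_descriptors)
        hist = np.bincount(words, minlength=64).astype(float)
        hist /= hist.sum()
        return hist if idf is None else hist * idf

    query = object_signature(rng.normal(size=(300, 128)))
    ```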

  17. Robust smile detection using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Celona, Luigi; Schettini, Raimondo

    2016-11-01

    We present a fully automated approach for smile detection. Faces are detected using a multiview face detector and aligned and scaled using automatically detected eye locations. Then, we use a convolutional neural network (CNN) to determine whether it is a smiling face or not. To this end, we investigate different shallow CNN architectures that can be trained even when the amount of learning data is limited. We evaluate our complete processing pipeline on the largest publicly available image database for smile detection in an uncontrolled scenario. We investigate the robustness of the method to different kinds of geometric transformations (rotation, translation, and scaling) due to imprecise face localization, and to several kinds of distortions (compression, noise, and blur). To the best of our knowledge, this is the first time that this type of investigation has been performed for smile detection. Experimental results show that our proposal outperforms state-of-the-art methods on both high- and low-quality images.

  18. The Developmental Rules of Neural Superposition in Drosophila.

    PubMed

    Langen, Marion; Agi, Egemen; Altschuler, Dylan J; Wu, Lani F; Altschuler, Steven J; Hiesinger, Peter Robin

    2015-07-02

    Complicated neuronal circuits can be genetically encoded, but the underlying developmental algorithms remain largely unknown. Here, we describe a developmental algorithm for the specification of synaptic partner cells through axonal sorting in the Drosophila visual map. Our approach combines intravital imaging of growth cone dynamics in developing brains of intact pupae and data-driven computational modeling. These analyses suggest that three simple rules are sufficient to generate the seemingly complex neural superposition wiring of the fly visual map without an elaborate molecular matchmaking code. Our computational model explains robust and precise wiring in a crowded brain region despite extensive growth cone overlaps and provides a framework for matching molecular mechanisms with the rules they execute. Finally, ordered geometric axon terminal arrangements that are not required for neural superposition are a side product of the developmental algorithm, thus elucidating neural circuit connectivity that remained unexplained based on adult structure and function alone.

  19. Nonclassicality tests and entanglement witnesses for macroscopic mechanical superposition states

    NASA Astrophysics Data System (ADS)

    Gittsovich, Oleg; Moroder, Tobias; Asadian, Ali; Gühne, Otfried; Rabl, Peter

    2015-02-01

    We describe a set of measurement protocols for performing nonclassicality tests and the verification of entangled superposition states of macroscopic continuous variable systems, such as nanomechanical resonators. Following earlier works, we first consider a setup where a two-level system is used to indirectly probe the motion of the mechanical system via Ramsey measurements and discuss the application of this method for detecting nonclassical mechanical states. We then show that the generalization of this technique to multiple resonator modes allows the conditioned preparation and the detection of entangled mechanical superposition states. The proposed measurement protocols can be implemented in various qubit-resonator systems that are currently under experimental investigation and find applications in future tests of quantum mechanics at a macroscopic scale.

  20. Macroscopic superposition of ultracold atoms with orbital degrees of freedom

    SciTech Connect

    Garcia-March, M. A.; Carr, L. D.; Dounas-Frazer, D. R.

    2011-04-15

    We introduce higher dimensions into the problem of Bose-Einstein condensates in a double-well potential, taking into account orbital angular momentum. We completely characterize the eigenstates of this system, delineating new regimes via both analytical high-order perturbation theory and numerical exact diagonalization. Among these regimes are mixed Josephson- and Fock-like behavior, crossings in both excited and ground states, and shadows of macroscopic superposition states.

  1. Measurement-Induced Macroscopic Superposition States in Cavity Optomechanics

    NASA Astrophysics Data System (ADS)

    Hoff, Ulrich B.; Kollath-Bönig, Johann; Neergaard-Nielsen, Jonas S.; Andersen, Ulrik L.

    2016-09-01

    A novel protocol for generating quantum superpositions of macroscopically distinct states of a bulk mechanical oscillator is proposed, compatible with existing optomechanical devices operating in the bad-cavity limit. By combining a pulsed optomechanical quantum nondemolition (QND) interaction with nonclassical optical resources and measurement-induced feedback, the need for strong single-photon coupling is avoided. We outline a three-pulse sequence of QND interactions encompassing squeezing-enhanced cooling by measurement, state preparation, and tomography.

  2. Sensing Super-position: Visual Instrument Sensor Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2006-01-01

    The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates the output of visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for particularly difficult sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements. It can be further developed into a cheap, portable, low-power device that takes into account the limited capabilities of the human user as well as the typical characteristics of a dynamic environment. The system operates in real time, delivering the information needed for the particular augmented sensing task. The Sensing Super-position device increases perceived image resolution via an auditory representation alongside the visual representation. Auditory mapping is performed to distribute an image in time: the three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment human vision by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns; this known capability provided the basic motivation for the present development.

  3. Single-Atom Gating of Quantum State Superpositions

    SciTech Connect

    Moon, Christopher

    2010-04-28

    The ultimate miniaturization of electronic devices will likely require local and coherent control of single electronic wavefunctions. Wavefunctions exist within both physical real space and an abstract state space with a simple geometric interpretation: this state space - or Hilbert space - is spanned by mutually orthogonal state vectors corresponding to the quantized degrees of freedom of the real-space system. Measurement of superpositions is akin to accessing the direction of a vector in Hilbert space, determining an angle of rotation equivalent to quantum phase. Here we show that an individual atom inside a designed quantum corral can control this angle, producing arbitrary coherent superpositions of spatial quantum states. Using scanning tunnelling microscopy and nanostructures assembled atom-by-atom we demonstrate how single spins and quantum mirages can be harnessed to image the superposition of two electronic states. We also present a straightforward method to determine the atom path enacting phase rotations between any desired state vectors. A single atom thus becomes a real-space handle for an abstract Hilbert space, providing a simple technique for coherent quantum state manipulation at the spatial limit of condensed matter.

  4. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  5. SU-E-T-355: Efficient Scatter Correction for Direct Ray-Tracing Based Dose Calculation

    SciTech Connect

    Chen, M; Jiang, S; Lu, W

    2015-06-15

    Purpose: To propose a scatter correction method with linear computational complexity for direct-ray-tracing (DRT) based dose calculation. Due to its speed and simplicity, DRT is widely used as a dose engine in treatment planning systems (TPS) and monitor unit (MU) verification software, where heterogeneity correction is applied by radiological distance scaling. However, such correction only accounts for attenuation but not scatter difference, making the DRT algorithm less accurate than model-based algorithms for small field sizes in heterogeneous media. Methods: Inspired by the convolution formula derived from an exponential kernel, as is typically done in the collapsed-cone-convolution-superposition (CCCS) method, we redesigned the ray tracing component as the sum of TERMA scaled by a local deposition factor, which is linear with respect to density, and the dose of the previous voxel scaled by a remote deposition factor: D(i) = a ρ(i) T(i) + (b + c(ρ(i) − 1)) D(i−1), where T(i) = exp(−α r(i) + β r(i)²) and r(i) = Σ_{j=1,…,i} ρ(j). The two factors together with TERMA can be expressed in terms of five parameters, which are subsequently optimized by curve fitting using digital phantoms for each field size and each beam energy. Results: The proposed algorithm was implemented for the Fluence-Convolution-Broad-Beam (FCBB) dose engine and evaluated using digital slab phantoms and clinical CT data. Compared with the gold standard calculation, dose deviations were improved from 20% to 2% in the low-density regions of the slab phantoms for the 1-cm field size, and were within 2% for over 95% of the volume, with the largest discrepancy at the interface, for the clinical lung case. Conclusion: We developed a simple recursive formula for scatter correction for DRT-based dose calculation with much improved accuracy, especially for small field sizes, while keeping the calculation at linear complexity. The proposed calculator is fast yet accurate, which is crucial for dose updating in IMRT.
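
    The recursion itself is easy to state in code. Below is a direct, hedged transcription of the formula above, with placeholder parameter values standing in for the five fitted constants (the actual values are obtained by curve fitting per field size and beam energy):

    ```python
    import numpy as np

    def drt_dose(rho, a=0.05, b=0.9, c=0.05, alpha=0.05, beta=1e-4):
        # D(i) = a*rho(i)*T(i) + (b + c*(rho(i)-1))*D(i-1), with
        # T(i) = exp(-alpha*r(i) + beta*r(i)^2) and r(i) = sum_{j<=i} rho(j).
        r = np.cumsum(rho)                     # radiological depth along the ray
        T = np.exp(-alpha * r + beta * r**2)   # TERMA-like attenuation term
        D = np.zeros_like(rho, dtype=float)
        prev = 0.0
        for i in range(len(rho)):              # single pass: linear complexity
            prev = a * rho[i] * T[i] + (b + c * (rho[i] - 1.0)) * prev
            D[i] = prev
        return D

    dose = drt_dose(np.ones(100))              # water-equivalent ray of 100 voxels
    ```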

  6. Programmable convolution via the chirp Z-transform with CCD's

    NASA Technical Reports Server (NTRS)

    Buss, D. D.

    1977-01-01

    Filtering by convolution in the frequency domain rather than in the time domain presents a possible solution to the problem of programmable transversal filters. The process is accomplished through utilization of the chirp Z-transform (CZT) with charge-coupled devices.
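
    The underlying identity is that a DFT can be recast as a chirp-modulated convolution, which is what allows convolution hardware such as CCDs to evaluate Fourier transforms. A software sketch of that reformulation (Bluestein's identity), using FFTs to carry out the convolution:

    ```python
    import numpy as np

    def czt_dft(x):
        # Bluestein: nk = (n^2 + k^2 - (k-n)^2)/2, so the DFT becomes a chirp
        # pre-multiply, a convolution with a chirp, and a chirp post-multiply.
        n = len(x)
        k = np.arange(n)
        w = np.exp(-1j * np.pi * k**2 / n)      # chirp e^{-i pi k^2 / n}
        a = x * w                                # pre-multiplied input
        m = 1 << (2 * n - 1).bit_length()        # FFT length >= 2n - 1
        b = np.zeros(m, dtype=complex)
        b[:n] = np.conj(w)                       # e^{+i pi k^2 / n}, k = 0..n-1
        b[m - n + 1:] = np.conj(w[n - 1:0:-1])   # wrapped negative lags
        conv = np.fft.ifft(np.fft.fft(a, m) * np.fft.fft(b))[:n]
        return w * conv

    x = np.random.rand(7) + 1j * np.random.rand(7)
    assert np.allclose(czt_dft(x), np.fft.fft(x))   # matches the standard FFT
    ```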

  7. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in the research areas of science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement of convolutional neural networks (CNN), a famous deep learning method, are still rarely investigated. Deep learning is a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task carried out by a human. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying the MNIST and CIFAR datasets were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (by up to 7.14 percent). PMID:27375738

  8. Accelerated unsteady flow line integral convolution.

    PubMed

    Liu, Zhanping; Moorhead, Robert J

    2005-01-01

    Unsteady flow line integral convolution (UFLIC) is a texture synthesis technique for visualizing unsteady flows with high temporal-spatial coherence. Unfortunately, UFLIC requires considerable time to generate each frame due to the huge amount of pathline integration that is computed for particle value scattering. This paper presents Accelerated UFLIC (AUFLIC) for near interactive (1 frame/second) visualization with 160,000 particles per frame. AUFLIC reuses pathlines in the value scattering process to reduce computationally expensive pathline integration. A flow-driven seeding strategy is employed to distribute seeds such that only a few of them need pathline integration while most seeds are placed along the pathlines advected at earlier times by other seeds upstream and, therefore, the known pathlines can be reused for fast value scattering. To maintain a dense scattering coverage to convey high temporal-spatial coherence while keeping the expense of pathline integration low, a dynamic seeding controller is designed to decide whether to advect, copy, or reuse a pathline. At a negligible memory cost, AUFLIC is 9 times faster than UFLIC with comparable image quality.

  9. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains of up to two orders of magnitude are obtained with respect to kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at https://github.com/aboucaud/pypher
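
    The core of such a kernel generator fits in a few lines. A toy Wiener-regularized version is sketched below, assuming centered, same-sized PSF images; the real implementation, with proper handling of pixel scales, apodization, and the regularisation parameter, is the authors' pypher package:

    ```python
    import numpy as np

    def matching_kernel(psf_source, psf_target, reg=1e-3):
        # Find K such that psf_source * K ~ psf_target. In Fourier space:
        # K = T conj(S) / (|S|^2 + reg), with `reg` the tunable regularisation.
        S = np.fft.fft2(np.fft.ifftshift(psf_source))
        T = np.fft.fft2(np.fft.ifftshift(psf_target))
        K = T * np.conj(S) / (np.abs(S) ** 2 + reg)
        return np.fft.fftshift(np.fft.ifft2(K).real)

    # Example with centered Gaussian PSFs (narrow source, broader target):
    y, x = np.mgrid[-32:32, -32:32]
    g = lambda s: np.exp(-(x**2 + y**2) / (2 * s**2)) / (2 * np.pi * s**2)
    kernel = matching_kernel(g(2.0), g(4.0))
    ```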

  10. Event Discrimination using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Menon, Hareesh; Hughes, Richard; Daling, Alec; Winer, Brian

    2017-01-01

    Convolutional Neural Networks (CNNs) are computational models that have been shown to be effective at classifying different types of images. We present a method to use CNNs to distinguish events involving the production of a top quark pair and a Higgs boson from events involving the production of a top quark pair and several quark and gluon jets. To do this, we generate and simulate data using MADGRAPH and DELPHES for a general purpose LHC detector at 13 TeV. We produce images using a particle flow algorithm by binning the particles geometrically based on their position in the detector and weighting the bins by the energy of each particle within each bin, and by defining channels based on particle types (charged track, neutral hadronic, neutral EM, lepton, heavy flavor). Our classification results are competitive with standard machine learning techniques. We have also looked into the classification of the substructure of the events, in a process known as scene labeling. In this context, we look for the presence of boosted objects (such as top quarks) with substructure encompassed within single jets. Preliminary results on substructure classification will be presented.
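
    The image-formation step described above can be sketched compactly: bin particles geometrically, weight each bin by energy, and keep one channel per particle type. Field names, coordinate ranges, and the channel list below are illustrative assumptions, not the authors' exact code:

    ```python
    import numpy as np

    CHANNELS = {"track": 0, "neutral_had": 1, "neutral_em": 2,
                "lepton": 3, "heavy_flavor": 4}

    def event_image(particles, bins=32, eta_rng=(-2.5, 2.5), phi_rng=(-np.pi, np.pi)):
        # particles: iterable of (channel_name, eta, phi, energy) tuples.
        img = np.zeros((len(CHANNELS), bins, bins))
        for ch, eta, phi, energy in particles:
            i = np.clip(int((eta - eta_rng[0]) / (eta_rng[1] - eta_rng[0]) * bins),
                        0, bins - 1)
            j = np.clip(int((phi - phi_rng[0]) / (phi_rng[1] - phi_rng[0]) * bins),
                        0, bins - 1)
            img[CHANNELS[ch], i, j] += energy   # energy-weighted occupancy
        return img

    demo = event_image([("track", 0.3, 1.2, 45.0), ("lepton", -1.1, -2.0, 30.0)])
    ```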

  11. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities.

    PubMed

    Carrasco, P; Jornet, N; Duch, M A; Panettieri, V; Weber, L; Eudaldo, T; Ginjaume, M; Ribas, M

    2007-08-01

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10 x 10, 5 x 5, and 2 x 2 cm²) were studied in two phantom configurations and a bone-equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS) error, was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2 x 2 cm² field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV, where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity by 3%-5%. The rebuild-up at the bone-water interface and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values.

  12. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities

    SciTech Connect

    Carrasco, P.; Jornet, N.; Duch, M. A.; Panettieri, V.; Weber, L.; Eudaldo, T.; Ginjaume, M.; Ribas, M.

    2007-08-15

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10x10, 5x5, and 2x2 cm{sup 2}) were studied in two phantom configurations and a bone equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS) error, was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2x2 cm{sup 2} field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV, where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity by 3%-5%. The rebuild-up at the bone-water interface and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values.

  13. Vehicle detection based on visual saliency and deep sparse convolution hierarchical model

    NASA Astrophysics Data System (ADS)

    Cai, Yingfeng; Wang, Hai; Chen, Xiaobo; Gao, Li; Chen, Long

    2016-07-01

    Traditional vehicle detection algorithms use traversal-search-based vehicle candidate generation and hand-crafted features for classifier training to verify vehicle candidates. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, outperforming existing state-of-the-art algorithms. More importantly, the deep sparse convolution network generates highly discriminative multi-scale features, which have broad application prospects for target recognition in the field of intelligent vehicles.

  14. Hardware efficient implementation of DFT using an improved first-order moments based cyclic convolution structure

    NASA Astrophysics Data System (ADS)

    Xiong, Jun; Liu, J. G.; Cao, Li

    2015-12-01

    This paper presents hardware-efficient designs for implementing the one-dimensional (1D) discrete Fourier transform (DFT). Once the DFT is formulated in cyclic convolution form, the improved first-order moments-based cyclic convolution structure can be used as the basic computing unit for the DFT computation; it contains only a control module, a barrel shifter, and (N-1)/2 accumulation units. After decomposing and reordering the twiddle factors, all that remains is shifting the input data sequence and accumulating the results under the control of the statistical results on the twiddle factors. The whole calculation process contains only shift operations and additions, with no need for multipliers or large memory. Compared with the previous first-order moments-based structure for the DFT, the proposed designs have the advantages of lower hardware consumption, lower power consumption, and the flexibility to achieve better performance in certain cases. A series of experiments has proven the high performance of the proposed designs in terms of the area-time product and power consumption. Similar efficient designs can be obtained for other computations, such as the DCT/IDCT, DST/IDST, digital filters, and correlation, by transforming them into first-order moments-based cyclic convolution form.

  15. Quantum jumps, superpositions, and the continuous evolution of quantum states

    NASA Astrophysics Data System (ADS)

    Dick, Rainer

    2017-02-01

    The apparent dichotomy between quantum jumps on the one hand, and continuous time evolution according to wave equations on the other hand, provided a challenge to Bohr's proposal of quantum jumps in atoms. Furthermore, Schrödinger's time-dependent equation also seemed to require a modification of the explanation for the origin of line spectra due to the apparent possibility of superpositions of energy eigenstates for different energy levels. Indeed, Schrödinger himself proposed a quantum beat mechanism for the generation of discrete line spectra from superpositions of eigenstates with different energies. However, these issues between old quantum theory and Schrödinger's wave mechanics were correctly resolved only after the development and full implementation of photon quantization. The second quantized scattering matrix formalism reconciles quantum jumps with continuous time evolution through the identification of quantum jumps with transitions between different sectors of Fock space. The continuous evolution of quantum states is then recognized as a sum over continually evolving jump amplitudes between different sectors in Fock space. In today's terminology, this suggests that linear combinations of scattering matrix elements are epistemic sums over ontic states. Insights from the resolution of the dichotomy between quantum jumps and continuous time evolution therefore hold important lessons for modern research both on interpretations of quantum mechanics and on the foundations of quantum computing. They demonstrate that discussions of interpretations of quantum theory necessarily need to take into account field quantization. They also demonstrate the limitations of the role of wave equations in quantum theory, and caution us that superpositions of quantum states for the formation of qubits may be more limited than usually expected.

  16. Labelled Unit Superposition Calculi for Instantiation-Based Reasoning

    NASA Astrophysics Data System (ADS)

    Korovin, Konstantin; Sticksel, Christoph

    The Inst-Gen-Eq method is an instantiation-based calculus which is complete for first-order clause logic modulo equality. Its distinctive feature is that it combines first-order reasoning with efficient ground satisfiability checking which is delegated in a modular way to any state-of-the-art ground SMT solver. The first-order reasoning modulo equality employs a superposition-style calculus which generates the instances needed by the ground solver to refine a model of a ground abstraction or to witness unsatisfiability.

  17. Scaling of macroscopic superpositions close to a quantum phase transition

    NASA Astrophysics Data System (ADS)

    Abad, Tahereh; Karimipour, Vahid

    2016-05-01

    It is well known that in a quantum phase transition (QPT), entanglement remains short ranged [Osterloh et al., Nature (London) 416, 608 (2002), 10.1038/416608a]. We ask if there is a quantum property encompassing the whole system which diverges near this point. Using the recently proposed measures of quantum macroscopicity, we show that near a quantum critical point, it is the effective size of the macroscopic superposition between the two symmetry-breaking states which grows to the scale of the system size, and its derivative with respect to the coupling shows both singular behavior and scaling properties.

  18. Concentration-temperature superposition of helix folding rates in gelatin.

    PubMed

    Gornall, J L; Terentjev, E M

    2007-07-13

    Using optical rotation as the primary technique, we have characterized the kinetics of helix renaturation in water solutions of gelatin. By covering a wide range of solution concentrations we identify a universal exponential dependence of folding rate on concentration and quench temperature. We demonstrate a new concentration-temperature superposition of data at all temperatures and concentrations, and build the corresponding master curve. The normalized rate constant is consistent with helix lengthening. Nucleation of the triple helix occurs rapidly and contributes less to the helical onset than previously thought.

  19. SU-E-T-277: Dose Calculation Comparisons Between Monaco, Pinnacle and Eclipse Treatment Planning Systems

    SciTech Connect

    Bosse, C; Kirby, N; Narayanasamy, G; Papanikolaou, N; Stathakis, S

    2015-06-15

    Purpose: The Monaco treatment planning system (TPS) version 5.0 uses a Monte-Carlo based dose calculation engine. The aim of this study is to verify and compare the Monaco dose calculations with both Pinnacle{sup 3} collapsed cone convolution superposition (CCCS) and Eclipse analytical anisotropic algorithm (AAA) calculations. Methods: For this study, previously treated SBRT lung, head and neck, and abdomen patients were chosen to compare dose calculations between Pinnacle, Monaco, and Eclipse. Plans were chosen from those that had been treated using the Elekta VersaHD or a NovalisTX linac. The plans included 3D conventional and IMRT beams using 6MV and 6MV flattening-filter-free (FFF) photon beams. The original plans calculated with CCCS or AAA, along with the ones recalculated using MC, were exported from the three TPS into Velocity software for inter-comparison. Results: To compare the dose calculations, mean lung dose (MLD), lung V5 and V20 values, and PTV heterogeneity indexes (HI) and conformity indexes (CI) were calculated and recorded from the dose volume histograms (DVH). For each patient, the CI values were identical, but there were differences in all other parameters. The HI was computed 5% and 4% higher for the AAA and CCCS plans, respectively, compared to the MC ones. The DVH graphs showed large differences between CCCS, AAA, and Monaco for 3D FFF, VMAT, and IMRT plans. Better DVH agreement was observed for 3D conventional plans. Conclusion: Better agreement was observed between CCCS and MC calculations than between AAA and MC calculations. The differences were more pronounced with decreasing field size and in the presence of inhomogeneities.

  20. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    PubMed

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors or extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting the depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.

  1. Comparison of selected dose calculation algorithms in radiotherapy treatment planning for tissues with inhomogeneities

    NASA Astrophysics Data System (ADS)

    Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.

    2016-03-01

    Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms that are currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC), and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP, and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.

  2. Patient-specific dosimetry using quantitative SPECT imaging and three-dimensional discrete fourier transform convolution

    SciTech Connect

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K.

    1997-02-01

    The objective of this study was to develop a three-dimensional discrete Fourier transform (3D-DFT) convolution method to perform the dosimetry for {sup 131}I-labeled antibodies in soft tissues. Mathematical and physical phantoms were used to compare 3D-DFT with Monte Carlo transport (MCT) calculations based on the EGS4 code. The mathematical and physical phantoms consisted of a sphere and cylinder, respectively, containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the circular harmonic transform (CHT) algorithm. The radial dose profiles obtained from MCT calculations and the 3D-DFT convolution method for the mathematical phantom were in close agreement. The root mean square error (RMSE) for the two methods was <0.1%, with a maximum difference <21%. Results obtained for the physical phantom gave an RMSE <0.1% and a maximum difference of <13%; isodose contours were in good agreement. SPECT data for two patients who had undergone {sup 131}I radioimmunotherapy (RIT) were used to compare absorbed-dose rates and isodose rate contours with the two methods of calculation. This yielded an RMSE <0.02% and a maximum difference of <13%. Our results showed that the 3D-DFT convolution method compared well with MCT calculations. The 3D-DFT approach is computationally much more efficient and, hence, the method of choice. This method is patient-specific and applicable to the dosimetry of soft-tissue tumors and normal organs. It can be implemented on personal computers. 22 refs., 6 figs., 2 tabs.
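
    The computational core of the 3D-DFT approach is a Fourier-domain convolution of the cumulated-activity map with a dose-point kernel. A minimal sketch, assuming matched grids and a centered kernel (circular convolution; a real implementation would zero-pad to suppress wrap-around):

    ```python
    import numpy as np

    def dose_rate_3d(activity, kernel):
        # 3D convolution via the DFT: dose rate = activity (x) dose-point kernel.
        # ifftshift moves the kernel's center voxel to the array origin.
        return np.real(np.fft.ifftn(np.fft.fftn(activity) *
                                    np.fft.fftn(np.fft.ifftshift(kernel))))

    # Toy example: a point source in a 32^3 volume with an isotropic kernel.
    activity = np.zeros((32, 32, 32))
    activity[16, 16, 16] = 1.0
    z, y, x = np.mgrid[-16:16, -16:16, -16:16]
    kernel = np.exp(-np.sqrt(x**2 + y**2 + z**2))   # stand-in dose-point kernel
    dose = dose_rate_3d(activity, kernel)
    ```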

  3. SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data

    SciTech Connect

    Chen, M; Jiang, S; Lu, W

    2015-06-15

    Purpose: To propose a hybrid method that combines the advantages of the model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone-convolution/superposition (CCCS) or the Monte-Carlo method, models dose deposition in the patient body accurately; however, due to lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam property accurately but lacks the capability of dose deposition modeling in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator, here a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D-model using CCCS; 2. calculate D-ΔDRT using ΔDRT; 3. combine them as D = D-model + D-ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and an IMRT plan. The results were compared to dose calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume for the phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
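
    A toy 1D illustration of the three steps, with fabricated stand-in curves (the real method commissions a ΔDRT ray tracer on tabulated measurement-minus-CCCS differences and evaluates it in the patient geometry):

    ```python
    import numpy as np

    depth = np.linspace(0.0, 30.0, 61)                      # cm
    d_model = np.exp(-0.05 * depth)                         # stand-in CCCS depth dose
    measured = d_model * (1.0 + 0.02 * np.sin(depth / 5))   # stand-in water measurement

    # "Commission" the correction on the residual, then apply it along a ray:
    residual = measured - d_model                 # tabulated commissioning data
    d_delta = np.interp(depth, depth, residual)   # stand-in for the Delta-DRT lookup
    d_hybrid = d_model + d_delta                  # step 3: D = D_model + D_DeltaDRT
    ```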

  4. Experiments testing macroscopic quantum superpositions must be slow

    NASA Astrophysics Data System (ADS)

    Mari, Andrea; de Palma, Giacomo; Giovannetti, Vittorio

    2016-03-01

    We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires accelerating the charge, which can then entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides indirect evidence for the existence of gravitational vacuum fluctuations and for the possibility of entangling a particle with quantum gravitational radiation.

  5. Modeling scattering from azimuthally symmetric bathymetric features using wavefield superposition.

    PubMed

    Fawcett, John A

    2007-12-01

    In this paper, an approach for modeling the scattering from azimuthally symmetric bathymetric features is described. These features are useful models for small mounds and indentations on the seafloor at high frequencies, and for seamounts, shoals, and basins at low frequencies. A bathymetric feature can be considered as a compact closed region with the same sound speed and density as one of the surrounding media. Using this approach, a number of numerical methods appropriate for a partially buried target or facet problem can be applied. This paper considers the use of wavefield superposition and, because of the azimuthal symmetry, the three-dimensional solution to the scattering problem can be expressed as a Fourier sum of solutions to a set of two-dimensional scattering problems. In the case where the two surrounding half-spaces have only a density contrast, a semianalytic coupled-mode solution is derived. This provides a benchmark solution to scattering from a class of penetrable hemispherical bosses or indentations. The details and problems of the numerical implementation of the wavefield superposition method are described. Example computations using the method for a simple scattering feature on a seabed are presented for a wide band of frequencies.

  6. Free Nano-Object Ramsey Interferometry for Large Quantum Superpositions

    NASA Astrophysics Data System (ADS)

    Wan, C.; Scala, M.; Morley, G. W.; Rahman, ATM. A.; Ulbricht, H.; Bateman, J.; Barker, P. F.; Bose, S.; Kim, M. S.

    2016-09-01

    We propose an interferometric scheme based on an untrapped nano-object subjected to gravity. The motion of the center of mass (c.m.) of the free object is coupled to its internal spin system magnetically, and a free flight scheme is developed based on coherent spin control. The wave packet of the test object, under a spin-dependent force, may then be delocalized to a macroscopic scale. A gravity induced dynamical phase (accrued solely on the spin state, and measured through a Ramsey scheme) is used to reveal the above spatially delocalized superposition of the spin-nano-object composite system that arises during our scheme. We find a remarkable immunity to the motional noise in the c.m. (initially in a thermal state with moderate cooling), and also a dynamical decoupling nature of the scheme itself. Together they secure a high visibility of the resulting Ramsey fringes. The mass independence of our scheme makes it viable for a nano-object selected from an ensemble with a high mass variability. Given these advantages, a quantum superposition with a 100 nm spatial separation for a massive object of 10^9 amu is achievable experimentally, providing a route to test postulated modifications of quantum theory such as continuous spontaneous localization.

  7. Runs in superpositions of renewal processes with applications to discrimination

    NASA Astrophysics Data System (ADS)

    Alsmeyer, Gerold; Irle, Albrecht

    2006-02-01

    Wald and Wolfowitz [Ann. Math. Statist. 11 (1940) 147-162] introduced the run test for testing whether two samples of i.i.d. random variables follow the same distribution. Here a run means a consecutive subsequence of maximal length from only one of the two samples. In this paper we contribute to the problem of runs and resulting test procedures for the superposition of independent renewal processes, which may be interpreted as arrival processes of customers from two different input channels at the same service station. To be more precise, let (S_n)_{n≥1} and (T_n)_{n≥1} be the arrival processes for channel 1 and channel 2, respectively, and (W_n)_{n≥1} their superposition with associated counting process. Let further R_n be the number of runs in W_1, ..., W_n and R_t the number of runs observed up to time t. We study the asymptotic behavior of R_n and R_t, first for the case where (S_n)_{n≥1} and (T_n)_{n≥1} have exponentially distributed increments with parameters λ1 and λ2, and then for the more difficult situation when these increments have an absolutely continuous distribution. These results are used to design asymptotic level-α tests for testing λ1 = λ2 against λ1 ≠ λ2 in the first case, and for testing for equal scale parameters in the second.
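
    A quick simulation conveys the object of study: superpose two Poisson arrival streams and count runs, i.e. maximal blocks of consecutive arrivals from the same channel. This is illustrative only; the paper treats general renewal increments and derives the asymptotics analytically:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def run_count(lam1, lam2, n=2000):
        # Merge n exponential-increment arrivals from each channel,
        # then count runs as 1 + number of channel switches in the merged order.
        s = np.cumsum(rng.exponential(1 / lam1, n))   # channel-1 arrival times
        t = np.cumsum(rng.exponential(1 / lam2, n))   # channel-2 arrival times
        labels = np.concatenate([np.zeros(n), np.ones(n)])
        merged = labels[np.argsort(np.concatenate([s, t]))]
        return 1 + np.count_nonzero(np.diff(merged))

    # Equal rates give many runs; unequal rates give long single-channel blocks,
    # hence fewer runs -- the basis of the test statistic.
    print(run_count(1.0, 1.0), run_count(1.0, 4.0))
    ```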

  8. Experiments testing macroscopic quantum superpositions must be slow

    PubMed Central

    Mari, Andrea; De Palma, Giacomo; Giovannetti, Vittorio

    2016-01-01

    We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires accelerating the charge, which can then entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides indirect evidence for the existence of gravitational vacuum fluctuations and for the possibility of entangling a particle with quantum gravitational radiation. PMID:26959656

  9. Superposition states for quantum nanoelectronic circuits and their nonclassical properties

    NASA Astrophysics Data System (ADS)

    Choi, Jeong Ryeol

    2016-09-01

    Quantum properties of a superposition state for a series RLC nanoelectronic circuit are investigated. Two displaced number states of the same amplitude but with opposite phases are considered as components of the superposition state. We have assumed that the capacitance of the system varies with time and a time-dependent power source is exerted on the system. The effects of displacement and a sinusoidal power source on the characteristics of the state are addressed in detail. Depending on the magnitude of the sinusoidal power source, the wave packets that propagate in charge (q) space are more or less distorted. Provided that the displacement is sufficiently high, distinct interference structures appear in the plot of the time behavior of the probability density whenever the two components of the wave packet meet. This is strong evidence for the advent of nonclassical properties in the system, which cannot be interpreted by classical theory. Nonclassicality of a quantum system is not only of academic interest in itself; its consequences can also serve as useful resources for quantum information and computation.

  10. Evolution of superpositions of quantum states through a level crossing

    SciTech Connect

    Torosov, B. T.; Vitanov, N. V.

    2011-12-15

    The Landau-Zener-Stueckelberg-Majorana (LZSM) model is widely used for estimating transition probabilities in the presence of crossing energy levels in quantum physics. This model, however, makes the unphysical assumption of an infinitely long constant interaction, which introduces a divergent phase in the propagator. This divergence remains hidden when estimating output probabilities for a single input state insofar as the divergent phase cancels out. In this paper we show that, because of this divergent phase, the LZSM model is inadequate to describe the evolution of pure or mixed superposition states across a level crossing. The LZSM model can be used only if the system is initially in a single state or in a completely mixed superposition state. To this end, we show that the more realistic Demkov-Kunike model, which assumes a hyperbolic-tangent level crossing and a hyperbolic-secant interaction envelope, is free of divergences and is a much more adequate tool for describing the evolution through a level crossing for an arbitrary input state. For multiple crossing energies which are reducible to one or more effective two-state systems (e.g., by the Majorana and Morris-Shore decompositions), similar conclusions apply: the LZSM model does not produce definite values of the populations and the coherences, and one should use the Demkov-Kunike model instead.

  11. Experiments testing macroscopic quantum superpositions must be slow.

    PubMed

    Mari, Andrea; De Palma, Giacomo; Giovannetti, Vittorio

    2016-03-09

    We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires accelerating the charge, which can then entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides indirect evidence for the existence of gravitational vacuum fluctuations and for the possibility of entangling a particle with quantum gravitational radiation.

  12. Free Nano-Object Ramsey Interferometry for Large Quantum Superpositions.

    PubMed

    Wan, C; Scala, M; Morley, G W; Rahman, Atm A; Ulbricht, H; Bateman, J; Barker, P F; Bose, S; Kim, M S

    2016-09-30

    We propose an interferometric scheme based on an untrapped nano-object subjected to gravity. The motion of the center of mass (c.m.) of the free object is coupled to its internal spin system magnetically, and a free flight scheme is developed based on coherent spin control. The wave packet of the test object, under a spin-dependent force, may then be delocalized to a macroscopic scale. A gravity induced dynamical phase (accrued solely on the spin state, and measured through a Ramsey scheme) is used to reveal the above spatially delocalized superposition of the spin-nano-object composite system that arises during our scheme. We find a remarkable immunity to the motional noise in the c.m. (initially in a thermal state with moderate cooling), and also a dynamical decoupling nature of the scheme itself. Together they secure a high visibility of the resulting Ramsey fringes. The mass independence of our scheme makes it viable for a nano-object selected from an ensemble with a high mass variability. Given these advantages, a quantum superposition with a 100 nm spatial separation for a massive object of 10^9 amu is achievable experimentally, providing a route to test postulated modifications of quantum theory such as continuous spontaneous localization.

  13. Colonoscopic polyp detection using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Park, Sun Young; Sargent, Dusty

    2016-03-01

    Computer aided diagnosis (CAD) systems for medical image analysis rely on accurate and efficient feature extraction methods. Regardless of which type of classifier is used, the results will be limited if the input features are not diagnostically relevant and do not properly discriminate between the different classes of images. Thus, a large amount of research has been dedicated to creating feature sets that capture the salient features that physicians are able to observe in the images. Successful feature extraction reduces the semantic gap between the physician's interpretation and the computer representation of images, and helps to reduce the variability in diagnosis between physicians. Due to the complexity of many medical image classification tasks, feature extraction for each problem often requires domain-specific knowledge and a carefully constructed feature set for the specific type of images being classified. In this paper, we describe a method for automatic diagnostic feature extraction from colonoscopy images that may have general application and require a lower level of domain-specific knowledge. The work in this paper expands on our previous CAD algorithm for detecting polyps in colonoscopy video. In that work, we applied an eigenimage model to extract features representing polyps, normal tissue, diverticula, etc. from colonoscopy videos taken from various viewing angles and imaging conditions. Classification was performed using a conditional random field (CRF) model that accounted for the spatial and temporal adjacency relationships present in colonoscopy video. In this paper, we replace the eigenimage feature descriptor with features extracted from a convolutional neural network (CNN) trained to recognize the same image types in colonoscopy video. The CNN-derived features show greater invariance to viewing angles and image quality factors when compared to the eigenimage model. The CNN features are used as input to the CRF classifier as before.

  14. Limitations to the validity of single wake superposition in wind farm yield assessment

    NASA Astrophysics Data System (ADS)

    Gunn, K.; Stock-Williams, C.; Burke, M.; Willden, R.; Vogel, C.; Hunter, W.; Stallard, T.; Robinson, N.; Schmidt, S. R.

    2016-09-01

    Commercially available wind yield assessment models rely on superposition of wakes calculated for isolated single turbines. These methods of wake simulation fail to account for emergent flow physics that may affect the behaviour of multiple turbines and their wakes and therefore wind farm yield predictions. In this paper wake-wake interaction is modelled computationally (CFD) and physically (in a hydraulic flume) to investigate physical causes of discrepancies between analytical modelling and simulations or measurements. Three effects, currently neglected in commercial models, are identified as being of importance: 1) when turbines are directly aligned, the combined wake is shortened relative to the single turbine wake; 2) when wakes are adjacent, each will be lengthened due to reduced mixing; and 3) the pressure field of downstream turbines can move and modify wakes flowing close to them.
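
    For context, a minimal sketch of the single-wake superposition that such commercial yield models typically perform: a Jensen (top-hat) wake per turbine and a root-sum-square combination of the deficits. This is the baseline approximation the paper critiques, not the authors' CFD model, and all parameter values are illustrative.

        import numpy as np

        def jensen_deficit(x, ct=0.8, d=100.0, k=0.05):
            """Fractional velocity deficit on the wake centreline a distance
            x (m) downstream of a turbine (Jensen/Park top-hat model)."""
            if x <= 0:
                return 0.0
            a = 0.5 * (1.0 - np.sqrt(1.0 - ct))           # axial induction factor
            return 2.0 * a * (d / (d + 2.0 * k * x)) ** 2

        def combined_speed(u_inf, turbine_xs, x_probe):
            """Root-sum-square superposition of isolated single-turbine wakes,
            which neglects the wake-wake interactions identified above."""
            deficits = [jensen_deficit(x_probe - xt) for xt in turbine_xs]
            return u_inf * (1.0 - np.sqrt(sum(dd**2 for dd in deficits)))

        print(combined_speed(10.0, [0.0, 500.0], 1000.0))  # two aligned turbines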

  15. Color changes in wood during heating: kinetic analysis by applying a time-temperature superposition method

    NASA Astrophysics Data System (ADS)

    Matsuo, Miyuki; Yokoyama, Misao; Umemura, Kenji; Gril, Joseph; Yano, Ken'ichiro; Kawai, Shuichi

    2010-04-01

    This paper deals with the kinetics of the color properties of hinoki (Chamaecyparis obtusa Endl.) wood. Specimens cut from the wood were heated at 90-180°C as an accelerated aging treatment. The specimens, completely dried and heated in the presence of oxygen, allowed us to evaluate the effects of thermal oxidation on wood color change. Color properties measured by a spectrophotometer showed similar behavior irrespective of the treatment temperature, differing only in time scale. Kinetic analysis using the time-temperature superposition principle, which uses the whole data set, was successfully applied to the color changes. The calculated values of the apparent activation energy in terms of L*, a*, b*, and ΔE*ab were 117, 95, 114, and 113 kJ/mol, respectively, which are similar to literature values obtained for other properties such as the physical and mechanical properties of wood.
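
    A minimal sketch of the time-temperature superposition step, assuming simple Arrhenius shift factors (the paper's kinetic analysis is more detailed): measurements at temperature T are mapped onto the reference-temperature time axis by multiplying time by the shift factor a_T.

        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        def arrhenius_shift(T, T_ref, Ea):
            """Shift factor a_T: one hour at temperature T is kinetically
            equivalent to a_T hours at the reference temperature T_ref."""
            return np.exp(-(Ea / R) * (1.0 / T - 1.0 / T_ref))

        # e.g. map 1 h at 180 °C to the equivalent time at 90 °C, using the
        # apparent activation energy reported above for L* (117 kJ/mol)
        a_T = arrhenius_shift(T=453.15, T_ref=363.15, Ea=117e3)
        print(f"1 h at 180 °C ≈ {a_T:.0f} h at 90 °C")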

  16. SU-E-T-371: Evaluating the Convolution Algorithm of a Commercially Available Radiosurgery Irradiator Using a Novel Phantom

    SciTech Connect

    Cates, J; Drzymala, R

    2015-06-15

    Purpose: The purpose of this study was to develop and use a novel phantom to evaluate the accuracy and usefulness of the Leksell Gamma Plan convolution-based dose calculation algorithm compared with the current TMR10 algorithm. Methods: A novel phantom was designed to fit the Leksell Gamma Knife G Frame which could accommodate various materials in the form of one inch diameter, cylindrical plugs. The plugs were split axially to allow EBT2 film placement. Film measurements were made during two experiments. The first utilized plans generated on a homogeneous acrylic phantom setup using the TMR10 algorithm, with various materials inserted into the phantom during film irradiation to assess the effect on delivered dose due to unplanned heterogeneities upstream in the beam path. The second experiment utilized plans made on CT scans of different heterogeneous setups, with one plan using the TMR10 dose calculation algorithm and the second using the convolution-based algorithm. Materials used to introduce heterogeneities included air, LDPE, polystyrene, Delrin, Teflon, and aluminum. Results: The data shows that, as would be expected, having heterogeneities in the beam path does induce dose delivery error when using the TMR10 algorithm, with the largest errors being due to the heterogeneities with electron densities most different from that of water, i.e. air, Teflon, and aluminum. Additionally, the convolution algorithm did account for the heterogeneous material and provided a more accurate predicted dose, in extreme cases up to a 7-12% improvement over the TMR10 algorithm. The convolution algorithm expected dose was accurate to within 3% in all cases. Conclusion: This study proves that the convolution algorithm is an improvement over the TMR10 algorithm when heterogeneities are present. More work is needed to determine what the heterogeneity size/volume limits are where this improvement exists, and in what clinical and/or research cases this would be relevant.

  17. ASIC-based architecture for the real-time computation of 2D convolution with large kernel size

    NASA Astrophysics Data System (ADS)

    Shao, Rui; Zhong, Sheng; Yan, Luxin

    2015-12-01

    Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-large kernels. To improve the efficiency of on-chip storage resources and to reduce the off-chip bandwidth requirement, a data-reuse cache structure is proposed: multi-block SPRAM caches image stripes, and on-chip ping-pong buffering takes full advantage of data reuse in the convolution calculation. A new ASIC data-scheduling scheme and overall architecture are designed around this cache. Experimental results show that the architecture achieves real-time convolution with kernels up to 40 × 32, improves the utilization of on-chip memory bandwidth and on-chip memory resources, maximizes data throughput, and reduces the demand for off-chip memory bandwidth.
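
    For reference, a direct 2-D convolution in plain Python/numpy, illustrating the kh × kw multiply-accumulates needed per output pixel; this cost is what motivates the dedicated hardware, and the sketch below is in no way the ASIC design itself.

        import numpy as np

        def conv2d_direct(img, kernel):
            """Direct (valid-region) 2-D convolution; every output pixel
            costs kh*kw multiply-accumulate operations."""
            kh, kw = kernel.shape
            H, W = img.shape
            out = np.zeros((H - kh + 1, W - kw + 1))
            flipped = kernel[::-1, ::-1]        # true convolution flips the kernel
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    out[i, j] = np.sum(img[i:i + kh, j:j + kw] * flipped)
            return out

        img = np.random.rand(256, 256)
        k = np.random.rand(40, 32)              # the kernel size reported above
        print(conv2d_direct(img, k).shape)      # (217, 225)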

  18. Enthalpy difference between conformations of normal alkanes: effects of basis set and chain length on intramolecular basis set superposition error

    NASA Astrophysics Data System (ADS)

    Balabin, Roman M.

    2011-03-01

    The quantum chemistry of conformation equilibrium is a field where great accuracy (better than 100 cal mol-1) is needed because the energy difference between molecular conformers rarely exceeds 1000-3000 cal mol-1. The conformation equilibrium of straight-chain (normal) alkanes is of particular interest and importance for modern chemistry. In this paper, an extra error source for high-quality ab initio (first principles) and DFT calculations of the conformation equilibrium of normal alkanes, namely the intramolecular basis set superposition error (BSSE), is discussed. In contrast to out-of-plane vibrations in benzene molecules, diffuse functions on carbon and hydrogen atoms were found to greatly reduce the relative BSSE of n-alkanes. The corrections due to the intramolecular BSSE were found to be almost identical for the MP2, MP4, and CCSD(T) levels of theory. Their cancelation is expected when CCSD(T)/CBS (CBS, complete basis set) energies are evaluated by addition schemes. For larger normal alkanes (N > 12), the magnitude of the BSSE correction was found to be up to three times larger than the relative stability of the conformer; in this case, the basis set superposition error led to a two orders of magnitude difference in conformer abundance. No error cancelation due to the basis set superposition was found. A comparison with amino acid, peptide, and protein data was provided.

  19. The origin of non-classical effects in a one-dimensional superposition of coherent states

    NASA Technical Reports Server (NTRS)

    Buzek, V.; Knight, P. L.; Barranco, A. Vidiella

    1992-01-01

    We investigate the nature of the quantum fluctuations in a light field created by the superposition of coherent fields. We give a physical explanation (in terms of Wigner functions and phase-space interference) why the 1-D superposition of coherent states in the direction of the x-quadrature leads to the squeezing of fluctuations in the y-direction, and show that such a superposition can generate the squeezed vacuum and squeezed coherent states.
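
    In standard notation (a sketch using common conventions, not necessarily the authors' exact ones), the state in question is the even coherent state, whose fluctuations in the quadrature orthogonal to the superposition direction can drop below the vacuum level:

        % Superposition of two coherent states displaced along the x-quadrature:
        |\psi\rangle = N \left( |\alpha\rangle + |{-\alpha}\rangle \right),
        \qquad \alpha \in \mathbb{R},
        \qquad N^{-2} = 2\left(1 + e^{-2|\alpha|^{2}}\right),
        % with quadratures x = (a + a^\dagger)/2 and y = (a - a^\dagger)/2i.
        % For suitable (small) real \alpha the y-quadrature is squeezed:
        \langle (\Delta \hat{y})^{2} \rangle < \tfrac{1}{4}.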

  20. Entanglement and Decoherence in Two-Dimensional Coherent State Superpositions

    NASA Astrophysics Data System (ADS)

    Maleki, Y.

    2017-03-01

    A detailed investigation of entanglement in the generalized two-dimensional nonorthogonal states, which are expressed in the framework of superposed coherent states, is presented. In addition to quantifying the entanglement of the generalized two-dimensional coherent-state superpositions, necessary and sufficient conditions for maximality of entanglement of these states are found. We show that a large class of maximally entangled coherent states can be constructed, and hence some new maximally entangled coherent states are obtained explicitly. The investigation is extended to mixed system states, and the entanglement properties of such mixed states are investigated. It is shown that in some cases maximally entangled mixed states can be detected. Furthermore, the effect of decoherence, due to both cavity losses and the noisy channel process, on such entangled states is studied and its features are discussed.

  1. Sensing Super-Position: Human Sensing Beyond the Visual Spectrum

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2007-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing natural human sensing. The current implementation of the device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system. The
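
    A minimal sketch of one common image-to-sound mapping of the kind described (image rows drive frequencies, columns are scanned in time, brightness sets amplitude); the function and every parameter below are illustrative assumptions, not the system's actual mapping.

        import numpy as np

        def image_to_audio(img, duration=1.0, fs=22050, fmin=200.0, fmax=8000.0):
            """Scan the image left to right; each row drives a sine whose
            frequency encodes vertical position and whose amplitude encodes
            brightness. img: 2-D array in [0, 1], row 0 at the top."""
            H, W = img.shape
            t_col = np.arange(int(duration * fs / W)) / fs
            freqs = np.logspace(np.log10(fmax), np.log10(fmin), H)  # top row = high pitch
            cols = []
            for col in range(W):
                tones = img[:, col, None] * np.sin(2 * np.pi * freqs[:, None] * t_col)
                cols.append(tones.sum(axis=0))
            audio = np.concatenate(cols)
            return audio / (np.abs(audio).max() + 1e-12)   # normalize to [-1, 1]

        signal = image_to_audio(np.random.rand(32, 64))    # ready for an audio sink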

  2. Predicting jet radius in electrospinning by superpositioning exponential functions

    NASA Astrophysics Data System (ADS)

    Widartiningsih, P. M.; Iskandar, F.; Munir, M. M.; Viridi, S.

    2016-08-01

    This paper presents an analytical study of the correlation between viscosity and fiber diameter in electrospinning. Control over fiber diameter in the electrospinning process is important since it determines the performance of the resulting nanofiber. Theoretically, fiber diameter is determined by surface tension, solution concentration, flow rate, and electric current, but experimentally viscosity has been shown to have a significant influence on fiber diameter. The jet radius equation in the electrospinning process is divided into three regions: near the nozzle, far from the nozzle, and at the jet terminal, with no connection between these separate equations. Superposition of an exponential series model combines the equations into one, so that all of the working parameters of electrospinning contribute to the fiber diameter, as in the sketch below. This method yields a linear relation between solution viscosity and jet radius. However, this method works only for low viscosity.
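
    A minimal sketch of fitting such a superposition of exponentials to jet-radius data, assuming scipy; the functional form, parameter names, and synthetic data are illustrative assumptions, not the paper's model.

        import numpy as np
        from scipy.optimize import curve_fit

        def jet_radius(z, a1, l1, a2, l2, r_inf):
            """Superposed exponentials bridging the near-nozzle, far-from-nozzle,
            and terminal regimes of the jet radius."""
            return a1 * np.exp(-z / l1) + a2 * np.exp(-z / l2) + r_inf

        z = np.linspace(0.0, 0.2, 50)                           # axial distance (m)
        r_true = jet_radius(z, 80e-6, 0.01, 15e-6, 0.05, 2e-6)  # synthetic "data"
        r_obs = r_true + np.random.normal(0.0, 2e-7, z.size)
        popt, _ = curve_fit(jet_radius, z, r_obs,
                            p0=[1e-4, 0.01, 1e-5, 0.05, 1e-6])
        print(popt)   # recovered amplitudes, decay lengths, terminal radius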

  3. Inequalities and consequences of new convolutions for the fractional Fourier transform with Hermite weights

    NASA Astrophysics Data System (ADS)

    Anh, P. K.; Castro, L. P.; Thao, P. T.; Tuan, N. M.

    2017-01-01

    This paper presents new convolutions for the fractional Fourier transform which are somehow associated with the Hermite functions. Consequent inequalities and properties are derived for these convolutions, among which we emphasize two new types of Young's convolution inequalities. The results guarantee a general framework where the present convolutions are well-defined, allowing larger possibilities than the known ones for other convolutions. Furthermore, we exemplify the use of our convolutions by providing explicit solutions of some classes of integral equations which appear in engineering problems.

  4. Macroscopicity of quantum superpositions on a one-parameter unitary path in Hilbert space

    NASA Astrophysics Data System (ADS)

    Volkoff, T. J.; Whaley, K. B.

    2014-12-01

    We analyze quantum states formed as superpositions of an initial pure product state and its image under local unitary evolution, using two measurement-based measures of superposition size: one based on the optimal quantum binary distinguishability of the branches of the superposition and another based on the ratio of the maximal quantum Fisher information of the superposition to that of its branches, i.e., the relative metrological usefulness of the superposition. A general formula for the effective sizes of these states according to the branch-distinguishability measure is obtained and applied to superposition states of N quantum harmonic oscillators composed of Gaussian branches. Considering optimal distinguishability of pure states on a time-evolution path leads naturally to a notion of distinguishability time that generalizes the well-known orthogonalization times of Mandelstam and Tamm and Margolus and Levitin. We further show that the distinguishability time provides a compact operational expression for the superposition size measure based on the relative quantum Fisher information. By restricting the maximization procedure in the definition of this measure to an appropriate algebra of observables, we show that the superposition size of, e.g., NOON states and hierarchical cat states, can scale linearly with the number of elementary particles comprising the superposition state, implying precision scaling inversely with the total number of photons when these states are employed as probes in quantum parameter estimation of a 1-local Hamiltonian in this algebra.

  5. Robustness of superposition states evolving under the influence of a thermal reservoir

    SciTech Connect

    Sales, J. S.; Almeida, N. G. de

    2011-06-15

    We study the evolution of superposition states under the influence of a reservoir at zero and finite temperatures in cavity quantum electrodynamics aiming to know how their purity is lost over time. The superpositions studied here are composed of coherent states, orthogonal coherent states, squeezed coherent states, and orthogonal squeezed coherent states, which we introduce to generalize the orthogonal coherent states. For comparison, we also show how the robustness of the superpositions studied here differs from that of a qubit given by a superposition of zero- and one-photon states.

  6. Output-sensitive 3D line integral convolution.

    PubMed

    Falk, Martin; Weiskopf, Daniel

    2008-01-01

    We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance
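
    For orientation, a minimal 2-D LIC sketch showing the underlying convolution in its simplest form: average a noise texture along short streamlines traced through each pixel. The paper's method is 3-D, view-dependent, and GPU-accelerated; none of those aspects are reflected in this sketch.

        import numpy as np

        def lic_2d(vx, vy, noise, L=10, h=0.5):
            """Basic line integral convolution with a box filter: accumulate
            the noise texture along a streamline through each pixel."""
            H, W = noise.shape
            out = np.zeros_like(noise)
            for i in range(H):
                for j in range(W):
                    acc, n = 0.0, 0
                    for sign in (1.0, -1.0):            # integrate both directions
                        x, y = float(j), float(i)
                        for _ in range(L):
                            ii, jj = int(round(y)) % H, int(round(x)) % W
                            acc += noise[ii, jj]
                            n += 1
                            u, v = vx[ii, jj], vy[ii, jj]
                            norm = np.hypot(u, v) + 1e-12
                            x += sign * h * u / norm    # Euler step along the flow
                            y += sign * h * v / norm
                    out[i, j] = acc / n
            return out

        H = W = 64
        yy, xx = np.mgrid[0:H, 0:W]
        vx, vy = -(yy - H / 2.0), xx - W / 2.0          # a simple vortex field
        img = lic_2d(vx, vy, np.random.rand(H, W))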

  7. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  8. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
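
    A minimal hard-decision Viterbi decoder for a toy rate-1/2, constraint-length-3 code, illustrating the state-count point made above: this code needs 2^(K-1) = 4 trellis states regardless of message length. The generators below are a common textbook choice, an assumption; this is not the Wyner-Ash code of the example.

        import itertools

        G = (0b111, 0b101)   # generator taps of a rate-1/2, K=3 convolutional code

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state                       # [newest, s1, s0]
                out += [bin(reg & g).count("1") % 2 for g in G]
                state = reg >> 1
            return out

        def viterbi(received):
            """Hard-decision Viterbi over the 4 trellis states."""
            INF = float("inf")
            metric = [0.0, INF, INF, INF]                    # start in state 0
            paths = [[], [], [], []]
            for k in range(0, len(received), 2):
                r = received[k:k + 2]
                new_metric, new_paths = [INF] * 4, [None] * 4
                for s, b in itertools.product(range(4), (0, 1)):
                    if metric[s] == INF:
                        continue
                    reg = (b << 2) | s
                    out = [bin(reg & g).count("1") % 2 for g in G]
                    ns = reg >> 1
                    m = metric[s] + sum(o != x for o, x in zip(out, r))
                    if m < new_metric[ns]:                   # keep the survivor
                        new_metric[ns], new_paths[ns] = m, paths[s] + [b]
                metric, paths = new_metric, new_paths
            return paths[metric.index(min(metric))]

        msg = [1, 0, 1, 1, 0, 0]        # two tail zeros flush the encoder
        rx = encode(msg)
        rx[3] ^= 1                      # inject a single channel error
        assert viterbi(rx) == msg       # the error is corrected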

  9. A Review on the Use of Grid-Based Boltzmann Equation Solvers for Dose Calculation in External Photon Beam Treatment Planning

    PubMed Central

    Kan, Monica W. K.; Yu, Peter K. N.; Leung, Lucullus H. T.

    2013-01-01

    Deterministic linear Boltzmann transport equation (D-LBTE) solvers have recently been developed, and one of the latest available software codes, Acuros XB, has been implemented in a commercial treatment planning system for radiotherapy photon beam dose calculation. One of the major limitations of most commercially available model-based algorithms for photon dose calculation is the ability to account for the effect of electron transport. This induces some errors in patient dose calculations, especially near heterogeneous interfaces between low and high density media such as tissue/lung interfaces. D-LBTE solvers have a high potential of producing accurate dose distributions in and near heterogeneous media in the human body. Extensive previous investigations have shown that D-LBTE solvers are able to produce dose calculation accuracy comparable to that of Monte Carlo methods at a speed good enough for clinical use. The current paper reviews the dosimetric evaluations of D-LBTE solvers for external beam photon radiotherapy. This content summarizes and discusses dosimetric validations for D-LBTE solvers in both homogeneous and heterogeneous media under different circumstances and also the clinical impact on various diseases due to the conversion of dose calculation from a conventional convolution/superposition algorithm to a recently released D-LBTE solver. PMID:24066294

  10. Prediction of color changes using the time-temperature superposition principle in liquid formulations.

    PubMed

    Mochizuki, Koji; Takayama, Kozo

    2014-01-01

    This study reports the results of applying the time-temperature superposition principle (TTSP) to the prediction of color changes in liquid formulations. A sample solution consisting of L-tryptophan and glucose was used as the model liquid formulation for the Maillard reaction. After accelerated aging treatment at elevated temperatures, the Commission Internationale de l'Eclairage (CIE) LAB color parameters (a*, b*, L*, and ΔE*ab) of the sample solution were measured using a spectrophotometer. The TTSP was then applied to a kinetic analysis of the color changes. The calculated values of the apparent activation energy of a*, b*, L*, and ΔE*ab were 105.2, 109.8, 91.6, and 103.7 kJ/mol, respectively. The predicted values of the color parameters at 40°C were calculated using Arrhenius plots for each of the color parameters. A comparison of the relationships between the experimental and predicted values of each color parameter revealed the coefficients of determination for a*, b*, L*, and ΔE*ab to be 0.961, 0.979, 0.960, and 0.979, respectively. All the R² values were sufficiently high, and these results suggested that the prediction was highly reliable. Kinetic analysis using the TTSP was successfully applied to calculating the apparent activation energy and to predicting the color changes at any temperature or duration.

  11. A high-order fast method for computing convolution integral with smooth kernel

    NASA Astrophysics Data System (ADS)

    Qiang, Ji

    2010-02-01

    In this paper we report on a high-order fast method to numerically calculate convolution integral with smooth non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for discrete summation. The method can have an arbitrarily high-order accuracy in principle depending on the number of points used in the integral approximation and a computational cost of O(N log(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h^4), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.

  12. A high-order fast method for computing convolution integral with smooth kernel

    SciTech Connect

    Qiang, Ji

    2009-09-28

    In this paper we report on a high-order fast method to numerically calculate convolution integral with smooth non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for discrete summation. The method can have an arbitrarily high-order accuracy in principle depending on the number of points used in the integral approximation and a computational cost of O(N log(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h^4), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.
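
    A minimal numpy sketch of the idea (not the authors' code): fold composite Simpson-rule weights into the integrand samples, then evaluate the discrete sum with a zero-padded FFT so no circular wrap-around occurs. The Gaussian kernel and grid sizes are illustrative.

        import numpy as np

        def conv_simpson_fft(f, kern, h):
            """g_i ~ integral f(y) K(x_i - y) dy on a uniform grid.
            f: f(x_0..x_{N-1}); kern: K at offsets -(N-1)h .. (N-1)h."""
            N = len(f)
            assert N % 2 == 1, "composite Simpson rule needs an odd sample count"
            w = np.ones(N)
            w[1:-1:2], w[2:-1:2] = 4.0, 2.0          # Simpson weights 1,4,2,...,4,1
            fw = f * w * (h / 3.0)                   # fold quadrature into f
            M = 4 * N                                # enough zero padding
            g = np.fft.irfft(np.fft.rfft(fw, M) * np.fft.rfft(kern, M), M)
            return g[N - 1:2 * N - 1]                # values at x_0 .. x_{N-1}

        N, Lx = 129, 2.0
        h = Lx / (N - 1)
        x = np.arange(N) * h
        f = np.sin(np.pi * x)
        offsets = (np.arange(2 * N - 1) - (N - 1)) * h
        g = conv_simpson_fft(f, np.exp(-offsets**2), h)   # a 1-D Gauss transform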

  13. Die and telescoping punch form convolutions in thin diaphragm

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Die and punch set forms convolutions in thin dished metal diaphragm without stretching the metal too thin at sharp curvatures. The die corresponds to the metal shape to be formed, and the punch consists of elements that progressively slide against one another under the restraint of a compressed-air cushion to mate with the die.

  14. Stacked Convolutional Denoising Auto-Encoders for Feature Representation.

    PubMed

    Du, Bo; Xiong, Wei; Wu, Jia; Zhang, Lefei; Zhang, Liangpei; Tao, Dacheng

    2016-03-16

    Deep networks have achieved excellent performance in learning representation from visual data. However, supervised deep models like convolutional neural networks require large quantities of labeled data, which are very expensive to obtain. To solve this problem, this paper proposes an unsupervised deep network, called the stacked convolutional denoising auto-encoders, which can map images to hierarchical representations without any label information. The network, optimized by layer-wise training, is constructed by stacking layers of denoising auto-encoders in a convolutional way. In each layer, high dimensional feature maps are generated by convolving features of the lower layer with kernels learned by a denoising auto-encoder. The auto-encoder is trained on patches extracted from feature maps in the lower layer to learn robust feature detectors. To better train the large network, a layer-wise whitening technique is introduced into the model. Before each convolutional layer, a whitening layer is embedded to sphere the input data. By layers of mapping, raw images are transformed into high-level feature representations which would boost the performance of the subsequent support vector machine classifier. The proposed algorithm is evaluated through extensive experiments and demonstrates superior classification performance to state-of-the-art unsupervised networks.
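
    A minimal single-layer sketch of a convolutional denoising auto-encoder, assuming PyTorch (an assumption; the paper does not use it) and omitting the layer-wise whitening described above.

        import torch
        import torch.nn as nn

        class ConvDAE(nn.Module):
            """One convolutional denoising auto-encoder layer: corrupt the input,
            encode with a convolution, decode with a transposed convolution, and
            train to reconstruct the clean input."""
            def __init__(self, c_in=1, c_hidden=16, k=5):
                super().__init__()
                self.encode = nn.Sequential(
                    nn.Conv2d(c_in, c_hidden, k, padding=k // 2), nn.ReLU())
                self.decode = nn.ConvTranspose2d(c_hidden, c_in, k, padding=k // 2)

            def forward(self, x, noise_std=0.3):
                x_noisy = x + noise_std * torch.randn_like(x)
                return self.decode(self.encode(x_noisy))

        model = ConvDAE()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x = torch.rand(32, 1, 28, 28)                 # stand-in image patches
        for _ in range(5):                            # a few reconstruction steps
            loss = nn.functional.mse_loss(model(x), x)
            opt.zero_grad()
            loss.backward()
            opt.step()
        feature_maps = model.encode(x)                # input to the next stacked layer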

  15. Advanced superposition methods for high speed turbopump vibration analysis

    NASA Technical Reports Server (NTRS)

    Nielson, C. E.; Campany, A. D.

    1981-01-01

    The small, high pressure Mark 48 liquid hydrogen turbopump was analyzed and dynamically tested to determine the cause of high speed vibration at an operating speed of 92,400 rpm. This approaches the design point operating speed of 95,000 rpm. The initial dynamic analysis in the design stage and subsequent further analysis of the rotor only dynamics failed to predict the vibration characteristics found during testing. An advanced procedure for dynamics analysis was used in this investigation. The procedure involves developing accurate dynamic models of the rotor assembly and casing assembly by finite element analysis. The dynamically instrumented assemblies are independently rap tested to verify the analytical models. The verified models are then combined by modal superposition techniques to develop a completed turbopump model where dynamic characteristics are determined. The results of the dynamic testing and analysis obtained are presented and methods of moving the high speed vibration characteristics to speeds above the operating range are recommended. Recommendations for use of these advanced dynamic analysis procedures during initial design phases are given.
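
    The modal superposition at the core of this procedure is, in textbook form (a sketch of the standard method, not the authors' exact formulation):

        % Mode-superposition solution of  M \ddot{x} + C \dot{x} + K x = f(t):
        x(t) = \sum_{i=1}^{m} \phi_i \, q_i(t),
        \qquad
        \ddot{q}_i + 2 \zeta_i \omega_i \dot{q}_i + \omega_i^{2} q_i = \phi_i^{T} f(t),
        % with mass-normalized mode shapes \phi_i (so \phi_i^{T} M \phi_j = \delta_{ij})
        % taken from the verified rotor and casing models, and the coupled
        % response recovered from a truncated set of m modes.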

  16. Solar Supergranulation Revealed as a Superposition of Traveling Waves

    NASA Technical Reports Server (NTRS)

    Gizon, L.; Duvall, T. L., Jr.; Schou, J.; Oegerle, William (Technical Monitor)

    2002-01-01

    40 years ago two new solar phenomena were described: supergranulation and the five-minute solar oscillations. While the oscillations have since been explained and exploited to determine the properties of the solar interior, the supergranulation has remained unexplained. The supergranules, appearing as convective-like cellular patterns of horizontal outward flow with a characteristic diameter of 30 Mm and an apparent lifetime of 1 day, have puzzling properties, including their apparent superrotation and the minute temperature variations over the cells. Using a 60-day sequence of data from the MDI (Michelson-Doppler Imager) instrument onboard the SOHO (Solar and Heliospheric Observatory) spacecraft, we show that the supergranulation pattern is formed by a superposition of traveling waves with periods of 5-10 days. The wave power is anisotropic with excess power in the direction of rotation and toward the equator, leading to spurious rotation rates and north-south flows as derived from correlation analyses. These newly discovered waves could play an important role in maintaining differential rotation in the upper convection zone by transporting angular momentum towards the equator.

  17. Superposition, Transition Probabilities and Primitive Observables in Infinite Quantum Systems

    NASA Astrophysics Data System (ADS)

    Buchholz, Detlev; Størmer, Erling

    2015-10-01

    The concepts of superposition and of transition probability, familiar from pure states in quantum physics, are extended to locally normal states on funnels of type I∞ factors. Such funnels are used in the description of infinite systems, appearing for example in quantum field theory or in quantum statistical mechanics; their respective constituents are interpreted as algebras of observables localized in an increasing family of nested spacetime regions. Given a generic reference state (expectation functional) on a funnel, e.g. a ground state or a thermal equilibrium state, it is shown that irrespective of the global type of this state all of its excitations, generated by the adjoint action of elements of the funnel, can coherently be superimposed in a meaningful manner. Moreover, these states are the extreme points of their convex hull and as such are analogues of pure states. As further support of this analogy, transition probabilities are defined, complete families of orthogonal states are exhibited and a one-to-one correspondence between the states and families of minimal projections on a Hilbert space is established. The physical interpretation of these quantities relies on a concept of primitive observables. It extends the familiar framework of observable algebras and avoids some counter intuitive features of that setting. Primitive observables admit a consistent statistical interpretation of corresponding measurements and their impact on states is described by a variant of the von Neumann-Lüders projection postulate.

  18. Hybrid multi-Bernoulli CPHD filter for superpositional sensors

    NASA Astrophysics Data System (ADS)

    Nannuru, Santosh; Coates, Mark

    2014-06-01

    We propose, for the superpositional sensor scenario, a hybrid between the multi-Bernoulli filter and the cardinalized probability hypothesis density (CPHD) filter. We use a multi-Bernoulli random finite set (RFS) to model existing targets and we use an independent and identically distributed cluster (IIDC) RFS to model newborn targets and targets with low probability of existence. Our main contributions are providing the update equations of the hybrid filter and identifying computationally tractable approximations. We achieve this by defining conditional probability hypothesis densities (PHDs), where the conditioning is on one of the targets having a specified state. The filter performs an approximate Bayes update of the conditional PHDs. In parallel, we perform a cardinality update of the IIDC RFS component in order to estimate the number of newborn targets. We provide an auxiliary particle filter based implementation of the proposed filter and compare it with CPHD and multi-Bernoulli filters in a simulated multitarget tracking application.

  19. De-convoluting mixed crude oil in Prudhoe Bay Field, North Slope, Alaska

    USGS Publications Warehouse

    Peters, K.E.; Scott, Ramos L.; Zumberge, J.E.; Valin, Z.C.; Bird, K.J.

    2008-01-01

    Seventy-four crude oil samples from the Barrow arch on the North Slope of Alaska were studied to assess the relative volumetric contributions from different source rocks to the giant Prudhoe Bay Field. We applied alternating least squares to concentration data (ALS-C) for 46 biomarkers in the range C19-C35 to de-convolute mixtures of oil generated from carbonate rich Triassic Shublik Formation and clay rich Jurassic Kingak Shale and Cretaceous Hue Shale-gamma ray zone (Hue-GRZ) source rocks. ALS-C results for 23 oil samples from the prolific Ivishak Formation reservoir of the Prudhoe Bay Field indicate approximately equal contributions from Shublik Formation and Hue-GRZ source rocks (37% each), less from the Kingak Shale (26%), and little or no contribution from other source rocks. These results differ from published interpretations that most oil in the Prudhoe Bay Field originated from the Shublik Formation source rock. With few exceptions, the relative contribution of oil from the Shublik Formation decreases, while that from the Hue-GRZ increases in reservoirs along the Barrow arch from Point Barrow in the northwest to Point Thomson in the southeast (approximately 250 miles or 400 km). The Shublik contribution also decreases to a lesser degree between fault blocks within the Ivishak pool from west to east across the Prudhoe Bay Field. ALS-C provides a robust means to calculate the relative amounts of two or more oil types in a mixture. Furthermore, ALS-C does not require that pure end member oils be identified prior to analysis or that laboratory mixtures of these oils be prepared to evaluate mixing. ALS-C of biomarkers reliably de-convolutes mixtures because the concentrations of compounds in mixtures vary as linear functions of the amount of each oil type. ALS of biomarker ratios (ALS-R) cannot be used to de-convolute mixtures because compound ratios vary as nonlinear functions of the amount of each oil type.
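
    A crude projected-ALS sketch of the underlying factorization (not the authors' ALS-C implementation): the oils-by-biomarkers concentration matrix is factored into nonnegative mixing fractions and end-member signatures; the number of sources and the iteration count are illustrative.

        import numpy as np

        def als_unmix(D, n_sources=3, n_iter=200, seed=0):
            """Alternating least squares: factor D (oils x biomarkers) as A @ S,
            with A >= 0 (rows = mixing fractions, summing to 1) and S >= 0
            (rows = end-member biomarker signatures)."""
            rng = np.random.default_rng(seed)
            A = rng.random((D.shape[0], n_sources))
            for _ in range(n_iter):
                S = np.clip(np.linalg.lstsq(A, D, rcond=None)[0], 0.0, None)
                A = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0.0, None)
                A /= A.sum(axis=1, keepdims=True)        # fractions sum to one
            return A, S

        D = np.random.rand(74, 46)        # 74 oils x 46 biomarker concentrations
        fractions, signatures = als_unmix(D)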

  20. Comparison of dose calculation algorithms in phantoms with lung equivalent heterogeneities under conditions of lateral electronic disequilibrium.

    PubMed

    Carrasco, P; Jornet, N; Duch, M A; Weber, L; Ginjaume, M; Eudaldo, T; Jurado, D; Ruiz, A; Ribas, M

    2004-10-01

    An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung equivalent heterogeneity, by means of several detectors, and compared against the dose values predicted by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane-parallel and cylindrical ionization chambers, and beam profiles with films. Additionally, Monte Carlo simulations by means of the PENELOPE code were performed. Four different field sizes (10 x 10, 5 x 5, 2 x 2, and 1 x 1 cm2) and two lung equivalent materials (CIRS, with electron density relative to water of 0.195, and St. Bartholomew Hospital, London, 0.244-0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that correlated successfully with the measured values with a 2% average inside all media were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose on the beam axis was found for the EqTAR algorithm inside the CIRS lung equivalent material in a 2 x 2 cm2 18 MV x-ray beam. In these conditions, average and maximum differences against the TLD measurements were 32% and 39%, respectively. In the water equivalent part of the phantom every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces where differences up to 24% were found for 2 x 2 cm2 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo simulations, yielding minimal

  1. SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac

    SciTech Connect

    Sugimoto, S; Inoue, T; Kurokawa, C; Usui, K; Sasai, K; Utsunomiya, S; Ebe, K

    2014-06-01

    Purpose: Dynamic tumor tracking using the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) has been available when respiratory motion is significant. The irradiation accuracy of the dynamic tumor tracking has been reported to be excellent. In addition to the irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because its multiple phases have to be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the degrees of freedom of the gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using linear transformations between coordinate systems. Methods: The linear transformation matrices between the coordinate systems with and without gimbal swings were constructed using combinations of translation and rotation matrices. The coordinate system where the radiation source is at the origin and the beam axis lies along the z axis was adopted. The transformation can be divided into the translation from the radiation source to the gimbal rotation center, the two rotations around the center relating to the gimbal swings, and the translation from the gimbal center back to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can be performed as in the case of no gimbal swing. The algorithm was implemented in the treatment planning system PlanUNC (University of North Carolina, NC). The convolution/superposition algorithm was used. The dose calculations with and without gimbal swings were performed for a 3 × 3 cm² field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam. No significant additional time due to the gimbal swing was observed. Conclusions: The dose calculation algorithm for the finite gimbal swing was implemented. The calculation time was moderate.
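
    A minimal numpy sketch of the described composition using 4 x 4 homogeneous matrices: translate from the radiation source to the gimbal rotation center, apply the two gimbal-swing rotations, and translate back. Axis conventions, angle values, and the source-to-center distance below are illustrative assumptions.

        import numpy as np

        def translation(t):
            T = np.eye(4)
            T[:3, 3] = t
            return T

        def rot_x(a):                                  # one gimbal swing
            c, s = np.cos(a), np.sin(a)
            return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

        def rot_y(a):                                  # the other gimbal swing
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

        def gimbal_transform(pan, tilt, d_src_to_center):
            """Map the no-swing beam frame (source at the origin, beam along +z)
            to the swung frame: translate to the gimbal rotation center, rotate
            for the two swings, translate back to the source."""
            to_center = translation([0.0, 0.0, -d_src_to_center])
            back = translation([0.0, 0.0, d_src_to_center])
            return back @ rot_x(tilt) @ rot_y(pan) @ to_center

        M = gimbal_transform(np.radians(1.5), np.radians(-0.8), 100.0)
        p = np.array([0.0, 0.0, 100.0, 1.0])   # a point on the unswung beam axis
        print(M @ p)                            # its location under the gimbal swing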

  2. Time-efficient flexible superposition of medium-sized molecules

    NASA Astrophysics Data System (ADS)

    Lemmen, Christian; Lengauer, Thomas

    1997-07-01

    We present an efficient algorithm for the structural alignment of medium-sized organic molecules. The algorithm has been developed for applications in 3D QSAR and in receptor modeling. The method assumes one of the molecules, the reference ligand, to be presented in the conformation that it adopts inside the receptor pocket. The second molecule, the test ligand, is considered to be flexible, and is assumed to be given in an arbitrary low-energy conformation. Ligand flexibility is modeled by decomposing the test ligand into molecular fragments, such that ring systems are completely contained in a single fragment. Conformations of fragments and torsional angles of single bonds are taken from a small finite set, which depends on the fragment and bond, respectively. The algorithm superimposes a distinguished base fragment of the test ligand onto a suitable region of the reference ligand and then attaches the remaining fragments of the test ligand in a step-by-step fashion. During this process, a scoring function is optimized that encompasses bonding terms and terms accounting for steric overlap as well as for similarity of chemical properties of both ligands. The algorithm has been implemented in the FLEXS system. To validate the quality of the produced results, we have selected a number of examples for which the mutual superposition of two ligands is experimentally given by the comparison of the binding geometries known from the crystal structures of their corresponding protein-ligand complexes. On more than two-thirds of the test examples the algorithm produces rms deviations of the predicted versus the observed conformation of the test ligand below 1.5 Å. The run time of the algorithm on a single problem instance is a few minutes on a present-day workstation. The overall goal of this research is to drastically reduce run times, while limiting the inaccuracies of the model and the computation to a tolerable level.

  3. A reciprocal space approach for locating symmetry elements in Patterson superposition maps

    SciTech Connect

    Hendrixson, T.

    1990-09-21

    A method for determining the location and possible existence of symmetry elements in Patterson superposition maps has been developed. A comparison of the original superposition map and a superposition map operated on by the symmetry element gives possible translations to the location of the symmetry element. A reciprocal space approach using structure factor-like quantities obtained from the Fourier transform of the superposition function is then used to determine the "best" location of the symmetry element. Constraints based upon the space group requirements are also used as a check on the locations. The locations of the symmetry elements are used to modify the Fourier transform coefficients of the superposition function to give an approximation of the structure factors, which are then refined using the EG relation. The analysis of several compounds using this method is presented. Reciprocal space techniques for locating multiple images in the superposition function are also presented, along with methods to remove the effect of multiple images in the Fourier transform coefficients of the superposition map. In addition, crystallographic studies of the extended chain structure of (NHC5H5)SbI4 and of the twinning method of the orthorhombic form of the high-Tc superconductor YBa2Cu3O7-x are presented. 54 refs.

  4. Adaptive Estimation of Active Contour Parameters Using Convolutional Neural Networks and Texture Analysis.

    PubMed

    Hoogi, Assaf; Subramaniam, Arjun; Veerapaneni, Rishi; Rubin, Daniel

    2016-11-11

    In this paper, we propose a generalization of the level set segmentation approach by supplying a novel method for adaptive estimation of active contour parameters. The presented segmentation method is fully automatic once the lesion has been detected. First, the location of the level set contour relative to the lesion is estimated using a convolutional neural network (CNN). The CNN has two convolutional layers for feature extraction, which lead into dense layers for classification. Second, the output CNN probabilities are then used to adaptively calculate the parameters of the active contour functional during the segmentation process. Finally, the adaptive window size surrounding each contour point is re-estimated by an iterative process that considers lesion size and spatial texture. We demonstrate the capabilities of our method on a dataset of 164 MRI and 112 CT images of liver lesions that includes low contrast and heterogeneous lesions as well as noisy images. To illustrate the strength of our method, we evaluated it against state-of-the-art CNN-based and active contour techniques. For all cases, our method, as assessed by Dice similarity coefficients, performed significantly better than currently available methods. An average Dice improvement of 0.27 was found across the entire dataset over all comparisons. We also analyzed two challenging subsets of lesions and obtained a significant Dice improvement of 0.24 with our method (p < 0.001, Wilcoxon).

  5. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.

    1987-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that problem solutions can be added together to obtain composite solutions. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader.

  6. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, T.E.; Franke, O.L.; Bennett, G.D.

    1984-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that solutions to individual problems can be added together to obtain solutions to complex problems. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to groundwater hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader. (USGS)
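
    Formally (a standard statement consistent with both reports): if L is a linear operator, then

        L(h_1) = f_1, \quad L(h_2) = f_2
        \;\Longrightarrow\;
        L(c_1 h_1 + c_2 h_2) = c_1 f_1 + c_2 f_2 .
        % Example: the composite drawdown from two wells pumping at rates
        % Q_1 and Q_2 is the sum of the drawdowns each well would cause alone.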

  7. Two-dimensional convolute integers for analytical instrumentation

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1982-01-01

    As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical with their one-dimensional counterpart, that is, as a weighted nearest-neighbor moving average with zero phase shift, using convolute integer (universal number) weighting coefficients.
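
    A minimal sketch of deriving such a two-dimensional smoothing kernel by local polynomial least squares, the 2-D analogue of the Savitzky-Golay construction; the window half-width and polynomial order below are illustrative.

        import numpy as np
        from itertools import product

        def sg2d_kernel(half=2, order=2):
            """Least-squares 2-D smoothing kernel on a (2*half+1)^2 window: fit a
            bivariate polynomial of total degree <= order; the kernel is the row
            of the pseudo-inverse that evaluates the fit at the window center."""
            pts = [(i, j) for i in range(-half, half + 1)
                          for j in range(-half, half + 1)]
            terms = [(p, q) for p, q in product(range(order + 1), repeat=2)
                     if p + q <= order]                    # (0, 0) term comes first
            A = np.array([[i**p * j**q for (p, q) in terms] for (i, j) in pts],
                         dtype=float)
            weights = np.linalg.pinv(A)[0]                 # constant term = fit at (0, 0)
            return weights.reshape(2 * half + 1, 2 * half + 1)

        k = sg2d_kernel()
        print(k.sum())   # ~1.0: a normalized, zero-phase-shift moving average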

  8. UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.

  9. Spectral density of generalized Wishart matrices and free multiplicative convolution.

    PubMed

    Młotkowski, Wojciech; Nowak, Maciej A; Penson, Karol A; Życzkowski, Karol

    2015-07-01

    We investigate the level density for several ensembles of positive random matrices of a Wishart-like structure, W=XX†, where X stands for a non-Hermitian random matrix. In particular, making use of the Cauchy transform, we study the free multiplicative powers of the Marchenko-Pastur (MP) distribution, MP^{⊠s}, which for an integer s yield Fuss-Catalan distributions corresponding to a product of s independent square random matrices, X=X_1⋯X_s. New formulas for the level densities are derived for s=3 and s=1/3. Moreover, the level density corresponding to the generalized Bures distribution, given by the free convolution of arcsine and MP distributions, is obtained. We also explain the reason for such a curious convolution. The technique proposed here allows for the derivation of the level densities for several other cases.

  10. Spectral density of generalized Wishart matrices and free multiplicative convolution

    NASA Astrophysics Data System (ADS)

    Młotkowski, Wojciech; Nowak, Maciej A.; Penson, Karol A.; Życzkowski, Karol

    2015-07-01

    We investigate the level density for several ensembles of positive random matrices of a Wishart-like structure, W = XX†, where X stands for a non-Hermitian random matrix. In particular, making use of the Cauchy transform, we study the free multiplicative powers of the Marchenko-Pastur (MP) distribution, MP^{⊠s}, which for an integer s yield Fuss-Catalan distributions corresponding to a product of s independent square random matrices, X = X_1⋯X_s. New formulas for the level densities are derived for s = 3 and s = 1/3. Moreover, the level density corresponding to the generalized Bures distribution, given by the free convolution of arcsine and MP distributions, is obtained. We also explain the reason for such a curious convolution. The technique proposed here allows for the derivation of the level densities for several other cases.
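
    For reference, the Marchenko-Pastur density in the unit-ratio case and the Fuss-Catalan moments of its free multiplicative powers (standard results consistent with the abstract):

        % Marchenko-Pastur density (unit ratio, the s = 1 case):
        \rho_{\mathrm{MP}}(x) = \frac{1}{2\pi} \sqrt{\frac{4 - x}{x}},
        \qquad x \in (0, 4],
        % and the free multiplicative powers MP^{\boxtimes s} have moments
        % given by the Fuss-Catalan numbers:
        \int x^{n} \, \mathrm{MP}^{\boxtimes s}(\mathrm{d}x)
          = \frac{1}{sn + 1} \binom{(s+1)n}{n}.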

  11. Rationale-Augmented Convolutional Neural Networks for Text Classification

    PubMed Central

    Zhang, Ye; Marshall, Iain; Wallace, Byron C.

    2016-01-01

    We present a new Convolutional Neural Network (CNN) model for text classification that jointly exploits labels on documents and their constituent sentences. Specifically, we consider scenarios in which annotators explicitly mark sentences (or snippets) that support their overall document categorization, i.e., they provide rationales. Our model exploits such supervision via a hierarchical approach in which each document is represented by a linear combination of the vector representations of its component sentences. We propose a sentence-level convolutional model that estimates the probability that a given sentence is a rationale, and we then scale the contribution of each sentence to the aggregate document representation in proportion to these estimates. Experiments on five classification datasets that have document labels and associated rationales demonstrate that our approach consistently outperforms strong baselines. Moreover, our model naturally provides explanations for its predictions. PMID:28191551

  12. Self-Taught convolutional neural networks for short text clustering.

    PubMed

    Xu, Jiaming; Xu, Bo; Wang, Peng; Zheng, Suncong; Tian, Guanhua; Zhao, Jun; Xu, Bo

    2017-04-01

    Short text clustering is a challenging problem due to the sparseness of its text representation. Here we propose a flexible Self-Taught Convolutional neural network framework for Short Text Clustering (dubbed STC^2), which can flexibly and successfully incorporate more useful semantic features and learn non-biased deep text representations in an unsupervised manner. In our framework, the original raw text features are first embedded into compact binary codes by using an existing unsupervised dimensionality reduction method. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, while the output units are used to fit the pre-trained binary codes in the training process. Finally, we obtain the optimal clusters by employing K-means to cluster the learned representations. Extensive experimental results demonstrate that the proposed framework is effective and flexible, and outperforms several popular clustering methods when tested on three public short text datasets.

  13. Deep learning for steganalysis via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis to learn features automatically via deep learning models. We propose a customized convolutional neural network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database.

  14. A new computational decoding complexity measure of convolutional codes

    NASA Astrophysics Data System (ADS)

    Benchimol, Isaac B.; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.

    2014-12-01

    This paper presents a computational complexity measure of convolutional codes well suitable for software implementations of the Viterbi algorithm (VA) operating with hard decision. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of implementation of each arithmetic operation is determined in terms of machine cycles taken by its execution using a typical digital signal processor widely used for low-power telecommunications applications.

  15. New syndrome decoder for (n, 1) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that differs from, and is simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.

  17. On the growth and form of cortical convolutions

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.

    2016-06-01

    The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. Altogether, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.

  18. Fast convolution quadrature for the wave equation in three dimensions

    NASA Astrophysics Data System (ADS)

    Banjai, L.; Kachanovska, M.

    2014-12-01

    This work addresses the numerical solution of time-domain boundary integral equations arising from acoustic and electromagnetic scattering in three dimensions. The semidiscretization of the time-domain boundary integral equations by Runge-Kutta convolution quadrature leads to a lower triangular Toeplitz system of size N. This system can be solved recursively in almost linear time (O(N log² N)), but requires the construction of O(N) dense spatial discretizations of the single layer boundary operator for the Helmholtz equation. This work introduces an improvement of this algorithm that makes it possible to solve the scattering problem in almost linear time. The new approach is based on two main ingredients: near-field reuse and the application of data-sparse techniques. Exponential decay of the Runge-Kutta convolution weights w_n^h(d) outside of a neighborhood of d ≈ nh (where h is a time step) makes it possible to avoid constructing the near-field (i.e. singular and near-singular integrals) for most of the discretizations of the single layer boundary operators (near-field reuse). The far-field of these matrices is compressed with the help of data-sparse techniques, namely H-matrices and the high-frequency fast multipole method. Numerical experiments indicate the efficiency of the proposed approach compared to the conventional Runge-Kutta convolution quadrature algorithm.

  19. A model of traffic signs recognition with convolutional neural network

    NASA Astrophysics Data System (ADS)

    Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing

    2016-10-01

    In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. All of these factors make automated recognition of traffic signs challenging. Deep learning has recently provided a new way to solve this kind of problem. A deep network can automatically learn features from a large number of data samples and obtain excellent recognition performance. We therefore approach the task of traffic sign recognition as a general vision problem, with few assumptions specific to road signs. We propose a Convolutional Neural Network (CNN) model and apply it to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes the collected traffic sign images as input, alternates convolutional and subsampling layers, and automatically extracts the features for recognition of the traffic sign images. The proposed model includes an input layer, three convolutional layers, three subsampling layers, a fully-connected layer, and an output layer. To validate the proposed model, experiments were carried out on the public dataset of the China competition on fuzzy image processing. Experimental results show that the proposed model achieves a recognition accuracy of 99.01% on the training dataset and 92% in the preliminary contest, ranking fourth best.
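
    The stated layer layout (input, three convolutional layers, three subsampling layers, one fully-connected layer, output) can be sketched directly. Channel counts, kernel sizes and the 32x32 input below are illustrative assumptions, not the authors' values:

        import torch
        import torch.nn as nn

        class TrafficSignCNN(nn.Module):
            def __init__(self, n_classes, in_ch=3):
                super().__init__()
                self.features = nn.Sequential(  # 3 x (conv + subsampling)
                    nn.Conv2d(in_ch, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(64 * 4 * 4, 128), nn.ReLU(),  # for 32x32 inputs
                    nn.Linear(128, n_classes),              # output layer
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        model = TrafficSignCNN(n_classes=10)        # class count is dataset-dependent
        logits = model(torch.randn(1, 3, 32, 32))   # one 32x32 RGB sign image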

  20. Fine-grained representation learning in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Luo, Chang; Wang, Jie

    2016-03-01

    Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariances, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in the preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we propose the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law can guide CAEs to extract better fine-grained features and to perform better in multiclass classification tasks. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representation in other convolutional neural networks.

  1. Automatic localization of vertebrae based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Yang, Feng; Mu, Wei; Yang, Caiyun; Yang, Xin; Tian, Jie

    2015-03-01

    Localization of the vertebrae is of importance in many medical applications. For example, the vertebrae can serve as landmarks in image registration. They can also provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebrae localization method using convolutional neural networks (CNN). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture. One is used to distinguish the vertebrae from other tissues in the chest, and the other is aimed at detecting the centers of the vertebrae. The architecture contains two convolutional layers, both of which are followed by a max-pooling layer. Then the output feature vector from the max-pooling layer is fed into a multilayer perceptron (MLP) classifier which has one hidden layer. Experiments were performed on ten chest CT images. We used a leave-one-out strategy to train and test the proposed method. Quantitative comparison between the predicted centers and ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.

  2. Calcium transport in the rabbit superficial proximal convoluted tubule

    SciTech Connect

    Ng, R.C.; Rouse, D.; Suki, W.N.

    1984-09-01

    Calcium transport was studied in isolated S2 segments of rabbit superficial proximal convoluted tubules. 45Ca was added to the perfusate for measurement of lumen-to-bath flux (J_lb^Ca), to the bath for bath-to-lumen flux (J_bl^Ca), and to both perfusate and bath for net flux (J_net^Ca). In these studies, the perfusate consisted of an equilibrium solution that was designed to minimize water flux or electrochemical potential differences (PD). Under these conditions, J_lb^Ca (9.1 ± 1.0 peq/(mm·min)) was not different from J_bl^Ca (7.3 ± 1.3 peq/(mm·min)), and J_net^Ca was not different from zero, which suggests that calcium transport in the superficial proximal convoluted tubule is due primarily to passive transport. The efflux coefficient was (9.5 ± 1.2) × 10^-5 cm/s, which was not significantly different from the influx coefficient, (7.0 ± 1.3) × 10^-5 cm/s. When the PD was made positive or negative with use of different perfusates, net calcium absorption or secretion was demonstrated, respectively, which supports a major role for passive transport. These results indicate that in the superficial proximal convoluted tubule of the rabbit, passive driving forces are the major determinants of calcium transport.

  3. Single crystal EPR, optical absorption and superposition model study of Cr 3+ doped ammonium dihydrogen phosphate

    NASA Astrophysics Data System (ADS)

    Kripal, Ram; Pandey, Sangita

    2010-06-01

    The electron paramagnetic resonance (EPR) studies are carried out on Cr3+ ion doped ammonium dihydrogen phosphate (ADP) single crystals at room temperature. Four magnetically inequivalent sites for chromium are observed. No hyperfine structure is obtained. The crystal-field and spin Hamiltonian parameters are calculated from the resonance lines obtained at different angular rotations. The zero field and spin Hamiltonian parameters of the Cr3+ ion in ADP are: |D| = (257 ± 2) × 10^-4 cm^-1, |E| = (79 ± 2) × 10^-4 cm^-1, g = 1.9724 ± 0.0002 for site I; |D| = (257 ± 2) × 10^-4 cm^-1, |E| = (77 ± 2) × 10^-4 cm^-1, g = 1.9727 ± 0.0002 for site II; |D| = (259 ± 2) × 10^-4 cm^-1, |E| = (78 ± 2) × 10^-4 cm^-1, g = 1.9733 ± 0.0002 for site III; and |D| = (259 ± 2) × 10^-4 cm^-1, |E| = (77 ± 2) × 10^-4 cm^-1, g = 1.973 ± 0.0002 for site IV. The site symmetry of the Cr3+ doped single crystal is discussed on the basis of EPR data. The Cr3+ ion enters the lattice substitutionally, replacing the NH4+ sites. The optical absorption spectra are recorded in the 195-925 nm wavelength range at room temperature. The energy values of different orbital levels are determined. On the basis of EPR and optical data, the nature of bonding in the crystal is discussed. The calculated values of the Racah interelectronic repulsion parameters (B and C), cubic crystal-field splitting parameter (Dq) and nephelauxetic parameters (h and k) are: B = 640, C = 3070, Dq = 2067 cm^-1, h = 1.44 and k = 0.21. ZFS parameters are also determined using Bkq parameters from the superposition model.

  4. SUPERPOSE-An Excel Visual Basic program for fracture modeling based on the stress superposition method

    NASA Astrophysics Data System (ADS)

    Ismail Ozkaya, Sait

    2014-03-01

    An Excel Visual Basic program, SUPERPOSE, is presented to predict the distribution, relative size and strike of tensile and shear fractures on anticlinal structures. The program is based on the concept of stress superposition: the addition of curvature-related local tensile stress and regional far-field stress. The method accurately predicts fractures on many Middle East oil fields that were formed under a strike-slip regime as duplexes, flower structures or inverted structures. The program operates on the Excel platform. It reads the parameters and structural grid data from an Excel template and writes the results to the same template. The program has two routines to import structural grid data in the Eclipse and Zmap formats. The platform of SUPERPOSE is a single-layer structural grid of a given cell size (e.g. 50×50 m). In the final output, a single tensile or two conjugate shear fractures are placed in each cell if fracturing criteria are satisfied; otherwise the cell is left blank. The strike of the representative fracture(s) is calculated exactly, whereas the length is an index of fracture porosity (fracture density × length × aperture) within that cell.

  5. Probing the conductance superposition law in single-molecule circuits with parallel paths.

    PubMed

    Vazquez, H; Skouta, R; Schneebeli, S; Kamenetska, M; Breslow, R; Venkataraman, L; Hybertsen, M S

    2012-10-01

    According to Kirchhoff's circuit laws, the net conductance of two parallel components in an electronic circuit is the sum of the individual conductances. However, when the circuit dimensions are comparable to the electronic phase coherence length, quantum interference effects play a critical role, as exemplified by the Aharonov-Bohm effect in metal rings. At the molecular scale, interference effects dramatically reduce the electron transfer rate through a meta-connected benzene ring when compared with a para-connected benzene ring. For longer conjugated and cross-conjugated molecules, destructive interference effects have been observed in the tunnelling conductance through molecular junctions. Here, we investigate the conductance superposition law for parallel components in single-molecule circuits, particularly the role of interference. We synthesize a series of molecular systems that contain either one backbone or two backbones in parallel, bonded together cofacially by a common linker on each end. Single-molecule conductance measurements and transport calculations based on density functional theory show that the conductance of a double-backbone molecular junction can be more than twice that of a single-backbone junction, providing clear evidence for constructive interference.
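
    In the Landauer picture (a textbook relation, not the authors' DFT treatment), the role of interference in the two-backbone junction can be made explicit. Writing t_1 and t_2 for the transmission amplitudes of the two paths:

        \[
          G_{\mathrm{double}} = G_0\,\lvert t_1 + t_2\rvert^2
            = G_0\bigl(\lvert t_1\rvert^2 + \lvert t_2\rvert^2
              + 2\,\mathrm{Re}\,t_1^{*}t_2\bigr),
          \qquad G_0 = \frac{2e^2}{h}.
        \]

    With equal, in-phase amplitudes the cross term doubles the classical sum, so the double-backbone conductance can reach four times the single-backbone value, consistent with the "more than twice" observation above.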

  6. Evidence for transcriptase quantum processing implies entanglement and decoherence of superposition proton states.

    PubMed

    Cooper, W Grant

    2009-08-01

    Evidence requiring transcriptase quantum processing is identified, and elementary quantum methods are used to qualitatively describe the origins and consequences of time-dependent coherent proton states populating informational DNA base pair sites in T4 phage, designated by G-C → G'-C', G-C → *G-*C and A-T → *A-*T. Coherent states at these 'point' DNA lesions are introduced as consequences of the hydrogen bond arrangement keto-amino → enol-imine, where product protons are shared between two sets of indistinguishable electron lone pairs and thus participate in coupled quantum oscillations at frequencies of approximately 10^13 s^-1. This quantum mixing of proton energy states introduces stability enhancements of approximately 0.25-7 kcal/mole. Transcriptase genetic specificity is determined by hydrogen bond components contributing to the formation of complementary interstrand hydrogen bonds which, in these cases, is variable due to coupled quantum oscillations of coherent enol-imine protons. The transcriptase deciphers and executes genetic specificity instructions by implementing measurements on superposition proton states at G'-C', *G-*C and *A-*T sites in an interval Δt < 10^-13 s. After initiation of transcriptase measurement, model calculations indicate that the proton decoherence time, τ_D, satisfies the relation Δt < τ_D, giving rise to the substitutions G' → C, *C → T and *G → A. Measurements of the 37 °C lifetimes of the keto-amino DNA hydrogen bond indicate a range of approximately 3200-68,000 yrs. Arguments are presented that quantum uncertainty limits on amino protons may drive the keto-amino → enol-imine arrangement. Data imply that natural selection at the quantum level has generated effective schemes (a) for introducing superposition proton states, at rates appropriate for DNA evolution, in decoherence-free subspaces and (b) for creating entanglement states that augment (i

  7. Collapsing a perfect superposition to a chosen quantum state without measurement.

    PubMed

    Younes, Ahmed; Abdel-Aty, Mahmoud

    2014-01-01

    Given a perfect superposition of 2^n states on a quantum system of n qubits, we propose a fast quantum algorithm for collapsing the perfect superposition to a chosen quantum state |φ⟩ without applying any measurements. The basic idea is to use a phase destruction mechanism. Two operators are used: the first applies a phase shift and a temporary entanglement to mark |φ⟩ in the superposition, and the second applies selective phase shifts on the states in the superposition according to their Hamming distance from |φ⟩. The generated state can be used as an excellent input state for testing quantum memories and linear optics quantum computers. We make no assumptions about the used operators and applied quantum gates, but our result implies that for this purpose the number of qubits in the quantum register offers no advantage, in principle, over the obvious measurement-based feedback protocol.

  8. Reproducible mesoscopic superpositions of Bose-Einstein condensates and mean-field chaos

    SciTech Connect

    Gertjerenken, Bettina; Arlinghaus, Stephan; Teichmann, Niklas; Weiss, Christoph

    2010-08-15

    In a parameter regime for which the mean-field (Gross-Pitaevskii) dynamics becomes chaotic, mesoscopic quantum superpositions in phase space can occur in a double-well potential, which is shaken periodically. For experimentally realistic initial states, such as the ground state of some 100 atoms, the emergence of mesoscopic quantum superpositions in phase space is investigated numerically. It is shown to be reproducible, even if the initial conditions change slightly. Although the final state is not a perfect superposition of two distinct phase states, the superposition is reached an order of magnitude faster than in the case of the collapse-and-revival phenomenon. Furthermore, a generator of entanglement is identified.

  9. Superpositioning of Digital Elevation Data with Analog Imagery for Data Editing,

    DTIC Science & Technology

    1984-01-01

    The Topographic Developments Laboratory of the U.S. Army Engineer Topographic Laboratories (ETL) has established the Photogrammetric Technology ... Integration (PTI) testbed system for the evaluation of superpositioning techniques utilizing electronically scanned hardcopy imagery with overlayed digital

  10. Resilience to decoherence of the macroscopic quantum superpositions generated by universally covariant optimal quantum cloning

    SciTech Connect

    Spagnolo, Nicolo; Sciarrino, Fabio; De Martini, Francesco

    2010-09-15

    We show that the quantum states generated by universal optimal quantum cloning of a single photon represent a universal set of quantum superpositions resilient to decoherence. We adopt the Bures distance as a tool to investigate the persistence of quantum coherence of these quantum states. According to this analysis, the process of universal cloning realizes a class of quantum superpositions that exhibits a covariance property in a lossy configuration over the complete set of polarization states on the Bloch sphere.

  11. Strong-Driving-Assisted Preparation of Superpositions of Two-Mode Coherent States in Cavity QED

    NASA Astrophysics Data System (ADS)

    Su, Wan-Jun; Huang, Jian-Min

    2011-09-01

    A scheme is proposed for preparing superpositions of two-mode coherent states along a straight line with controllable weighting factors for a two-mode cavity field. In this scheme, two-level atoms driven by a classical field are sent through a two-mode cavity initially in the vacuum state. Detection of the atoms then leaves the cavity field in a two-mode superposition of coherent states.

  12. Using least median of squares for structural superposition of flexible proteins

    PubMed Central

    Liu, Yu-Shen; Fang, Yi; Ramani, Karthik

    2009-01-01

    Background The conventional superposition methods use an ordinary least squares (LS) fit for structural comparison of two different conformations of the same protein. The main problem of the LS fit is that it is sensitive to outliers, i.e. large displacements between the superimposed structures. Results To overcome this problem, we present a new algorithm to overlap two protein conformations by their atomic coordinates using a robust statistics technique: least median of squares (LMS). In order to effectively approximate the LMS optimization, the forward search technique is utilized. Our algorithm can automatically detect and superimpose the rigid core regions of two conformations with small or large displacements. In contrast, most existing superposition techniques strongly depend on an initial LS estimate over the entire atom sets of the proteins. They may fail on structural superposition of two conformations with large displacements. The presented LMS fit can be considered an alternative and complementary tool for structural superposition. Conclusion The proposed algorithm is robust and does not require any prior knowledge of the flexible regions. Furthermore, we show that the LMS fit can be extended to multiple-level superposition between two conformations with several rigid domains. Our fit tool has produced successful superpositions when applied to proteins for which two conformations are known. The binary executable program for the Windows platform, tested examples, and database are available from . PMID:19159484
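
    A minimal sketch of the LMS idea, using random subsets with a standard Kabsch least-squares fit per subset (the paper approximates LMS with a forward search instead; names here are illustrative):

        import numpy as np

        def kabsch(P, Q):
            # Least-squares rotation R and translation t mapping P onto Q.
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
            d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, cQ - R @ cP

        def lms_superpose(P, Q, n_trials=200, subset=10, seed=None):
            # Fit many random subsets; keep the transform whose MEDIAN
            # squared residual over all atoms is smallest, which is robust
            # to the large displacements of flexible regions.
            rng = np.random.default_rng(seed)
            best_med, best_Rt = np.inf, None
            for _ in range(n_trials):
                idx = rng.choice(len(P), size=subset, replace=False)
                R, t = kabsch(P[idx], Q[idx])
                res = np.sum((P @ R.T + t - Q) ** 2, axis=1)
                if np.median(res) < best_med:
                    best_med, best_Rt = np.median(res), (R, t)
            return best_Rt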

  13. SU-E-T-31: A Fast Finite Size Pencil Beam (FSPB) Convolution Algorithm for a New Co-60 Arc Therapy Machine

    SciTech Connect

    Chibani, O; Eldib, A; Ma, C

    2015-06-15

    Purpose: To present a fast Finite Size Pencil Beam (FSPB) convolution algorithm for a new Co-60 arc therapy machine. The FSPB algorithm accounts for (i) strong angular divergence (short SAD), (ii) the heterogeneity effect for primary attenuation, and (iii) the source energy spectrum. Methods: The FSPB algorithm is based on a 0.5×0.5-cm² dose kernel calculated using the GEPTS (Gamma Electron and Positron Transport System) Monte Carlo code. The dose kernel is tabulated using a thin XYZ mesh (0.1 mm steps in lateral directions) for radii less than 1 cm and using an RZ mesh (with varying steps) for larger radial distances. To account for the SSD effect, 11 dose kernels with SSDs varying between 30 cm and 80 cm are calculated. A Mayneord factor and “lateral stretching” are applied to account for differences between the closest and actual SSD. Appropriate rotations and second-order interpolation are used to calculate the dose from a given beamlet to a point. Results: Accuracy: Dose distributions in water with 80 cm SSD are calculated using the new FSPB convolution algorithm and full Monte Carlo simulation (gold standard). Figs 1-4 show excellent agreement between FSPB and Monte Carlo calculations for different field sizes and at different depths. The dose distribution for a prostate case is calculated using FSPB (Fig. 5). Sixty conformal beams with rectum blocking are assumed. Figs 6-8 show the comparison with Monte Carlo simulation based on the same beam apertures. The excellent agreement demonstrates the accuracy of the new algorithm in handling SSD variation, oblique incidence, and scatter contribution. Speed: The FSPB convolution algorithm calculates 28 million dose points per second using a single 2.2-GHz CPU. The present algorithm is seven times faster than a similar algorithm from Gu et al. (Phys. Med. Biol. 54, 2009, 6287-6297). Conclusion: A fast and accurate FSPB convolution algorithm was developed and benchmarked.
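
    Stripped of the SSD, tilt and heterogeneity handling, the core of any FSPB method is a 2D convolution of beamlet fluence with a precomputed kernel. A minimal sketch with a Gaussian stand-in kernel (real kernels come from Monte Carlo, and the Mayneord/stretching corrections are considerably more involved):

        import numpy as np
        from scipy.signal import fftconvolve

        def fspb_dose(fluence, kernel, ssd_ref=80.0, ssd=80.0):
            # Dose plane = fluence convolved with the pencil-beam kernel,
            # crudely scaled by an inverse-square SSD factor.
            return (ssd_ref / ssd) ** 2 * fftconvolve(fluence, kernel, mode="same")

        x = np.linspace(-5.0, 5.0, 101)                # 1 mm grid, cm units
        X, Y = np.meshgrid(x, x)
        kernel = np.exp(-(X**2 + Y**2) / 0.5)          # Gaussian stand-in
        kernel /= kernel.sum()
        fluence = np.zeros((301, 301))
        fluence[100:201, 100:201] = 1.0                # open 10x10 cm field
        dose = fspb_dose(fluence, kernel)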

  14. Prediction of color changes in acetaminophen solution using the time-temperature superposition principle.

    PubMed

    Mochizuki, Koji; Takayama, Kozo

    2016-01-01

    A prediction method for color changes based on the time-temperature superposition principle (TTSP) was developed for acetaminophen solution. Color changes of acetaminophen solution are caused by the degradation of acetaminophen through hydrolysis and oxidation. In principle, the TTSP applies only to thermal aging. Therefore, the impact of oxidation on the color changes of acetaminophen solution was verified. The results of our experiment suggested that the oxidation products enhanced the color changes in acetaminophen solution. Next, the color changes of acetaminophen solution samples of the same head space volume after accelerated aging at various temperatures were investigated using the Commission Internationale de l'Eclairage (CIE) LAB color space (a*, b*, L* and ΔE*ab), after which the TTSP was applied to kinetic analysis of the color changes. The apparent activation energies obtained using the time-temperature shift factors of a*, b*, L* and ΔE*ab were calculated as 72.4, 69.2, 72.3 and 70.9 kJ/mol, respectively, which are similar to the values for acetaminophen hydrolysis reported in the literature. The predicted values of a*, b*, L* and ΔE*ab at 40 °C were obtained by calculation using Arrhenius plots. A comparison between the experimental and predicted values for each color parameter revealed sufficiently high R² values (>0.98), suggesting the high reliability of the prediction. The kinetic analysis using TTSP was successfully applied to predicting the color changes under a controlled oxygen amount at any temperature and for any length of time.
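
    The shift-factor arithmetic is simple enough to show directly. A small worked example using the apparent activation energy reported above for ΔE*ab (the Arrhenius form of the TTSP shift factor is standard; the temperatures and times below are illustrative):

        import numpy as np

        R_GAS = 8.314        # J/(mol K)
        EA = 70.9e3          # J/mol, apparent activation energy for dE*ab

        def shift_factor(T, T_ref):
            # Arrhenius shift factor a_T: aging for time t at temperature T
            # is equivalent to aging for a_T * t at the reference T_ref.
            return np.exp(-EA / R_GAS * (1.0 / T - 1.0 / T_ref))

        a = shift_factor(T=333.15, T_ref=313.15)   # 60 C vs 40 C
        print(f"14 days at 60 C ~ {14 * a:.0f} days at 40 C")  # ~72 days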

  15. Super-resolution reconstruction algorithm based on adaptive convolution kernel size selection

    NASA Astrophysics Data System (ADS)

    Gao, Hang; Chen, Qian; Sui, Xiubao; Zeng, Junjie; Zhao, Yao

    2016-09-01

    Restricted by detector technology and the optical diffraction limit, the spatial resolution of infrared imaging systems is difficult to improve significantly. Super-Resolution (SR) reconstruction algorithms are an effective way to solve this problem. Among them, SR algorithms based on multichannel blind deconvolution (MBD) estimate the convolution kernel only from low-resolution observation images, with appropriate regularization constraints introduced by a priori assumptions, to realize high-resolution image restoration. The algorithm has been shown to be effective when the channels are coprime. In this paper, we use the significant edges to estimate the convolution kernel and introduce an adaptive convolution kernel size selection mechanism to address the uncertainty of the convolution kernel size in MBD processing. To reduce the interference of noise, we amend the convolution kernel in an iterative process, and finally restore a clear image. Experimental results show that the algorithm can meet the convergence requirement of the convolution kernel estimation.

  16. Operational and convolution properties of three-dimensional Fourier transforms in spherical polar coordinates.

    PubMed

    Baddour, Natalie

    2010-10-01

    For functions that are best described with spherical coordinates, the three-dimensional Fourier transform can be written in spherical coordinates as a combination of spherical Hankel transforms and spherical harmonic series. However, to be as useful as its Cartesian counterpart, a spherical version of the Fourier operational toolset is required for the standard operations of shift, multiplication, convolution, etc. This paper derives the spherical version of the standard Fourier operation toolset. In particular, convolution in various forms is discussed in detail as this has important consequences for filtering. It is shown that standard multiplication and convolution rules do apply as long as the correct definition of convolution is applied.
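
    The structure of such a transform pair can be written compactly; the normalization below follows a common physics convention and may differ from the paper's. Expanding f in spherical harmonics, each radial coefficient transforms through a spherical Hankel transform of matching order:

        \[
          f(r,\theta,\phi) = \sum_{l=0}^{\infty}\sum_{m=-l}^{l} f_{lm}(r)\, Y_l^m(\theta,\phi),
          \qquad
          F(\mathbf{k}) = \sum_{l,m} 4\pi\,(-i)^l\, \hat f_{lm}(k)\, Y_l^m(\theta_k,\phi_k),
        \]
        \[
          \hat f_{lm}(k) = \int_0^\infty f_{lm}(r)\, j_l(kr)\, r^2\, dr,
        \]

    where j_l is the spherical Bessel function. The operational rules for shift, multiplication and convolution then act on the coefficients \hat f_{lm}(k).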

  17. Convolution Algebra for Fluid Modes with Finite Energy

    DTIC Science & Technology

    1992-04-01

    This technical report … with finite spatial and temporal extents. At Boston University, we have developed a full form of wavelet expansion which has the advantage over more… The convolution of two

  18. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
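
    The per-pixel operation is easy to state in code: trace a short streamline through each pixel and average a noise texture along it. A minimal (unoptimized) sketch of basic LIC, without the dye-advection and volume-rendering extensions described above:

        import numpy as np

        def lic(vx, vy, noise, L=20):
            # For each pixel, integrate the field both ways in unit steps
            # and average the noise texture along the streamline.
            h, w = noise.shape
            out = np.zeros_like(noise)
            for i in range(h):
                for j in range(w):
                    acc, cnt = 0.0, 0
                    for sgn in (1.0, -1.0):
                        y, x = float(i), float(j)
                        for _ in range(L):
                            u, v = vx[int(y), int(x)], vy[int(y), int(x)]
                            m = np.hypot(u, v) + 1e-8
                            x += sgn * u / m
                            y += sgn * v / m
                            if not (0.0 <= x < w and 0.0 <= y < h):
                                break
                            acc += noise[int(y), int(x)]
                            cnt += 1
                    out[i, j] = acc / max(cnt, 1)
            return out

        yy, xx = np.mgrid[0:128, 0:128]                 # circular test flow
        img = lic(xx - 64.0, -(yy - 64.0), np.random.rand(128, 128))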

  19. Surrogacy theory and models of convoluted organic systems.

    PubMed

    Konopka, Andrzej K

    2007-03-01

    The theory of surrogacy is briefly outlined as one of the conceptual foundations of systems biology that has been developed over the last 30 years in the context of the Hertz-Rosen modeling relationship. Conceptual foundations of modeling convoluted (biologically complex) systems are briefly reviewed and discussed in terms of current and future research in systems biology. New as well as older results that pertain to the concepts of the modeling relationship, sequence of surrogacies, cascade of representations, complementarity, analogy, metaphor, and epistemic time are presented together with a classification of models in a cascade. Examples of anticipated future applications of surrogacy theory in life sciences are briefly discussed.

  20. Medical image fusion using the convolution of Meridian distributions.

    PubMed

    Agrawal, Mayank; Tsakalides, Panagiotis; Achim, Alin

    2010-01-01

    The aim of this paper is to introduce a novel non-Gaussian statistical model-based approach for medical image fusion based on the Meridian distribution. The paper also includes a new approach to estimate the parameters of generalized Cauchy distribution. The input images are first decomposed using the Dual-Tree Complex Wavelet Transform (DT-CWT) with the subband coefficients modelled as Meridian random variables. Then, the convolution of Meridian distributions is applied as a probabilistic prior to model the fused coefficients, and the weights used to combine the source images are optimised via Maximum Likelihood (ML) estimation. The superior performance of the proposed method is demonstrated using medical images.

  1. Convolution seal for transition duct in turbine system

    DOEpatents

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-05-26

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface feature for interfacing with an adjacent transition duct. The turbine system further includes a convolution seal contacting the interface feature to provide a seal between the interface feature and the adjacent transition duct.

  2. Simplified Syndrome Decoding of (n, 1) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that differs from, and is simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2, 1) CC.

  3. New Syndrome Decoding Techniques for the (n, k) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example illustrating the new decoding algorithm is given for the binary nonsystematic (3, 1) CC.

  4. New syndrome decoding techniques for the (n, k) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example illustrating the new decoding algorithm is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964.

  5. Continuous speech recognition based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Qing-qing; Liu, Yong; Pan, Jie-lin; Yan, Yong-hong

    2015-07-01

    Convolutional Neural Networks (CNNs), which have shown success in achieving translation invariance for many image processing tasks, are investigated for continuous speech recognition in this paper. Compared to Deep Neural Networks (DNNs), which have proven successful in many speech recognition tasks, CNNs can reduce the NN model size significantly and at the same time achieve even better recognition accuracy. Experiments on the standard speech corpus TIMIT showed that CNNs outperformed DNNs in terms of accuracy while having an even smaller model size.

  6. Convolution seal for transition duct in turbine system

    DOEpatents

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-03-10

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface member for interfacing with a turbine section. The turbine system further includes a convolution seal contacting the interface member to provide a seal between the interface member and the turbine section.

  7. Convolutional neural networks for synthetic aperture radar classification

    NASA Astrophysics Data System (ADS)

    Profeta, Andrew; Rodriguez, Andres; Clouse, H. Scott

    2016-05-01

    For electro-optical object recognition, convolutional neural networks (CNNs) are the state-of-the-art. For large datasets, CNNs are able to learn meaningful features used for classification. However, their application to synthetic aperture radar (SAR) has been limited. In this work we experimented with various CNN architectures on the MSTAR SAR dataset. As the input to the CNN we used the magnitude and phase (2 channels) of the SAR imagery. We used the deep learning toolboxes CAFFE and Torch7. Our results show that we can achieve 93% accuracy on the MSTAR dataset using CNNs.

  8. Nonadiabatic creation of macroscopic superpositions with strongly correlated one-dimensional bosons in a ring trap

    SciTech Connect

    Schenke, C.; Minguzzi, A.; Hekking, F. W. J.

    2011-11-15

    We consider a strongly interacting quasi-one-dimensional Bose gas on a tight ring trap subjected to a localized barrier potential. We explore the possibility of forming a macroscopic superposition of a rotating and a nonrotating state under nonequilibrium conditions, achieved by a sudden quench of the barrier velocity. Using an exact solution for the dynamical evolution in the impenetrable-boson (Tonks-Girardeau) limit, we find an expression for the many-body wave function corresponding to a superposition state. The superposition is formed when the barrier velocity is tuned close to multiples of an integer or half-integer number of Coriolis flux quanta. As a consequence of the strong interactions, we find that (i) the state of the system can be mapped onto a macroscopic superposition of two Fermi spheres rather than two macroscopically occupied single-particle states as in a weakly interacting gas, and (ii) the barrier velocity should be larger than the sound velocity to better discriminate the two components of the superposition.

  9. Aquifer response to stream-stage and recharge variations. II. Convolution method and applications

    USGS Publications Warehouse

    Barlow, P.M.; DeSimone, L.A.; Moench, A.F.

    2000-01-01

    In this second of two papers, analytical step-response functions, developed in the companion paper for several cases of transient hydraulic interaction between a fully penetrating stream and a confined, leaky, or water-table aquifer, are used in the convolution integral to calculate aquifer heads, streambank seepage rates, and bank storage that occur in response to stream-stage fluctuations and basinwide recharge or evapotranspiration. Two computer programs developed on the basis of these step-response functions and the convolution integral are applied to the analysis of hydraulic interaction of two alluvial stream-aquifer systems in the northeastern and central United States. These applications demonstrate the utility of the analytical functions and computer programs for estimating aquifer and streambank hydraulic properties, recharge rates, streambank seepage rates, and bank storage. Analysis of the water-table aquifer adjacent to the Blackstone River in Massachusetts suggests that the very shallow depth of the water table and associated thin unsaturated zone at the site cause the aquifer to behave like a confined aquifer (negligible specific yield). This finding is consistent with previous studies that have shown that the effective specific yield of an unconfined aquifer approaches zero when the capillary fringe, where sediment pores are saturated by tension, extends to land surface. Under this condition, the aquifer's response is determined by elastic storage only. Estimates of horizontal and vertical hydraulic conductivity, specific yield, specific storage, and recharge for a water-table aquifer adjacent to the Cedar River in eastern Iowa, determined by the use of analytical methods, are in close agreement with those estimated by use of a more complex, multilayer numerical model of the aquifer. Streambank leakance of the semipervious streambank materials also was estimated for the site. The streambank-leakance parameter may be considered to be a general (or lumped
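
    The convolution step itself is a plain discrete superposition once a step-response function is in hand. A minimal sketch (the papers' programs implement the analytical step responses and parameter estimation; the exponential response below is only a hypothetical stand-in):

        import numpy as np

        def head_response(stage, step_response):
            # h(t) = sum_k dS_k * U(t - t_k): superpose the aquifer's
            # step response U on every increment of stream stage.
            dS = np.diff(stage, prepend=stage[0])
            n = len(stage)
            h = np.zeros(n)
            for k in range(n):
                if dS[k] != 0.0:
                    h[k:] += dS[k] * step_response[: n - k]
            return h

        t = np.arange(200.0)                     # days
        U = 1.0 - np.exp(-t / 30.0)              # hypothetical step response
        stage = np.where(t >= 10.0, 1.0, 0.0)    # 1 m stage rise at day 10
        h = head_response(stage, U)              # aquifer head change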

  10. Generalization of susceptibility of RF systems through far-field pattern superposition

    NASA Astrophysics Data System (ADS)

    Verdin, B.; Debroux, P.

    2015-05-01

    The purpose of this paper is to analyze RF (Radio Frequency) communication systems in a large electromagnetic environment to identify their susceptibility to jamming systems. We propose a new method that incorporates the use of reciprocity and superposition of the far-field radiation pattern of the RF system and the far-field radiation pattern of the jammer system. By using this method we can find the susceptibility pattern of RF systems with respect to the elevation and azimuth angles. A scenario was modeled with HFSS (High Frequency Structure Simulator) in which the radiation pattern of the jammer was simulated as a cylindrical horn antenna. The RF jamming entry point used was a half-wave dipole inside a cavity with apertures that approximates a land-mobile vehicle; the dipole approximates a leaky coax cable. Because of the limitations of the simulation method, electrically large electromagnetic environments cannot be quickly simulated using HFSS's finite element method (FEM). Therefore, the combination of the transmit antenna radiation pattern (horn) superimposed onto the receive antenna pattern (dipole) was performed in MATLAB. A 2D or 3D susceptibility pattern is obtained with respect to the azimuth and elevation angles. In addition, by incorporating the jamming equation into this algorithm, the received jamming power as a function of distance at the RF receiver, Pr(Φr, θr), can be calculated. The received power depends on antenna properties, the propagation factor and system losses. Test cases include: a cavity with four apertures, a cavity above an infinite ground plane, and a land-mobile vehicle approximation. By using the proposed algorithm, a susceptibility analysis of RF systems in electromagnetic environments can be performed.
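
    The "jamming equation" step is a Friis-type link budget applied after the two patterns have been superimposed. A small sketch (function names and the dB bookkeeping are illustrative assumptions; the angle-dependent gains would be looked up from the two far-field patterns on a common azimuth/elevation grid):

        import numpy as np

        def received_jamming_power(pt_w, gt_dbi, gr_dbi, freq_hz, dist_m,
                                   loss_db=0.0):
            # Friis: Pr = Pt * Gt * Gr * (lambda / (4 pi d))^2 / L
            lam = 3.0e8 / freq_hz
            gt, gr = 10 ** (gt_dbi / 10.0), 10 ** (gr_dbi / 10.0)
            loss = 10 ** (loss_db / 10.0)
            return pt_w * gt * gr * (lam / (4.0 * np.pi * dist_m)) ** 2 / loss

        def susceptibility_db(gt_pattern_db, gr_pattern_db):
            # Superposition of the two far-field patterns sampled on the
            # same (azimuth, elevation) grid: adding dB = multiplying gains.
            return gt_pattern_db + gr_pattern_db

        pr = received_jamming_power(pt_w=10.0, gt_dbi=15.0, gr_dbi=2.0,
                                    freq_hz=2.4e9, dist_m=1000.0)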

  11. Fast Electron Correlation Methods for Molecular Clusters without Basis Set Superposition Errors

    SciTech Connect

    Kamiya, Muneaki; Hirata, So; Valiev, Marat

    2008-02-19

    Two critical extensions to our fast, accurate, and easy-to-implement binary or ternary interaction method for weakly interacting molecular clusters [Hirata et al., Mol. Phys. 103, 2255 (2005)] have been proposed, implemented, and applied to water hexamers, hydrogen fluoride chains and rings, and neutral and zwitterionic glycine-water clusters, with excellent results in an initial performance assessment. Our original method included up to two- or three-body Coulomb, exchange, and correlation energies exactly and higher-order Coulomb energies in the dipole-dipole approximation. In this work, the dipole moments are replaced by atom-centered point charges determined so that they reproduce the electrostatic potentials of the cluster subunits as closely as possible and also self-consistently with one another in the cluster environment. They have been shown to lead to dramatic improvement in the description of short-range electrostatic potentials not only of large, charge-separated subunits like zwitterionic glycine but also of small subunits. Furthermore, basis set superposition errors (BSSE), known to plague direct evaluation of weak interactions, have been eliminated by combining the Valiron-Mayer function counterpoise (VMFC) correction with our binary or ternary interaction method in an economical fashion (quadratic scaling n² with respect to the number of subunits n when n is small and linear scaling when n is large). A new variant of VMFC has also been proposed in which three-body and all higher-order Coulomb effects on BSSE are estimated approximately. The BSSE-corrected ternary interaction method with atom-centered point charges reproduces the VMFC-corrected results of conventional electron correlation calculations within 0.1 kcal/mol. The proposed method is significantly more accurate and also more efficient than conventional correlation methods uncorrected for BSSE.
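
    For reference, the two-body Boys-Bernardi counterpoise correction that VMFC generalizes has the familiar form (a textbook expression, not the paper's many-body formulas): the interaction energy is assembled from energies all computed in the full dimer basis, with ghost functions placed on the absent partner:

        \[
          E_{\mathrm{int}}^{\mathrm{CP}}
            = E_{AB}^{\chi_A \cup \chi_B}
            - E_{A}^{\chi_A \cup \chi_B}
            - E_{B}^{\chi_A \cup \chi_B}.
        \]

    VMFC extends this bookkeeping to every subset of monomers in an n-body cluster, which is what makes the economical approximations described above worthwhile.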

  12. Free-flow reabsorption of glucose, sodium, osmoles and water in rat proximal convoluted tubule.

    PubMed Central

    Bishop, J H; Green, R; Thomas, S

    1979-01-01

    1. Reabsorption of glucose, sodium, total solute (osmoles) and water in the rat proximal tubule (pars convoluta) was studied by free-flow micropuncture at normal (saline-infused), suppressed (saline with phlorizin) and elevated (glucose infusion) glucose reabsorption rates. 2. Phlorizin completely inhibited net glucose reabsorption, approximately halved reabsorption of sodium, total solutes and water, and reduced single nephron glomerular filtration rate (SNGFR). 3. In the saline- and glucose-infused groups, there were no significant differences between SNGFRs nor between reabsorptions (fractional and absolute) of either sodium, total solute or water, which were uniformly distributed along segments accessible to micropuncture. 4. Glucose reabsorptive capacity existed along the entire pars convoluta, with the highest reabsorptive rates in convolutions closest to the glomerulus (in saline-infused rats, 90% fractional reabsorption at 2 mm, over 95% at the end of the pars convoluta; in glucose-infused rats, 55 and 90%, respectively). 5. In saline- and glucose-infused rats, a significant correlation existed between net glucose and sodium reabsorption, but the regression slopes differed and the correlations became non-significant when the reabsorptive fluxes were factored by SNGFR. 6. For all groups, the majority of tubular fluid (TF) concentrations of osmoles and sodium were lower than those in plasma (over-all mean TFosm/Posm = 0.973 ± 0.004, P < 0.001; TFNa/PNa = 0.964 ± 0.005, P < 0.001). 7. Correspondingly, calculated osmolal and sodium concentrations in the reabsorbate were greater than those in plasma, and were significantly correlated with distance to the puncture site, with maximal values in the most proximal convolutions (for osmolality, approximately +79 m-osmole kg^-1 water at 1 mm). PMID:469722

  13. Convolutional neural network features based change detection in satellite images

    NASA Astrophysics Data System (ADS)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high resolution remote sensing (HRRS) satellite images, a huge research effort has been devoted to the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (Deep Learning) sidestep this problem by learning hierarchical representations in an unsupervised manner directly from data without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel deep Convolutional Neural Network (CNN) features based HR satellite image change detection method is proposed. The main idea is to produce a change detection map directly from two images using a pretrained CNN, avoiding the limited performance of hand-crafted features. First, CNN features are extracted from different convolutional layers. Then, after a normalization step, a concatenation step produces a single higher-dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images by qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
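
    The distance step is straightforward once deep features are in hand. A hedged sketch using a generic ImageNet-pretrained backbone in place of whatever network the authors used (one layer shown, whereas the paper concatenates several; requires torchvision >= 0.13 for the weights argument):

        import torch
        import torchvision

        def change_map(img1, img2):
            # Pixel-wise Euclidean distance between deep feature maps of
            # two coregistered acquisitions; img*: (1, 3, H, W) tensors.
            backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
            feats = torch.nn.Sequential(*list(backbone.children())[:-2])
            feats.eval()
            with torch.no_grad():
                f1, f2 = feats(img1), feats(img2)       # (1, C, H', W')
            return torch.linalg.norm(f1 - f2, dim=1)[0] # distance over C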

  14. Convolutional neural network architectures for predicting DNA–protein binding

    PubMed Central

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608

  15. Multichannel Convolutional Neural Network for Biological Relation Extraction

    PubMed Central

    Quan, Chanqin; Sun, Xiao; Bai, Wenjun

    2016-01-01

    The plethora of biomedical relations embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical work has been restricted to traditional machine learning techniques. However, these methods are susceptible to the issues of the “vocabulary gap” and data sparseness, and feature extraction cannot be automated. To address the aforementioned issues, in this work we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model makes the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) the need for manual feature engineering is obviated by automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall f-score of 70.2% compared to the standard linear SVM based system (67.0%) on the DDIExtraction 2013 challenge dataset. For the PPI task, we evaluated our system on the AIMed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in f-score. PMID:28053977

  16. Deep Convolutional Neural Networks for large-scale speech tasks.

    PubMed

    Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana

    2015-04-01

    Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations that exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks. Specifically, we focus on how many convolutional layers are needed, what an appropriate number of hidden units is, and what the best pooling strategy is. Second, we investigate how to incorporate speaker-adapted features, which cannot directly be modeled by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allows for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results on these 3 tasks.

  17. Single-Cell Phenotype Classification Using Deep Convolutional Neural Networks.

    PubMed

    Dürr, Oliver; Sick, Beate

    2016-10-01

    Deep learning methods are currently outperforming traditional state-of-the-art computer vision algorithms in diverse applications and recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods to high-content screening-based phenotype classification. We trained a deep learning classifier in the form of convolutional neural networks with approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and carried out by the convolutional network, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%.

  18. Selective Convolutional Descriptor Aggregation for Fine-Grained Image Retrieval.

    PubMed

    Wei, Xiu-Shen; Luo, Jian-Hao; Wu, Jianxin; Zhou, Zhi-Hua

    2017-03-27

    Deep convolutional neural network models pretrained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone the unsupervised retrieval task. We propose the Selective Convolutional Descriptor Aggregation (SCDA) method. SCDA first localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and reduced in dimensionality into a short feature vector using the best practices we found. SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained datasets confirm the effectiveness of SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval datasets, SCDA achieves retrieval results comparable to state-of-the-art general image retrieval approaches.

  19. Enhancing Neutron Beam Production with a Convoluted Moderator

    SciTech Connect

    Iverson, Erik B; Baxter, David V; Muhrer, Guenter; Ansell, Stuart; Gallmeier, Franz X; Dalgliesh, Robert; Lu, Wei; Kaiser, Helmut

    2014-10-01

    We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally-enhanced neutron beam source, improving beam effectiveness over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.

  20. Classifications of Multispectral Colorectal Cancer Tissues Using Convolution Neural Network

    PubMed Central

    Haj-Hassan, Hawraa; Chaddad, Ahmad; Harkouss, Youssef; Desrosiers, Christian; Toews, Matthew; Tanougast, Camel

    2017-01-01

    Background: Colorectal cancer (CRC) is the third most common cancer among men and women. Its diagnosis in early stages, typically done through the analysis of colon biopsy images, can greatly improve the chances of a successful treatment. This paper proposes to use convolution neural networks (CNNs) to predict three tissue types related to the progression of CRC: benign hyperplasia (BH), intraepithelial neoplasia (IN), and carcinoma (Ca). Methods: Multispectral biopsy images of thirty CRC patients were retrospectively analyzed. Images of tissue samples were divided into three groups, based on their type (10 BH, 10 IN, and 10 Ca). An active contour model was used to segment image regions containing pathological tissues. Tissue samples were classified using a CNN containing convolution, max-pooling, and fully-connected layers. Available tissue samples were split into a training set, for learning the CNN parameters, and test set, for evaluating its performance. Results: An accuracy of 99.17% was obtained from segmented image regions, outperforming existing approaches based on traditional feature extraction, and classification techniques. Conclusions: Experimental results demonstrate the effectiveness of CNN for the classification of CRC tissue types, in particular when using presegmented regions of interest.
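
    As a concrete illustration of the layer types this abstract names (convolution, max-pooling, and fully-connected layers), the following is a minimal, hypothetical PyTorch sketch of a three-class tissue classifier; the channel counts, kernel sizes, and 64x64 input patches are assumptions for illustration, not the authors' architecture.

        # Minimal sketch, not the paper's network: conv -> pool -> conv -> pool -> FC.
        import torch
        import torch.nn as nn

        class TissueCNN(nn.Module):
            def __init__(self, in_channels=16, n_classes=3):  # 3 classes: BH, IN, Ca
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),                  # 64x64 -> 32x32
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),                  # 32x32 -> 16x16
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
                    nn.Linear(128, n_classes),        # logits for the three tissue types
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        logits = TissueCNN()(torch.randn(1, 16, 64, 64))  # one multispectral patch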

  1. An optimal nonorthogonal separation of the anisotropic Gaussian convolution filter.

    PubMed

    Lampert, Christoph H; Wirjadi, Oliver

    2006-11-01

    We give an analytical and geometrical treatment of what it means to separate a Gaussian kernel along arbitrary axes in R(n), and we present a separation scheme that allows us to efficiently implement anisotropic Gaussian convolution filters for data of arbitrary dimensionality. Based on our previous analysis we show that this scheme is optimal with regard to the number of memory accesses and interpolation operations needed. The proposed method relies on nonorthogonal convolution axes and works completely in image space. Thus, it avoids the need for a fast Fourier transform (FFT)-subroutine. Depending on the accuracy and speed requirements, different interpolation schemes and methods to implement the one-dimensional Gaussian (finite impulse response and infinite impulse response) can be integrated. Special emphasis is put on analyzing the performance and accuracy of the new method. In particular, we show that without any special optimization of the source code, it can perform anisotropic Gaussian filtering faster than methods relying on the FFT.

  2. Convolutional Neural Network Based Fault Detection for Rotating Machinery

    NASA Astrophysics Data System (ADS)

    Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie

    2016-09-01

    Vibration analysis is a well-established technique for condition monitoring of rotating machines, as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually engineered features, such as the ball pass frequencies of the raceway, RMS, kurtosis, and crest factor, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults such as outer-raceway faults and lubrication degradation are considered, but healthy bearings and rotor imbalance are also included. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.
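
    For contrast with the learned features, the manually engineered statistics mentioned above are easy to state precisely. A short numpy sketch using the standard textbook definitions (not code from the paper):

        import numpy as np

        def vibration_features(x):
            """Classical hand-crafted features of a vibration signal x."""
            x = np.asarray(x, dtype=float)
            rms = np.sqrt(np.mean(x**2))                      # root-mean-square level
            kurt = np.mean((x - x.mean())**4) / np.var(x)**2  # fourth standardized moment
            crest = np.max(np.abs(x)) / rms                   # peak-to-RMS ratio
            return rms, kurt, crest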

  3. SU-E-T-607: An Experimental Validation of Gamma Knife Based Convolution Algorithm On Solid Acrylic Anthropomorphic Phantom

    SciTech Connect

    Gopishankar, N; Bisht, R K

    2014-06-01

    Purpose: To perform a dosimetric evaluation of the convolution algorithm in Gamma Knife (Perfexion Model) using a solid acrylic anthropomorphic phantom. Methods: An in-house developed acrylic phantom with an ion chamber insert was used for this purpose. The middle insert was designed to accept the ion chamber from the top (head) as well as from the bottom (neck) of the phantom, so measurements could be made at two different positions. A Leksell frame fixed to the phantom simulated patient treatment. Prior to the dosimetric study, the Hounsfield units and electron density of the acrylic material were incorporated into the calibration curve in the TPS for the convolution algorithm calculation. A CT scan of the phantom with the ion chamber (PTW Freiburg, 0.125 cc) was obtained with the following scanning parameters: tube voltage 110 kV, slice thickness 1 mm, and FOV 240 mm. Three separate single-shot plans were generated in the LGP TPS (Version 10.1) with collimators of 16 mm, 8 mm, and 4 mm, respectively, for both ion chamber positions. Both TMR10 and convolution-algorithm-based planning (CABP) were used for dose calculation. A dose of 6 Gy at the 100% isodose was prescribed at the centre of the ion chamber visible in the CT scan. The phantom with the ion chamber was positioned on the treatment couch for dose delivery. Results: The ion chamber measured 5.98 Gy for the 16 mm collimator shot plan, a deviation of less than 1% for the convolution algorithm, whereas with TMR10 the measured dose was 5.6 Gy. For the 8 mm and 4 mm collimator plans, doses of only 3.86 Gy and 2.18 Gy, respectively, were delivered at the TPS-calculated time for CABP. Conclusion: CABP is expected to predict the delivery time accurately for all collimators, but significant variation in measured dose was observed for the 8 mm and 4 mm collimators, which may be due to a collimator size effect. Metal artifacts caused by the pins and frame on the CT scan may also play a role in misinterpreting CABP. The study requires further investigation.

  4. Oblique superposition of two elliptically polarized lightwaves using geometric algebra: is energy-momentum conserved?

    PubMed

    Sze, Michelle Wynne C; Sugon, Quirino M; McNamara, Daniel J

    2010-11-01

    In this paper, we use Clifford (geometric) algebra Cl(3,0) to verify if electromagnetic energy-momentum density is still conserved for oblique superposition of two elliptically polarized plane waves with the same frequency. We show that energy-momentum conservation is valid at any time only for the superposition of two counter-propagating elliptically polarized plane waves. We show that the time-average energy-momentum of the superposition of two circularly polarized waves with opposite handedness is conserved regardless of the propagation directions of the waves. Finally, we show that the resulting momentum density of the superposed waves generally has a vector component perpendicular to the momentum densities of the individual waves.

  5. Space-variant polarization patterns of non-collinear Poincaré superpositions

    NASA Astrophysics Data System (ADS)

    Galvez, E. J.; Beach, K.; Zeosky, J. J.; Khajavi, B.

    2015-03-01

    We present analysis and measurements of the polarization patterns produced by non-collinear superpositions of Laguerre-Gauss spatial modes in orthogonal polarization states, which are known as Poincaré modes. Our findings agree with predictions (I. Freund Opt. Lett. 35, 148-150 (2010)), that superpositions containing a C-point lead to a rotation of the polarization ellipse in 3-dimensions. Here we do imaging polarimetry of superpositions of first- and zero-order spatial modes at relative beam angles of 0-4 arcmin. We find Poincaré-type polarization patterns showing fringes in polarization orientation, but which preserve the polarization-singularity index for all three cases of C-points: lemons, stars and monstars.

  6. Polarimetric aspects in antenna related superposition of multipath signals

    NASA Astrophysics Data System (ADS)

    Cichon, D. J.; Kuerner, T.; Wiesbeck, W.

    Polarimetry in radio wave propagation is important and has been investigated thoroughly for radar systems, which mainly involve single-path propagation. Many theoretical approaches and results cannot be applied directly to problems dealing with wave propagation in a strong multipath environment, for example, mobile communication in urban and suburban areas. In this paper the basic polarimetric mechanisms in multipath propagation and their interaction with antennas are described. Results of a multipath simulation and a suitable visualization of them are presented. The polarization signature for a radio link in real 3D terrain is calculated based on the briefly described polarimetric propagation model.

  7. Experimental implementation of the Deutsch-Jozsa algorithm for three-qubit functions using pure coherent molecular superpositions

    SciTech Connect

    Vala, Jiri; Kosloff, Ronnie; Amitay, Zohar; Zhang Bo; Leone, Stephen R.

    2002-12-01

    The Deutsch-Jozsa algorithm is experimentally demonstrated for three-qubit functions using pure coherent superpositions of Li₂ rovibrational eigenstates. The function's character, either constant or balanced, is evaluated by first imprinting the function, using a phase-shaped femtosecond pulse, on a coherent superposition of the molecular states, and then projecting the superposition onto an ionic final state, using a second femtosecond pulse at a specific time delay.

  8. Calculation of potential flow past airship bodies in yaw

    NASA Technical Reports Server (NTRS)

    Lotz, I

    1932-01-01

    An outline of Von Karman's method of computing the potential flow of airships in yaw by means of piecewise-constant dipole superposition on the axis of the body is followed by several considerations for the beginning and end of the superposition. This method is then improved by postulating a continuous, in part linearly variable dipole superposition on the axis. The second main part of the report presents the calculation of the potential flow by means of sources and sinks arranged on the surface of the airship body. The integral equation which this surface superposition must satisfy is posed, and its kernel is reduced to functions derived from complete elliptic integrals. The functions are shown diagrammatically. The integral equation can be solved by iteration, and the convergence of the method is good. Formulas for computing the velocity on the surface and the potential at any point conclude the report.

  9. Complex periodic non-diffracting beams generated by superposition of two identical periodic wave fields

    NASA Astrophysics Data System (ADS)

    Gao, Yuanmei; Wen, Zengrun; Zheng, Liren; Zhao, Lina

    2017-04-01

    A method has been proposed to generate complex periodic discrete non-diffracting beams (PDNBs) via superposition of two identical simple PDNBs at a particular angle. As special cases, we studied the superposition of two identical square ("4+4") and two identical hexagonal ("6+6") periodic wave fields at specific angles, respectively, and obtained a series of interesting complex PDNBs. New PDNBs were also obtained by modulating the initial phase difference between adjacent interfering beams. In the experiment, a 4f Fourier filter system and a phase-only spatial light modulator imprinting the synthesized phase patterns of these PDNBs were used to produce the desired wave fields.

  10. Geometric measure of pairwise quantum discord for superpositions of multipartite generalized coherent states

    NASA Astrophysics Data System (ADS)

    Daoud, M.; Ahl Laamara, R.

    2012-07-01

    We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. A special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl-Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger-Horne-Zeilinger states.

  11. Experimental study of current loss and plasma formation in the Z machine post-hole convolute

    NASA Astrophysics Data System (ADS)

    Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; Jennings, C. A.; McBride, R. D.; Waisman, E. M.; Hutsel, B. T.; Stygar, W. A.; Rose, D. V.; Maron, Y.

    2017-01-01

    The Z pulsed-power generator at Sandia National Laboratories drives high energy density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H₂O, H₂, and hydrocarbons. Plasma densities increase from 1×10¹⁶ cm⁻³ (level of detectability) just before peak current to over 1×10¹⁷ cm⁻³ at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode-to-anode plasma velocity in the range of 35-50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.

  12. There is no MacWilliams identity for convolutional codes. [transmission gain comparison

    NASA Technical Reports Server (NTRS)

    Shearer, J. B.; Mceliece, R. J.

    1977-01-01

    An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.

  13. Using convolutional decoding to improve time delay and phase estimation in digital communications

    DOEpatents

    Ormesher, Richard C.; Mason, John J.

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  14. The effect of whitening transformation on pooling operations in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua

    2015-12-01

    Convolutional autoencoders (CAEs) are unsupervised feature extractors for high-resolution images. In the pre-processing step, whitening transformation has widely been adopted to remove redundancy by making adjacent pixels less correlated. Pooling is a biologically inspired operation to reduce the resolution of feature maps and achieve spatial invariance in convolutional neural networks. In most previous work, pooling methods have been chosen empirically. Our main purpose is therefore to study the relationship between whitening processing and pooling operations in convolutional autoencoders for image classification. We propose an adaptive pooling approach based on the concepts of information entropy to test the effect of whitening on pooling in different conditions. Experimental results on benchmark datasets indicate that the performance of pooling strategies is associated with the distribution of feature activations, which can be affected by whitening processing. This provides guidance for the selection of pooling methods in convolutional autoencoders and other convolutional neural networks.

  15. Direct phase-domain calculation of transmission line transients using two-sided recursions

    SciTech Connect

    Angelidis, G.; Semlyen, A.

    1995-04-01

    This paper presents a new method for the simulation of electromagnetic transients on transmission lines. Instead of using convolutions of the input variables only, the authors perform short convolutions with both input and output variables. The result is a method of Two-Sided Recursions (TSR), which is comparable in efficiency with the existing recursive convolutions or with their equivalent state variable formulations. It is, however, conceptually simpler and can be applied, in addition to fast modal-domain solutions, to the direct phase-domain calculation of transmission line transients with very accurate results.

  16. Low-dose CT via convolutional neural network

    PubMed Central

    Chen, Hu; Zhang, Yi; Zhang, Weihua; Liao, Peixi; Li, Ke; Zhou, Jiliu; Wang, Ge

    2017-01-01

    In order to reduce the potential radiation risk, low-dose CT has attracted increasing attention. However, simply lowering the radiation dose will significantly degrade the image quality. In this paper, we propose a new noise reduction method for low-dose CT via deep learning without accessing original projection data. A deep convolutional neural network is here used to map low-dose CT images towards their corresponding normal-dose counterparts in a patch-by-patch fashion. Qualitative results demonstrate a great potential of the proposed method for artifact reduction and structure preservation. In terms of quantitative metrics, the proposed method has shown a substantial improvement in PSNR, RMSE, and SSIM over the competing state-of-the-art methods. Furthermore, the speed of our method is one order of magnitude faster than the iterative reconstruction and patch-based image denoising methods. PMID:28270976

  17. Drug-Drug Interaction Extraction via Convolutional Neural Networks

    PubMed Central

    Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong

    2016-01-01

    Drug-drug interaction (DDI) extraction, a typical relation extraction task in natural language processing (NLP), has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNN), a robust machine learning method which requires almost no manually defined features, has exhibited great potential for many NLP tasks. CNN is therefore worth employing for DDI extraction, where it had never been investigated. We propose a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that CNN is a good choice for DDI extraction. The CNN-based DDI extraction method achieves an F-score of 69.75%, which outperforms the existing best performing method by 2.75%. PMID:26941831

  18. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
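
    The practical payoff described above is that, once the small mean-square-optimal kernel has been computed, restoration reduces to a single spatial convolution. A hypothetical sketch (the 3x3 kernel values below are placeholders, not the optimal values produced by the algorithm):

        import numpy as np
        from scipy.signal import convolve2d

        kernel = np.array([[-0.05, -0.10, -0.05],
                           [-0.10,  1.60, -0.10],
                           [-0.05, -0.10, -0.05]])   # small sharpening-type kernel (illustrative)

        image = np.random.rand(256, 256)             # stand-in for a blurred, noisy image
        restored = convolve2d(image, kernel, mode='same', boundary='symm')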

  19. Convolution quadrature for the wave equation with impedance boundary conditions

    NASA Astrophysics Data System (ADS)

    Sauter, S. A.; Schanz, M.

    2017-04-01

    We consider the numerical solution of the wave equation with impedance boundary conditions and start from a boundary integral formulation for its discretization. We develop the generalized convolution quadrature (gCQ) to solve the arising acoustic retarded potential integral equation for this impedance problem. For the special case of scattering from a spherical object, we derive representations of analytic solutions which allow us to investigate the effect of the impedance coefficient on the acoustic pressure analytically. We have performed systematic numerical experiments to study the convergence rates as well as the sensitivity of the acoustic pressure to the impedance coefficient. Finally, we apply this method to simulate the acoustic pressure in a building with a fairly complicated geometry and to study the influence of the impedance coefficient in this situation as well.

  20. Discovering characteristic landmarks on ancient coins using convolutional networks

    NASA Astrophysics Data System (ADS)

    Kim, Jongpil; Pavlovic, Vladimir

    2017-01-01

    We propose a method to find characteristic landmarks and recognize ancient Roman imperial coins using deep convolutional neural networks (CNNs) combined with expert-designed domain hierarchies. We first propose a framework to recognize Roman coins that exploits the hierarchical knowledge structure embedded in the coin domain, which we combine with the CNN-based category classifiers. We next formulate an optimization problem to discover class-specific salient coin regions. Analysis of discovered salient regions confirms that they are largely consistent with human expert annotations. Experimental results show that the proposed framework is able to effectively recognize ancient Roman coins as well as successfully identify landmarks on the coins. For this research, we have collected a Roman coin dataset where all coins are annotated and consist of obverse (head) and reverse (tail) images.

  1. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.

  2. Finding the complete path and weight enumerators of convolutional codes

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I.

    1990-01-01

    A method for obtaining the complete path enumerator T(D, L, I) of a convolutional code is described. A system of algebraic equations is solved, using a new algorithm for computing determinants, to obtain T(D, L, I) for the (7,1/2) NASA standard code. Generating functions, derived from T(D, L, I) are used to upper bound Viterbi decoder error rates. This technique is currently feasible for constraint length K less than 10 codes. A practical, fast algorithm is presented for computing the leading nonzero coefficients of the generating functions used to bound the performance of constraint length K less than 20 codes. Code profiles with about 50 nonzero coefficients are obtained with this algorithm for the experimental K = 15, rate 1/4, code in the Galileo mission and for the proposed K = 15, rate 1/6, 2-dB code.

  3. Enhanced Line Integral Convolution with Flow Feature Detection

    NASA Technical Reports Server (NTRS)

    Lane, David; Okada, Arthur

    1996-01-01

    The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain. The method produces a flow texture image based on the input velocity field defined in the domain. Because of the nature of the algorithm, the texture image tends to be blurry. This sometimes makes it difficult to identify boundaries where flow separation and reattachments occur. We present techniques to enhance LIC texture images and use colored texture images to highlight flow separation and reattachment boundaries. Our techniques have been applied to several flow fields defined in 3D curvilinear multi-block grids and scientists have found the results to be very useful.
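
    A heavily simplified numpy sketch of the basic LIC idea, assuming a fixed step size and nearest-neighbour sampling; production implementations integrate streamlines more carefully and apply a proper convolution filter.

        import numpy as np

        def lic(vx, vy, noise, length=20, h=0.5):
            """Average white noise along streamlines of the vector field (vx, vy)."""
            ny, nx = noise.shape
            out = np.zeros_like(noise)
            for j in range(ny):
                for i in range(nx):
                    total, count = 0.0, 0
                    for sign in (+1.0, -1.0):        # trace forward and backward
                        x, y = float(i), float(j)
                        for _ in range(length):
                            ii, jj = int(round(x)), int(round(y))
                            if not (0 <= ii < nx and 0 <= jj < ny):
                                break
                            total += noise[jj, ii]
                            count += 1
                            norm = np.hypot(vx[jj, ii], vy[jj, ii]) + 1e-12
                            x += sign * h * vx[jj, ii] / norm
                            y += sign * h * vy[jj, ii] / norm
                    out[j, i] = total / max(count, 1)
            return out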

  4. Learning to Generate Chairs, Tables and Cars with Convolutional Networks.

    PubMed

    Dosovitskiy, Alexey; Springenberg, Jost; Tatarchenko, Maxim; Brox, Thomas

    2016-05-12

    We train generative 'up-convolutional' neural networks which are able to generate images of objects given object style, viewpoint, and color. We train the networks on rendered 3D models of chairs, tables, and cars. Our experiments show that the networks do not merely learn all images by heart, but rather find a meaningful representation of 3D models allowing them to assess the similarity of different models, interpolate between given views to generate the missing ones, extrapolate views, and invent new objects not present in the training set by recombining training instances, or even two different object classes. Moreover, we show that such generative networks can be used to find correspondences between different objects from the dataset, outperforming existing approaches on this task.

  5. Stability Training for Convolutional Neural Nets in LArTPC

    NASA Astrophysics Data System (ADS)

    Lindsay, Matt; Wongjirad, Taritree

    2017-01-01

    Convolutional Neural Nets (CNNs) are the state of the art for many problems in computer vision and are a promising method for classifying interactions in Liquid Argon Time Projection Chambers (LArTPCs) used in neutrino oscillation experiments. Despite the good performance of CNNs, they are not without drawbacks; chief among them is vulnerability to noise and small perturbations of the input. One solution to this problem is a modification to the learning process called Stability Training, developed by Zheng et al. We verify existing work, demonstrating the volatility caused by simple Gaussian noise and showing that this volatility can be nearly eliminated with Stability Training. We then go further and show that a traditional CNN is also vulnerable to realistic experimental noise, whereas a stability-trained CNN remains accurate despite the noise. This further adds to the optimism for CNNs for work in LArTPCs and other applications.

  6. Convolutional Neural Networks for patient-specific ECG classification.

    PubMed

    Kiranyaz, Serkan; Ince, Turker; Hamila, Ridha; Gabbouj, Moncef

    2015-01-01

    We propose a fast and accurate patient-specific electrocardiogram (ECG) classification and monitoring system using an adaptive implementation of 1D Convolutional Neural Networks (CNNs) that can fuse feature extraction and classification into a unified learner. In this way, a dedicated CNN will be trained for each patient by using relatively small common and patient-specific training data and thus it can also be used to classify long ECG records such as Holter registers in a fast and accurate manner. Alternatively, such a solution can conveniently be used for real-time ECG monitoring and early alert system on a light-weight wearable device. The experimental results demonstrate that the proposed system achieves a superior classification performance for the detection of ventricular ectopic beats (VEB) and supraventricular ectopic beats (SVEB).
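
    A minimal, hypothetical PyTorch sketch of the kind of 1D CNN the abstract describes, fusing feature extraction and classification in a single learner; the layer sizes and the 128-sample beat window are assumptions for illustration.

        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(32 * 32, 3),          # e.g. normal / VEB / SVEB logits
        )
        beat = torch.randn(1, 1, 128)       # one single-lead ECG beat, 128 samples
        logits = model(beat)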

  7. $\\mathtt {Deepr}$: A Convolutional Net for Medical Records.

    PubMed

    Nguyen, Phuoc; Tran, Truyen; Wickramasinghe, Nilmini; Venkatesh, Svetha

    2017-01-01

    Feature engineering remains a major bottleneck when creating predictive systems from electronic medical records. At present, an important missing element is detecting predictive regular clinical motifs from irregular episodic records. We present Deepr (short for Deep record), a new end-to-end deep learning system that learns to extract features from medical records and predicts future risk automatically. Deepr transforms a record into a sequence of discrete elements separated by coded time gaps and hospital transfers. On top of the sequence is a convolutional neural net that detects and combines predictive local clinical motifs to stratify the risk. Deepr permits transparent inspection and visualization of its inner working. We validate Deepr on hospital data to predict unplanned readmission after discharge. Deepr achieves superior accuracy compared to traditional techniques, detects meaningful clinical motifs, and uncovers the underlying structure of the disease and intervention space.

  8. Fast convolution with free-space Green's functions

    NASA Astrophysics Data System (ADS)

    Vico, Felipe; Greengard, Leslie; Ferrando, Miguel

    2016-10-01

    We introduce a fast algorithm for computing volume potentials - that is, the convolution of a translation invariant, free-space Green's function with a compactly supported source distribution defined on a uniform grid. The algorithm relies on regularizing the Fourier transform of the Green's function by cutting off the interaction in physical space beyond the domain of interest. This permits the straightforward application of trapezoidal quadrature and the standard FFT, with superalgebraic convergence for smooth data. Moreover, the method can be interpreted as employing a Nystrom discretization of the corresponding integral operator, with matrix entries which can be obtained explicitly and rapidly. This is of use in the design of preconditioners or fast direct solvers for a variety of volume integral equations. The method proposed permits the computation of any derivative of the potential, at the cost of an additional FFT.
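
    The essential idea, cutting off the free-space kernel beyond the domain of interest and convolving with FFTs, can be sketched naively as below. Note that this plain sampled-kernel version is only low-order accurate; the paper's contribution is the analytic regularization of the kernel's Fourier transform, which yields superalgebraic convergence.

        import numpy as np
        from scipy.signal import fftconvolve

        n, L = 48, 1.0                                # grid size and box half-width (assumed)
        h = 2 * L / n
        x = (np.arange(n) - n / 2 + 0.5) * h          # cell-centred grid on [-L, L]
        X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
        r = np.sqrt(X**2 + Y**2 + Z**2)
        G = 1.0 / (4 * np.pi * r)                     # free-space Laplace Green's function
                                                      # (cell-centred grid avoids r = 0)
        rho = (r < 0.5).astype(float)                 # a compactly supported source
        phi = fftconvolve(rho, G, mode='same') * h**3 # volume potential on the same grid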

  9. Rapid Exact Signal Scanning With Deep Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Thom, Markus; Gritschneder, Franz

    2017-03-01

    A rigorous formulation of the dynamics of a signal processing scheme aimed at dense signal scanning without any loss in accuracy is introduced and analyzed. Related methods proposed in the recent past lack a satisfactory analysis of whether they actually fulfill any exactness constraints. This is improved through an exact characterization of the requirements for a sound sliding window approach. The tools developed in this paper are especially beneficial if Convolutional Neural Networks are employed, but can also be used as a more general framework to validate related approaches to signal scanning. The proposed theory helps to eliminate redundant computations and renders special case treatment unnecessary, resulting in a dramatic boost in efficiency particularly on massively parallel processors. This is demonstrated both theoretically in a computational complexity analysis and empirically on modern parallel processors.

  10. Plane-wave decomposition by spherical-convolution microphone array

    NASA Astrophysics Data System (ADS)

    Rafaely, Boaz; Park, Munhum

    2004-05-01

    Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.

  11. Truncation Depth Rule-of-Thumb for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
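
    The new rule is simple enough to state as a one-line formula; for a rate-1/2 code it reproduces the traditional five-memory-lengths guideline. A small illustrative calculation:

        def truncation_depth(m, r):
            """Suggested Viterbi truncation depth for memory length m and code rate r."""
            return 2.5 * m / (1.0 - r)

        print(truncation_depth(6, 1/2))   # 30.0 -> the familiar 5*m for rate 1/2
        print(truncation_depth(6, 1/4))   # 20.0 -> shallower for lower-rate codes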

  12. Radio frequency interference mitigation using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Akeret, J.; Chang, C.; Lucchi, A.; Refregier, A.

    2017-01-01

    We propose a novel approach for mitigating radio frequency interference (RFI) signals in radio data using the latest advances in deep learning. We employ a special type of Convolutional Neural Network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. We train and assess the performance of this network using the HIDE & SEEK radio data simulation and processing packages, as well as early Science Verification data acquired with the 7m single-dish telescope at the Bleien Observatory. We find that our U-Net implementation achieves accuracy competitive with classical RFI mitigation algorithms such as SEEK's SUMTHRESHOLD implementation. We publish our U-Net software package on GitHub under a GPLv3 license.

  13. Star-galaxy classification using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Kim, Edward J.; Brunner, Robert J.

    2017-02-01

    Most existing star-galaxy classifiers use the reduced summary information from catalogues, requiring careful feature extraction and selection. The latest advances in machine learning that use deep convolutional neural networks (ConvNets) allow a machine to automatically learn the features directly from the data, minimizing the need for input from human experts. We present a star-galaxy classification framework that uses deep ConvNets directly on the reduced, calibrated pixel values. Using data from the Sloan Digital Sky Survey and the Canada-France-Hawaii Telescope Lensing Survey, we demonstrate that ConvNets are able to produce accurate and well-calibrated probabilistic classifications that are competitive with conventional machine learning techniques. Future advances in deep learning may bring more success with current and forthcoming photometric surveys, such as the Dark Energy Survey and the Large Synoptic Survey Telescope, because deep neural networks require very little manual feature engineering.

  14. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.

    PubMed

    He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian

    2015-09-01

    Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 × 224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 × faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.
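
    The fixed-length property of pyramid pooling can be sketched with adaptive max pooling in PyTorch; in the minimal illustration below, the pyramid levels (1x1, 2x2, 4x4) and the 256-channel feature map are arbitrary choices, not the paper's exact configuration.

        import torch
        import torch.nn as nn

        def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
            """fmap: (batch, channels, H, W) -> (batch, channels * sum(l*l for l in levels))."""
            pooled = [nn.AdaptiveMaxPool2d(l)(fmap).flatten(start_dim=1) for l in levels]
            return torch.cat(pooled, dim=1)

        for size in (13, 24, 37):                     # arbitrary feature-map sizes
            v = spatial_pyramid_pool(torch.randn(1, 256, size, size))
            print(v.shape)                            # always torch.Size([1, 5376])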

  15. Application of time-temperature-stress superposition on creep of wood-plastic composites

    NASA Astrophysics Data System (ADS)

    Chang, Feng-Cheng; Lam, Frank; Kadla, John F.

    2013-08-01

    The time-temperature-stress superposition principle (TTSSP) has been widely applied in studies of the viscoelastic properties of materials. It involves shifting curves measured at various conditions to construct master curves. To extend the application of this principle, a temperature-stress hybrid shift factor and a modified Williams-Landel-Ferry (WLF) equation that incorporates both stress and temperature in the shift-factor fitting were studied. A wood-plastic composite (WPC) was selected as the test subject for a series of short-term creep tests. The results indicate that the WPC was a rheologically simple material: only a horizontal shift was needed for time-temperature superposition, whereas vertical shifting was also needed for time-stress superposition. The shift factor was independent of the stress for horizontal shifts in time-temperature superposition. In addition, the temperature- and stress-shift factors used to construct master curves were well fitted with the WLF equation. Furthermore, the parameters of the modified WLF equation were also successfully calibrated. The application of this method and equation can be extended to curve shifting that involves the effects of both temperature and stress simultaneously.
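
    For reference, the classical WLF shift factor that underlies the fitting described above, sketched in Python; the commonly quoted "universal" constants are used as illustrative defaults, not the calibrated WPC parameters from the study.

        import numpy as np

        def wlf_shift(T, T_ref, C1=17.44, C2=51.6):
            """log10 of the horizontal shift factor a_T at temperature T (same units as T_ref)."""
            return -C1 * (T - T_ref) / (C2 + (T - T_ref))

        # Shift a creep curve measured at 60 C onto a 20 C master curve:
        log_aT = wlf_shift(60.0, 20.0)
        t_reduced = np.logspace(0, 4, 50) / 10**log_aT   # reduced time axis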

  16. Using Musical Intervals to Demonstrate Superposition of Waves and Fourier Analysis

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2013-01-01

    What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.

  17. Reservoir engineering of a mechanical resonator: generating a macroscopic superposition state and monitoring its decoherence

    NASA Astrophysics Data System (ADS)

    Asjad, Muhammad; Vitali, David

    2014-02-01

    A deterministic scheme for generating a macroscopic superposition state of a nanomechanical resonator is proposed. The nonclassical state is generated through a suitably engineered dissipative dynamics exploiting the optomechanical quadratic interaction with a bichromatically driven optical cavity mode. The resulting driven dissipative dynamics can be employed for monitoring and testing the decoherence processes affecting the nanomechanical resonator under controlled conditions.

  18. Anomalous lack of decoherence of the macroscopic quantum superpositions based on phase-covariant quantum cloning.

    PubMed

    De Martini, Francesco; Sciarrino, Fabio; Spagnolo, Nicolò

    2009-09-04

    We show that all macroscopic quantum superpositions (MQS) based on phase-covariant quantum cloning are characterized by an anomalously high resilience to decoherence processes. The analysis supports the results of recent MQS experiments and leads to a useful conjecture regarding the realization of complex decoherence-free structures for quantum information, such as the quantum computer.

  19. Anomalous Lack of Decoherence of the Macroscopic Quantum Superpositions Based on Phase-Covariant Quantum Cloning

    NASA Astrophysics Data System (ADS)

    de Martini, Francesco; Sciarrino, Fabio; Spagnolo, Nicolò

    2009-09-01

    We show that all macroscopic quantum superpositions (MQS) based on phase-covariant quantum cloning are characterized by an anomalously high resilience to decoherence processes. The analysis supports the results of recent MQS experiments and leads to a useful conjecture regarding the realization of complex decoherence-free structures for quantum information, such as the quantum computer.

  20. On sufficient statistics of least-squares superposition of vector sets.

    PubMed

    Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M

    2015-06-01

    The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of superpositions (and sufficient statistics) of vector sets that are composed from its constituent vector sets under addition or deletion operation, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of the methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
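
    A sketch of the core idea under simple assumptions: store additive sufficient statistics (count, coordinate sums, and the sum of outer products) so that the statistics of a merged correspondence set are obtained in constant time, after which the least-squares rotation follows from a standard Kabsch-style SVD. This illustrates the principle only; it is not the paper's derivation or code.

        import numpy as np

        def stats(X, Y):
            """Sufficient statistics for superposing corresponding point sets X, Y (n x 3)."""
            return {'n': len(X), 'sx': X.sum(0), 'sy': Y.sum(0), 'sxy': X.T @ Y}

        def merge(a, b):
            """Statistics of the union of two correspondence sets: just add componentwise."""
            return {k: a[k] + b[k] for k in a}

        def optimal_rotation(s):
            """Least-squares rotation (Kabsch) from the sufficient statistics alone."""
            H = s['sxy'] - np.outer(s['sx'], s['sy']) / s['n']   # centred correlation matrix
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            return Vt.T @ D @ U.T

        X, Y = np.random.rand(100, 3), np.random.rand(100, 3)
        s = merge(stats(X[:50], Y[:50]), stats(X[50:], Y[50:]))  # constant-time combination
        R = optimal_rotation(s)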

  1. Chaos and Complexities Theories. Superposition and Standardized Testing: Are We Coming or Going?

    ERIC Educational Resources Information Center

    Erwin, Susan

    2005-01-01

    The purpose of this paper is to explore the possibility of using the principle of "superposition of states" (commonly illustrated by Schrodinger's Cat experiment) to understand the process of using standardized testing to measure a student's learning. Comparisons from literature, neuroscience, and Schema Theory will be used to expound upon the…

  2. GPU-based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    PubMed

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2016-11-07

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92% (CPU) to 96% (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.

  3. Measurement-based generation of shaped single photons and coherent state superpositions in optical cavities

    NASA Astrophysics Data System (ADS)

    Lecamwasam, Ruvindha L.; Hush, Michael R.; James, Matthew R.; Carvalho, André R. R.

    2017-01-01

    We propose related schemes to generate arbitrarily shaped single photons, i.e., photons with an arbitrary temporal profile, and coherent state superpositions using simple optical elements. The first system consists of two coupled cavities, a memory cavity and a shutter cavity, containing a second-order optical nonlinearity and electro-optic modulator (EOM), respectively. Photodetection events of the shutter cavity output herald preparation of a single photon in the memory cavity, which may be stored by immediately changing the optical length of the shutter cavity with the EOM after detection. On-demand readout of the photon, with arbitrary shaping, can be achieved through modulation of the EOM. The second scheme consists of a memory cavity with two outputs, which are interfered, phase shifted, and measured. States that closely approximate a coherent state superposition can be produced through postselection for sequences of detection events, with more photon detection events leading to a larger superposition. We furthermore demonstrate that no-knowledge feedback can be easily implemented in this system and used to preserve the superposition state, as well as provide an extra control mechanism for state generation.

  4. Identification of the Hereditary Kernels of Isotropic Linear Viscoelastic Materials in Combined Stress State. 1. Superposition of Shear and Bulk Creep

    NASA Astrophysics Data System (ADS)

    Golub, V. P.; Maslov, B. P.; Fernati, P. V.

    2016-03-01

    Relations are formulated between the shear and bulk creep kernels of an isotropic linear viscoelastic material in a combined stress state and the longitudinal and shear creep kernels constructed from data of creep tests under uniaxial tension and pure torsion. The constitutive equations of viscoelasticity for the combined stress state are chosen in the form of a superposition of the equation for shear strains and the equation for bulk strains. The hereditary kernels are described by Rabotnov's fractional-exponential functions. The creep strains of thin-walled pipes under a combination of tension and torsion or tension and internal pressure are calculated.

  5. Cantilever tilt causing amplitude related convolution in dynamic mode atomic force microscopy.

    PubMed

    Wang, Chunmei; Sun, Jielin; Itoh, Hiroshi; Shen, Dianhong; Hu, Jun

    2011-01-01

    It is well known that the topography in atomic force microscopy (AFM) is a convolution of the tip's shape and the sample's geometry. The classical convolution model was established for contact mode assuming a static probe, but it is no longer valid in dynamic mode AFM. It is still not well understood whether or how the vibration of the probe in dynamic mode affects the convolution. Such ignorance complicates the interpretation of the topography. Here we propose a convolution model for dynamic mode that takes into account the cantilever tilt typical of AFM designs, which leads to a different convolution from that in contact mode. Our model indicates that the cantilever tilt results in a dynamic convolution affected by the absolute value of the amplitude, especially when the corresponding contact convolution has sharp edges beyond a certain angle. The effect was demonstrated experimentally with a perpendicular SiO₂/Si superlattice structure. Our model is useful for quantitative characterization in dynamic mode, especially in probe characterization and critical dimension measurements.

  6. Optimal convolution SOR acceleration of waveform relaxation with application to semiconductor device simulation

    NASA Technical Reports Server (NTRS)

    Reichelt, Mark

    1993-01-01

    In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.

  7. Correction of the basis set superposition error in SCF and MP2 interaction energies. The water dimer

    NASA Astrophysics Data System (ADS)

    Szcześniak, M. M.; Scheiner, Steve

    1986-06-01

    There has been some discussion concerning whether basis set superposition error is more correctly evaluated using the full set of ghost orbitals of the partner molecule or some subset thereof. A formal treatment is presented, arguing that the full set is required at the Møller-Plesset level. Numerical support for this position is provided by calculation of the interaction energy between a pair of water molecules, using a series of moderate sized basis sets ranging from 6-31G** to the [432/21] contraction suggested by Clementi and Habitz. These energies, at both the SCF and MP2 levels, behave erratically with respect to changes in details of the basis set, e.g., H p-function exponent. On the other hand, after counterpoise correction using the full set of partner ghost orbitals, the interaction energies are rather insensitive to basis set and behave in a manner consistent with calculated monomer properties. For long intersystem separations, the contribution of correlation to the interaction is repulsive despite the attractive influence of dispersion. This effect is attributed to partial account of intrasystem correlation and can be approximated at long distances via electrostatic terms linear in MP2-induced changes in the monomer moments.

  8. Regioselective electrochemical reduction of 2,4-dichlorobiphenyl - Distinct standard reduction potentials for carbon-chlorine bonds using convolution potential sweep voltammetry

    NASA Astrophysics Data System (ADS)

    Muthukrishnan, A.; Sangaranarayanan, M. V.; Boyarskiy, V. P.; Boyarskaya, I. A.

    2010-04-01

    The reductive cleavage of carbon-chlorine bonds in 2,4-dichlorobiphenyl (PCB-7) is investigated using convolution potential sweep voltammetry and quantum chemical calculations. The potential dependence of the logarithmic rate constant is non-linear, which indicates the validity of the Marcus-Hush theory of a quadratic activation-driving force relationship. The ortho-chlorine of 2,4-dichlorobiphenyl is reduced first, as inferred from the quantum chemical calculations and bulk electrolysis. The standard reduction potentials pertaining to the ortho-chlorine of 2,4-dichlorobiphenyl and to the para-chlorine of 4-chlorobiphenyl have been estimated.

  9. Operational and convolution properties of two-dimensional Fourier transforms in polar coordinates.

    PubMed

    Baddour, Natalie

    2009-08-01

    For functions that are best described in terms of polar coordinates, the two-dimensional Fourier transform can be written in terms of polar coordinates as a combination of Hankel transforms and Fourier series-even if the function does not possess circular symmetry. However, to be as useful as its Cartesian counterpart, a polar version of the Fourier operational toolset is required for the standard operations of shift, multiplication, convolution, etc. This paper derives the requisite polar version of the standard Fourier operations. In particular, convolution-two dimensional, circular, and radial one dimensional-is discussed in detail. It is shown that standard multiplication/convolution rules do apply as long as the correct definition of convolution is applied.

  10. Convoluted nozzle design for the RL10 derivative 2B engine

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The convoluted nozzle is a conventional refractory metal nozzle extension that is formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas-driven actuators. For spacecraft applications the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two-position nozzle were studied, including a hydrogen dump-cooled metal nozzle and radiation-cooled nozzles of refractory metal and carbon/carbon composite construction, respectively.

  11. Convolution of large 3D images on GPU and its decomposition

    NASA Astrophysics Data System (ADS)

    Karas, Pavel; Svoboda, David

    2011-12-01

    In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in a frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption, and also in terms of memory transfers between CPU and GPU, which have a significant influence on overall computation time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
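
    The frequency-domain convolution at the heart of this approach rests on the convolution theorem; a minimal CPU-side numpy sketch with zero-padding is shown below (the paper's actual contribution, decomposing the transform so large images fit into GPU memory, is not reproduced here).

        import numpy as np

        def fft_convolve3d(image, kernel):
            """Linear convolution of two 3D arrays via the convolution theorem."""
            shape = [i + k - 1 for i, k in zip(image.shape, kernel.shape)]  # padded size
            F = np.fft.rfftn(image, shape) * np.fft.rfftn(kernel, shape)
            return np.fft.irfftn(F, shape)        # full linear convolution result

        out = fft_convolve3d(np.random.rand(64, 64, 64), np.random.rand(9, 9, 9))
        print(out.shape)                          # (72, 72, 72)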

  12. Directional Radiometry and Radiative Transfer: the Convoluted Path From Centuries-old Phenomenology to Physical Optics

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.

    2014-01-01

    This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.

  13. Brain-wave representation of words by superposition of a few sine waves

    PubMed Central

    Suppes, Patrick; Han, Bing

    2000-01-01

    Data from three previous experiments were analyzed to test the hypothesis that brain waves of spoken or written words can be represented by the superposition of a few sine waves. First, we averaged the data over trials and a set of subjects, and, in one case, over experimental conditions as well. Next we applied a Fourier transform to the averaged data and selected those frequencies with high energy, in no case more than nine in number. The superpositions of these selected sine waves were taken as prototypes. The averaged unfiltered data were the test samples. The prototypes were used to classify the test samples according to a least-squares criterion of fit. The results were seven of seven correct classifications for the first experiment using only three frequencies, six of eight for the second experiment using nine frequencies, and eight of eight for the third experiment using five frequencies. PMID:10890906
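
    A minimal sketch of the pipeline described (average the trials, keep the highest-energy Fourier components as a prototype, classify by least squares); the function names and the real-valued FFT choice are ours, not the authors':

        import numpy as np

        def prototype(avg, n_freq):
            """Superposition of the n_freq highest-energy sine components
            of an averaged waveform."""
            spec = np.fft.rfft(avg)
            keep = np.argsort(np.abs(spec))[-n_freq:]
            filtered = np.zeros_like(spec)
            filtered[keep] = spec[keep]
            return np.fft.irfft(filtered, n=len(avg))

        def classify(sample, prototypes):
            """Assign a test sample to the prototype with the smallest
            sum of squared residuals."""
            errors = [np.sum((sample - p) ** 2) for p in prototypes]
            return int(np.argmin(errors))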

  14. Optical information encryption based on incoherent superposition with the help of the QR code

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Gong, Qiong

    2014-01-01

    In this paper, a novel optical information encryption approach is proposed with the help of the QR code. The method is based on the concept of incoherent superposition, which we introduce for the first time. The information to be encrypted is first transformed into the corresponding QR code, and thereafter the QR code is further encrypted into two phase-only masks analytically, by use of the intensity superposition of two diffraction wave fields. The proposed method has several advantages over previous interference-based methods, such as a higher security level, better robustness against noise attack, and more relaxed working conditions. Numerical simulations and results collected with an actual smartphone validate our proposal.

  15. Robot Behavior Acquisition: Superposition and Compositing of Behaviors Learned through Teleoperation

    NASA Technical Reports Server (NTRS)

    Peters, Richard Alan, II

    2004-01-01

    Superposition of a small set of behaviors, learned via teleoperation, can lead to robust completion of a simple articulated reach-and-grasp task. Results support the hypothesis that a set of learned behaviors can be combined to generate new behaviors of a similar type. This supports the hypothesis that a robot can learn to interact purposefully with its environment through a developmental acquisition of sensory-motor coordination. Teleoperation bootstraps the process by enabling the robot to observe its own sensory responses to actions that lead to specific outcomes. A reach-and-grasp task, learned by an articulated robot through a small number of teleoperated trials, can be performed autonomously with success in the face of significant variations in the environment and perturbations of the goal. Superpositioning was performed using the Verbs and Adverbs algorithm that was developed originally for the graphical animation of articulated characters. Work was performed on Robonaut at NASA-JSC.

  16. Superposition and detection of two helical beams for optical orbital angular momentum communication

    NASA Astrophysics Data System (ADS)

    Liu, Yi-Dong; Gao, Chunqing; Gao, Mingwei; Qi, Xiaoqing; Weber, Horst

    2008-07-01

    A loop-like system with a Dove prism is used to generate a collinear superposition of two helical beams with different azimuthal quantum numbers. After the helical beams, distributed on a circle centered on the optical axis, are generated by a binary amplitude grating, the diffracted field is separated into two polarized fields with the same distribution. Rotated in opposite directions by the Dove prism in the loop-like system and then combined, the two fields generate the collinear superposition of two helical beams in a specified direction. Experimental results are consistent with the theoretical analysis. This method has potential applications in optical communication using the orbital angular momentum of laser beams (optical vortices).

  17. Brain-wave representation of words by superposition of a few sine waves.

    PubMed

    Suppes, P; Han, B

    2000-07-18

    Data from three previous experiments were analyzed to test the hypothesis that brain waves of spoken or written words can be represented by the superposition of a few sine waves. First, we averaged the data over trials and a set of subjects, and, in one case, over experimental conditions as well. Next we applied a Fourier transform to the averaged data and selected those frequencies with high energy, in no case more than nine in number. The superpositions of these selected sine waves were taken as prototypes. The averaged unfiltered data were the test samples. The prototypes were used to classify the test samples according to a least-squares criterion of fit. The results were seven of seven correct classifications for the first experiment using only three frequencies, six of eight for the second experiment using nine frequencies, and eight of eight for the third experiment using five frequencies.

  18. A numerical dressing method for the nonlinear superposition of solutions of the KdV equation

    NASA Astrophysics Data System (ADS)

    Trogdon, Thomas; Deconinck, Bernard

    2014-01-01

    In this paper we present the unification of two existing numerical methods for the construction of solutions of the Korteweg-de Vries (KdV) equation. The first method is used to solve the Cauchy initial-value problem on the line for rapidly decaying initial data. The second method is used to compute finite-genus solutions of the KdV equation. The combination of these numerical methods allows for the computation of exact solutions that are asymptotically (quasi-)periodic finite-gap solutions and are a nonlinear superposition of dispersive, soliton and (quasi-)periodic solutions in the finite (x, t)-plane. Such solutions are referred to as superposition solutions. We compute these solutions accurately for all values of x and t.

  19. Generation of mesoscopic quantum superpositions through Kerr-stimulated degenerate downconversion

    NASA Astrophysics Data System (ADS)

    Paris, Matteo G. A.

    1999-12-01

    A two-step interaction scheme involving chi(2) and chi(3) nonlinear media is suggested for the generation of Schrödinger-cat-like states of a single-mode optical field. In the first step, a weak coherent signal undergoes self-Kerr phase modulation in a chi(3) crystal, leading to a Kerr kitten, namely a microscopic superposition of two coherent states with opposite phases. In the second step, this Kerr kitten enters a chi(2) crystal and, in turn, plays the role of a quantum seed for stimulated phase-sensitive amplification. The output state in the above-threshold regime consists of a quantum superposition of mesoscopically distinguishable squeezed states, i.e., an optical cat-like state. The whole setup does not rely on conditional measurements and is robust against decoherence, as only weak signals interact with the Kerr medium.

  20. Fast superposition T-matrix solution for clusters with arbitrarily-shaped constituent particles

    NASA Astrophysics Data System (ADS)

    Markkanen, Johannes; Yuffa, Alex J.

    2017-03-01

    A fast superposition T-matrix solution is formulated for electromagnetic scattering by a collection of arbitrarily-shaped inhomogeneous particles. The T-matrices for individual constituents are computed by expanding the Green's dyadic in the spherical vector wave functions and formulating a volume integral equation, where the equivalent electric current is the unknown and the spherical vector wave functions are treated as excitations. Furthermore, the volume integral equation and the superposition T-matrix are accelerated by the precorrected-FFT algorithm and the fast multipole algorithm, respectively. The approach allows for an efficient scattering analysis of the clusters and aggregates consisting of a large number of arbitrarily-shaped inhomogeneous particles.

  1. Tunneling-assisted coherent population transfer and creation of coherent superposition states in triple quantum dots

    NASA Astrophysics Data System (ADS)

    Tian, Si-Cong; Wan, Ren-Gang; Wang, Li-Jie; Shu, Shi-Li; Tong, Cun-Zhu; Wang, Li-Jun

    2016-12-01

    A scheme is proposed for coherent population transfer and creation of coherent superposition states assisted by one time-dependent tunneling pulse and one time-independent tunneling pulse in triple quantum dots. Time-dependent tunneling, which is similar to the Stokes laser pulse used in traditional stimulated Raman adiabatic passage, can lead to complete population transfer from the ground state to the indirect exciton states. Time-independent tunneling can also create double dark states, resulting in the distribution of the population and arbitrary coherent superposition states. Such a scheme can also be extended to multiple quantum dots assisted by one time-dependent tunneling pulse and more time-independent tunneling pulses.

  2. Optical threshold secret sharing scheme based on basic vector operations and coherence superposition

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen

    2015-04-01

    We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with the (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is first divided into n shadow images by use of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherence superposition of any two shadow images. Compared with published encryption techniques that focus narrowly on information encryption, the proposed method realizes information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents authority from being centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.

  3. Dose convolution filter: Incorporating spatial dose information into tissue response modeling

    SciTech Connect

    Huang Yimei; Joiner, Michael; Zhao Bo; Liao Yixiang; Burmeister, Jay

    2010-03-15

    Purpose: A model is introduced to integrate biological factors such as cell migration and bystander effects into physical dose distributions, and to incorporate spatial dose information in plan analysis and optimization. Methods: The model consists of a dose convolution filter (DCF) with a single parameter σ. Tissue response is calculated by an existing NTCP model with the DCF-applied dose distribution as input. The authors determined σ of rat spinal cord from published data. The authors also simulated the GRID technique, in which an open field is collimated into many pencil beams. Results: After applying the DCF, the NTCP model successfully fits the rat spinal cord data with a predicted value of σ = 2.6 ± 0.5 mm, consistent with the 2 mm migration distances of remyelinating cells. Moreover, it enables the appropriate prediction of a high relative seriality for spinal cord. The model also predicts the sparing of normal tissues by the GRID technique when the size of each pencil beam becomes comparable to σ. Conclusions: The DCF model incorporates spatial dose information and offers an improved way to estimate tissue response from complex radiotherapy dose distributions. It does not alter the prediction of tissue response in large homogeneous fields, but successfully predicts increased tissue tolerance in small or highly nonuniform fields.
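
    The filtering step itself is a one-liner if the DCF is taken, for concreteness, to be an isotropic Gaussian of width σ (the abstract specifies only a single-parameter convolution filter); the grid and values below are illustrative:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dose_convolution_filter(dose, sigma_mm, voxel_mm):
            """Blur a dose grid with a Gaussian DCF (sigma in mm) before
            passing it to an NTCP model."""
            return gaussian_filter(dose, sigma=sigma_mm / voxel_mm)

        dose = np.zeros((100, 100))
        dose[::10, ::10] = 60.0  # idealized GRID-like pencil beams, Gy
        filtered = dose_convolution_filter(dose, sigma_mm=2.6, voxel_mm=1.0)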

  4. A fast double template convolution isocenter evaluation algorithm with subpixel accuracy

    SciTech Connect

    Winey, Brian; Sharp, Greg; Bussiere, Marc

    2011-01-15

    Purpose: To design a fast Winston-Lutz (fWL) algorithm for accurate analysis of radiation isocenter from images without edge detection or center of mass calculations. Methods: An algorithm has been developed to implement the Winston-Lutz test for mechanical/radiation isocenter agreement using an electronic portal imaging device (EPID). The algorithm detects the position of the radiation shadow of a tungsten ball within a stereotactic cone. The fWL algorithm employs a double convolution to independently find the positions of the sphere and cone centers. Subpixel estimation is used to achieve high accuracy. Results of the algorithm were compared to (1) a human observer with template guidance and (2) an edge detection/center of mass (edCOM) algorithm. Testing was performed with high resolution (0.05 mm/px, film) and low resolution (0.78 mm/px, EPID) image sets. Results: Sphere and cone center relative positions were calculated with the fWL algorithm for high resolution test images with an accuracy of 0.002 ± 0.061 mm, compared to 0.042 ± 0.294 mm for the human observer and 0.003 ± 0.038 mm for the edCOM algorithm. The fWL algorithm required 0.01 s per image, compared to 5 s for the edCOM algorithm and 20 s for the human observer. For lower resolution images the fWL algorithm localized the centers with an accuracy of 0.083 ± 0.12 mm, compared to 0.03 ± 0.5514 mm for the edCOM algorithm. Conclusions: A fast (subsecond) subpixel algorithm has been developed that can accurately determine the center locations of the ball and cone in Winston-Lutz test images without edge detection or COM calculations.
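
    The underlying principle, locating a center by convolving with a template and refining the correlation peak to subpixel accuracy with a parabolic fit, can be sketched as follows (this is not the fWL double convolution itself; the names and the refinement scheme are illustrative):

        import numpy as np
        from scipy.signal import fftconvolve

        def locate(image, template):
            """Matched-filter localization with parabolic subpixel
            refinement; assumes the peak is not on the image border."""
            corr = fftconvolve(image, template[::-1, ::-1], mode='same')
            iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

            def refine(c, i):  # vertex of a parabola through 3 samples
                return i + (c[i - 1] - c[i + 1]) / (
                    2.0 * (c[i - 1] - 2.0 * c[i] + c[i + 1]))

            return refine(corr[:, ix], iy), refine(corr[iy, :], ix)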

  5. Forecasting natural aquifer discharge using a numerical model and convolution.

    PubMed

    Boggs, Kevin G; Johnson, Gary S; Van Kirk, Rob; Fairley, Jerry P

    2014-01-01

    If the nature of groundwater sources and sinks can be determined or predicted, the data can be used to forecast natural aquifer discharge. We present a procedure to forecast the relative contribution of individual aquifer sources and sinks to natural aquifer discharge. Using these individual aquifer recharge components, along with observed aquifer heads for each January, we generate a 1-year, monthly spring discharge forecast for the upcoming year with an existing numerical model and convolution. The results indicate that a forecast of natural aquifer discharge can be developed using only the dominant aquifer recharge sources combined with the effects of aquifer heads (initial conditions) at the time the forecast is generated. We also estimate how our forecast will perform in the future using a jackknife procedure, which indicates that the future performance of the forecast is good (Nash-Sutcliffe efficiency of 0.81). We develop a forecast and demonstrate important features of the procedure by presenting an application to the Eastern Snake Plain Aquifer in southern Idaho.
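
    A sketch of the forecasting step, assuming each recharge component has a known unit response function (deriving these from the numerical model, and the choice of components, is the substance of the paper; the names here are illustrative):

        import numpy as np

        def forecast_discharge(recharge_components, responses, baseline=0.0):
            """Superpose the convolution of each recharge series with its
            aquifer response function to forecast discharge."""
            total = np.zeros(len(recharge_components[0])) + baseline
            for rech, resp in zip(recharge_components, responses):
                total += np.convolve(rech, resp)[:len(rech)]
            return total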

  6. Remote Sensing Image Fusion with Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Zhong, Jinying; Yang, Bin; Huang, Guoyu; Zhong, Fei; Chen, Zhongze

    2016-12-01

    Remote sensing image fusion (RSIF) refers to restoring a high-resolution multispectral image from its corresponding low-resolution multispectral (LMS) image, aided by the panchromatic (PAN) image. Most RSIF methods assume that the missing spatial details of the LMS image can be obtained from the high-resolution PAN image. However, distortions can be produced because of the large difference between the structural components of the LMS and PAN images. In fact, the LMS image's own spatial details can be exploited to improve the resolution. In this paper, a novel two-stage RSIF algorithm is proposed which makes full use of both the spatial details and the spectral information of the LMS image itself. In the first stage, convolutional neural network based super-resolution is used to increase the spatial resolution of the LMS image. In the second stage, the Gram-Schmidt transform is employed to fuse the enhanced MS and PAN images to further improve the resolution of the MS image. Because the spatial resolution is enhanced in the first stage, spectral distortions in the fused image are markedly reduced; moreover, spatial details are preserved in the fused images. QuickBird satellite source images are used to test the performance of the proposed method. The experimental results demonstrate that the proposed method achieves better spatial detail and spectral information simultaneously compared with other well-known methods.

  7. Delta function convolution method (DFCM) for fluorescence decay experiments

    NASA Astrophysics Data System (ADS)

    Zuker, M.; Szabo, A. G.; Bramall, L.; Krajcarski, D. T.; Selinger, B.

    1985-01-01

    A rigorous and convenient method of correcting for the wavelength variation of the instrument response function in time-correlated photon counting fluorescence decay measurements is described. The method involves convolution of a modified functional form F̃s of the physical model with a reference data set measured under conditions identical to those of the sample measurement. The method is completely general, in that an appropriate functional form may be found for any physical model of the excited-state decay process. The modified function includes a term which is a Dirac delta function and terms which give the correct decay times and preexponential values of interest. None of the data is altered in any way, permitting correct statistical analysis of the fitting. The method is readily adaptable to standard deconvolution procedures. The paper describes the theory and application of the method, together with fluorescence decay results obtained from measurements of a number of different samples, including diphenylhexatriene, myoglobin, hemoglobin, 4',6-diamidine-2-phenylindole (DAPI), and lysine-tryptophan-lysine.

  8. Visualizing Flow Over Parametric Surfaces Using Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Forssell, Lisa; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    Line Integral Convolution (LIC) is a powerful technique for imaging and animating vector fields. We extend the LIC paradigm in three ways: (1) The existing technique is limited to vector fields over a regular Cartesian grid. We extend it to vector fields over parametric surfaces, such as those found in curvilinear grids, used in computational fluid dynamics simulations; (2) Periodic motion filters can be used to animate the flow visualization. When the flow lies on a parametric surface, however, the motion appears misleading. We explain why this problem arises and show how to adjust the LIC algorithm to handle it; (3) We introduce a technique to visualize vector magnitudes as well as vector direction. Cabral and Leedom have suggested a method for variable-speed animation, which is based on varying the frequency of the filter function. We develop a different technique based on kernel phase shifts which we have found to show substantially better results. Our implementation of these algorithms utilizes texture-mapping hardware to run in real time, which allows them to be included in interactive applications.
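
    A minimal box-kernel LIC, averaging a noise texture along short streamlines traced in both directions through the field, is sketched below (the paper's extensions to parametric surfaces, periodic motion filters, and kernel phase shifts are not shown):

        import numpy as np

        def lic(vx, vy, texture, n_steps=20, h=0.5):
            """Box-kernel line integral convolution on a Cartesian grid;
            the seed pixel is visited by both directions (fine for a sketch)."""
            ny, nx = texture.shape
            out = np.zeros_like(texture)
            for iy in range(ny):
                for ix in range(nx):
                    acc, cnt = 0.0, 0
                    for sgn in (1.0, -1.0):
                        x, y = float(ix), float(iy)
                        for _ in range(n_steps):
                            j, i = int(round(x)), int(round(y))
                            if not (0 <= i < ny and 0 <= j < nx):
                                break
                            acc += texture[i, j]
                            cnt += 1
                            norm = np.hypot(vx[i, j], vy[i, j]) + 1e-12
                            x += sgn * h * vx[i, j] / norm
                            y += sgn * h * vy[i, j] / norm
                    out[iy, ix] = acc / max(cnt, 1)
            return out

        yy, xx = np.mgrid[0:128, 0:128].astype(float)
        noise = np.random.default_rng(1).random((128, 128))
        img = lic(-(yy - 64), xx - 64, noise)  # circular flow -> curved streaks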

  9. Interleaved convolutional coding for the turbulent atmospheric optical communication channel

    NASA Astrophysics Data System (ADS)

    Davidson, Frederic M.; Koh, Yutai T.

    1988-09-01

    The coding gain of a constraint-length-three, rate one-half convolutional code over a long clear-air atmospheric direct-detection optical communication channel using binary pulse-position modulation signaling was directly measured as a function of interleaving delay for both hard- and soft-decision Viterbi decoding. Maximum coding gains theoretically possible for this code with perfect interleaving and physically unrealizable perfect-measurement decoding were about 7 dB under conditions of weak clear-air turbulence, and 11 dB at moderate turbulence levels. The time scale of the fading (memory) of the channel was directly measured to be tens to hundreds of milliseconds, depending on turbulence levels. Interleaving delays of 5 ms between transmission of the first and second channel bits output by the encoder yield coding gains within 1.5 dB of theoretical limits with soft-decision Viterbi decoding. Coding gains of 4-5 dB were observed with only 100 microseconds of interleaving delay. Soft-decision Viterbi decoding always yielded 1-2 dB more coding gain than hard-decision Viterbi decoding.

  10. Multi-resolution Convolution Methodology for ICP Waveform Morphology Analysis.

    PubMed

    Shaw, Martin; Piper, Ian; Hawthorne, Christopher

    2016-01-01

    Intracranial pressure (ICP) monitoring is a key clinical tool in the assessment and treatment of patients in neurointensive care. ICP morphology analysis can be useful in the classification of waveform features. A methodology for the decomposition of an ICP signal into clinically relevant dimensions has been devised that allows the identification of important ICP waveform types. It has three main components. First, multi-resolution convolution analysis is used for the main signal decomposition. Then, an impulse function is created, with multiple parameters, that can represent any form in the signal under analysis. Finally, a simple, localised optimisation technique is used to find morphologies of interest in the decomposed data. A pilot application of this methodology using a simple signal has been performed. This has shown that the technique works, with receiver operating characteristic area-under-the-curve values for the waveform types (plateau wave, B wave, and high and low compliance states) of 0.936, 0.694, 0.676 and 0.698, respectively. This is a novel technique that showed some promise during the pilot analysis. However, it requires further optimisation to become a usable clinical tool for the automated analysis of ICP signals.

  11. Toward an optimal convolutional neural network for traffic sign recognition

    NASA Astrophysics Data System (ADS)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    Convolutional Neural Networks (CNNs) beat human performance on the German Traffic Sign Benchmark competition. Both the winner and the runner-up teams trained CNNs to recognize 43 traffic signs. However, neither network is computationally efficient, since both have many free parameters and use computationally expensive activation functions. In this paper, we propose a new architecture that reduces the number of parameters by 27% and 22% compared with the two networks. Furthermore, our network uses the Leaky Rectified Linear Unit (ReLU) as the activation function, which needs only a few operations to produce its result. Specifically, compared with the hyperbolic tangent and rectified sigmoid activation functions utilized in the two networks, Leaky ReLU needs only one multiplication operation, which makes it computationally much more efficient than the two other functions. Our experiments on the German Traffic Sign Benchmark dataset show a 0.6% improvement on the best reported classification accuracy, while the network reduces the overall number of parameters by 85% compared with the winner network in the competition.

  12. Study of multispectral convolution scatter correction in high resolution PET

    SciTech Connect

    Yao, R.; Lecomte, R.; Bentourkia, M.

    1996-12-31

    PET images acquired with a high resolution scanner based on arrays of small discrete detectors are obtained at the cost of low sensitivity and increased detector scatter. It has been postulated that these limitations can be overcome by using enlarged discrimination windows to include more low energy events and by developing more efficient energy-dependent methods to correct for scatter. In this work, we investigate one such method based on the frame-by-frame scatter correction of multispectral data. Images acquired in the conventional, broad, and multispectral window modes were processed by the stationary and nonstationary consecutive convolution scatter correction methods. Broad and multispectral window acquisition with a low energy threshold of 129 keV improved system sensitivity by up to 75% relative to a conventional window with a ~350 keV threshold. The degradation of image quality due to the added scatter events can almost be fully recovered by the subtraction-restoration scatter correction. The multispectral method was found to be more sensitive to the nonstationarity of scatter, and its performance was not as good as that of the broad window. It is concluded that new scatter degradation models and correction methods need to be established to take full advantage of multispectral data.

  13. Cell volume regulation in the proximal convoluted tubule.

    PubMed

    Gagnon, J; Ouimet, D; Nguyen, H; Laprade, R; Le Grimellec, C; Carrière, S; Cardinal, J

    1982-10-01

    To evaluate the effect of hyper- and hypotonicity on proximal convoluted tubule (PCT) cell volume, nonperfused PCT were studied in vitro with hypertonic solutions containing sodium chloride, urea, or mannitol (450 mosmol/kg H2O) and with hypotonic low sodium chloride solutions (160 mosmol/kg H2O). When the tubules were subjected to hypertonic peritubular solutions containing NaCl, cell volume immediately decreased by 15.5% and remained constant throughout the experimental period (60 min). With mannitol, the initial decrease was identical to that with NaCl (17.7%), but the PCT volume increased slightly during the experimental period. With urea, the decrease in cell volume was smaller (7%) and transient. In hypotonicity, the PCT swelled rapidly, but this swelling was followed by a rapid regulatory phase in which PCT volume nearly returned to control values after less than 10 min. With a potassium-free peritubular medium or 10⁻³ M ouabain, the regulatory phase of hypotonicity completely disappeared, whereas the cells did not maintain their reduced volume in NaCl-induced hypertonicity. These results suggest that Na-K-ATPase plays an important role in the maintenance of a reduced cellular volume in hypertonicity and in the regulatory phase of hypotonicity, probably by an active extrusion of sodium and water from the cell.

  14. Convolutional networks for fast, energy-efficient neuromorphic computing

    PubMed Central

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  15. Method for Viterbi decoding of large constraint length convolutional codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)

    1988-01-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipelined VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a design parameter and K is the constraint length. The selected path at the end of each NK interval is then taken from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the interval NK, and read out the stored branch information of the selected path, which corresponds to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message in order to select the path to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
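
    For reference, a hard-decision Viterbi decoder for the classic rate-1/2, K = 3 code with octal generators (7, 5); this sketch stores full paths (register exchange) instead of performing the NK-interval trace-back that is the subject of the method above:

        G = (0b111, 0b101)  # generator polynomials (7, 5) octal, K = 3

        def encode(bits):
            """Rate-1/2 convolutional encoder: two output bits per input bit."""
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state              # newest bit on top
                out += [bin(reg & g).count('1') & 1 for g in G]
                state = reg >> 1                    # keep K - 1 = 2 state bits
            return out

        def viterbi(received):
            """Hard-decision Viterbi decoding over the 4-state trellis."""
            INF = 10**9
            metric = [0, INF, INF, INF]             # encoder starts in state 0
            paths = [[], [], [], []]
            for k in range(0, len(received), 2):
                r = received[k:k + 2]
                new_metric, new_paths = [INF] * 4, [None] * 4
                for s in range(4):
                    if metric[s] == INF:
                        continue
                    for b in (0, 1):
                        reg = (b << 2) | s
                        expected = [bin(reg & g).count('1') & 1 for g in G]
                        m = metric[s] + sum(x != y for x, y in zip(expected, r))
                        nxt = reg >> 1
                        if m < new_metric[nxt]:
                            new_metric[nxt], new_paths[nxt] = m, paths[s] + [b]
                metric, paths = new_metric, new_paths
            return paths[min(range(4), key=lambda s: metric[s])]

        msg = [1, 0, 1, 1, 0, 0, 1]
        assert viterbi(encode(msg)) == msg          # error-free channel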

  16. Fully automated quantitative cephalometry using convolutional neural networks.

    PubMed

    Arık, Sercan Ö; Ibragimov, Bulat; Xing, Lei

    2017-01-01

    Quantitative cephalometry plays an essential role in clinical diagnosis, treatment, and surgery. Development of fully automated techniques for these procedures is important to enable consistently accurate computerized analyses. We study the application of deep convolutional neural networks (CNNs) for fully automated quantitative cephalometry for the first time. The proposed framework utilizes CNNs for detection of landmarks that describe the anatomy of the depicted patient and yield quantitative estimation of pathologies in the jaws and skull base regions. We use a publicly available cephalometric x-ray image dataset to train CNNs for recognition of landmark appearance patterns. CNNs are trained to output probabilistic estimations of different landmark locations, which are combined using a shape-based model. We evaluate the overall framework on the test set and compare with other proposed techniques. We use the estimated landmark locations to assess anatomically relevant measurements and classify them into different anatomical types. Overall, our results demonstrate high anatomical landmark detection accuracy ([Formula: see text] to 2% higher success detection rate for a 2-mm range compared with the top benchmarks in the literature) and high anatomical type classification accuracy ([Formula: see text] average classification accuracy for test set). We demonstrate that CNNs, which merely input raw image patches, are promising for accurate quantitative cephalometry.

  17. Deep convolutional neural networks for classifying GPR B-scans

    NASA Astrophysics Data System (ADS)

    Besaw, Lance E.; Stimac, Philip J.

    2015-05-01

    Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on the field operatives. Recent advancements in machine learning, specifically deep learning artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antennae, handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.

  18. HEp-2 Cell Image Classification with Deep Convolutional Neural Networks.

    PubMed

    Gao, Zhimin; Wang, Lei; Zhou, Luping; Zhang, Jianjia

    2016-02-08

    Efficient Human Epithelial-2 (HEp-2) cell image classification can facilitate the diagnosis of many autoimmune diseases. This paper proposes an automatic framework for this classification task, by utilizing the deep convolutional neural networks (CNNs) which have recently attracted intensive attention in visual recognition. In addition to describing the proposed classification framework, this paper elaborates several interesting observations and findings obtained by our investigation. They include the important factors that impact network design and training, the role of rotation-based data augmentation for cell images, the effectiveness of cell image masks for classification, and the adaptability of the CNN-based classification system across different datasets. Extensive experimental study is conducted to verify the above findings and compares the proposed framework with the well-established image classification models in the literature. The results on benchmark datasets demonstrate that i) the proposed framework can effectively outperform existing models by properly applying data augmentation; ii) our CNN-based framework has excellent adaptability across different datasets, which is highly desirable for cell image classification under varying laboratory settings. Our system is ranked high in the cell image classification competition hosted by ICPR 2014.

  19. Designing the optimal convolution kernel for modeling the motion blur

    NASA Astrophysics Data System (ADS)

    Jelinek, Jan

    2011-06-01

    Motion blur acts on an image like a two-dimensional low pass filter whose spatial frequency characteristic depends both on the trajectory of the relative motion between the scene and the camera and on the velocity vector variation along it. When motion during exposure is permitted, the conventional, static notions of both the image exposure and the scene-to-image mapping become unsuitable and must be revised to accommodate the image formation dynamics. This paper develops an exact image formation model for arbitrary object-camera relative motion with arbitrary velocity profiles. Moreover, for any motion the camera may operate in either continuous or flutter shutter exposure mode. Its result is a convolution kernel, which is optimally designed for both the given motion and the sensor array geometry, and hence permits the most accurate computational undoing of the blurring effects for the given camera, as required in forensic and high-security applications. The theory has been implemented and a few examples are shown in the paper.
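
    The simplest instance of such a kernel rasterizes a sampled motion trajectory into a normalized point-spread function; the paper's optimal kernel additionally accounts for the velocity profile and sensor-array geometry, which this uniform-sampling sketch ignores:

        import numpy as np

        def motion_kernel(trajectory, size):
            """Accumulate dwell time of an (x, y) pixel-unit trajectory on a
            kernel grid and normalize to unit sum."""
            k = np.zeros((size, size))
            c = size // 2
            for x, y in trajectory:
                i, j = int(round(c + y)), int(round(c + x))
                if 0 <= i < size and 0 <= j < size:
                    k[i, j] += 1.0
            return k / k.sum()

        t = np.linspace(0.0, 1.0, 101)              # normalized exposure time
        diag = np.stack([6 * t, 6 * t], axis=1)     # 45-degree linear motion
        kernel = motion_kernel(diag, size=15)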

  20. A deep convolutional neural network for recognizing foods

    NASA Astrophysics Data System (ADS)

    Jahani Heravi, Elnaz; Habibi Aghdam, Hamed; Puig, Domenec

    2015-12-01

    Controlling food intake is an efficient way for a person to tackle the obesity problem seen in countries worldwide. This is achievable by developing a smartphone application that is able to recognize foods and compute their calories. State-of-the-art methods are chiefly based on hand-crafted feature extraction methods such as HOG and Gabor. Recent advances in large-scale object recognition datasets such as ImageNet have revealed that deep Convolutional Neural Networks (CNNs) possess more representational power than hand-crafted features. The main challenge with CNNs is to find the appropriate architecture for each problem. In this paper, we propose a deep CNN which consists of 769,988 parameters. Our experiments show that the proposed CNN outperforms the state-of-the-art methods and improves the best result of traditional methods by 17%. Moreover, using an ensemble of two CNNs trained at two different times, we are able to improve the classification performance by 21.5%.

  1. Classification of breast cancer cytological specimen using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Żejmo, Michał; Kowal, Marek; Korbicz, Józef; Monczak, Roman

    2017-01-01

    The paper presents a deep learning approach for automatic classification of breast tumors based on fine needle cytology. The main aim of the system is to distinguish benign from malignant cases based on microscopic images. Experiment was carried out on cytological samples derived from 50 patients (25 benign cases + 25 malignant cases) diagnosed in Regional Hospital in Zielona Góra. To classify microscopic images, we used convolutional neural networks (CNN) of two types: GoogLeNet and AlexNet. Due to the very large size of images of cytological specimen (on average 200000 × 100000 pixels), they were divided into smaller patches of size 256 × 256 pixels. Breast cancer classification usually is based on morphometric features of nuclei. Therefore, training and validation patches were selected using Support Vector Machine (SVM) so that suitable amount of cell material was depicted. Neural classifiers were tuned using GPU accelerated implementation of gradient descent algorithm. Training error was defined as a cross-entropy classification loss. Classification accuracy was defined as the percentage ratio of successfully classified validation patches to the total number of validation patches. The best accuracy rate of 83% was obtained by GoogLeNet model. We observed that more misclassified patches belong to malignant cases.

  2. Convolutional networks for fast, energy-efficient neuromorphic computing.

    PubMed

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  3. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters.

    PubMed

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-21

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for each of the electron, positron, and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300, and 600 keV mono-energetic photons and on (18)F, (99m)Tc, (131)I, and (177)Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung, and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, evaluated by comparing 80 absorbed dose (AD) points per phantom between Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results with SCC were still good but slightly worse: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for (99m)Tc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computational complexity.
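
    The key ingredient, scaling a water kernel by radiological (density-weighted) distance, can be shown in a deliberately toy 1-D superposition; the kernel, attenuation, and densities below are made up, and real CC algorithms collapse energy onto discrete cone axes in 3-D:

        import numpy as np

        def dose_1d(terma, density, kernel, dx):
            """1-D superposition of TERMA with a water kernel evaluated at
            the radiological distance between voxels."""
            n = len(terma)
            dose = np.zeros(n)
            for j in range(n):                      # interaction site
                for i in range(n):                  # deposition site
                    lo, hi = sorted((i, j))
                    rad = np.sum(density[lo:hi]) * dx
                    dose[i] += terma[j] * kernel(rad)
            return dose

        kernel = lambda r: np.exp(-r / 3.0)         # toy EDKw, not physical
        density = np.r_[np.ones(20), 0.25 * np.ones(20), np.ones(20)]
        terma = np.exp(-0.05 * np.cumsum(density))  # toy primary attenuation
        dose = dose_1d(terma, density, kernel, dx=1.0)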

  4. A convolution model for computing the far-field directivity of a parametric loudspeaker array.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2015-02-01

    This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented by applying phased array techniques. The convolution of the product directivity with the Westervelt directivity is suggested, substituting for the past practice of using the product directivity only. The computed directivity of a PLA using the proposed convolution model achieves significantly better agreement with measured directivity, at negligible computational cost.
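
    Once the two constituent directivities are sampled on a common angular grid, the proposed model is a single convolution over the angle axis (computing the product directivity from the array layout and the Westervelt directivity from the nonlinear acoustics is left to the paper; the function name is ours):

        import numpy as np

        def combined_directivity(product_dir, westervelt_dir, dtheta):
            """Far-field PLA directivity as the angle-domain convolution of
            the product directivity with the Westervelt directivity."""
            out = np.convolve(product_dir, westervelt_dir, mode='same') * dtheta
            return out / out.max()  # normalize the on-axis response to unity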

  5. Error Analysis of Padding Schemes for DFT’s of Convolutions and Derivatives

    DTIC Science & Technology

    2012-01-31

    Only fragments of the record text are available. They indicate that equations in the report relate linear convolutions to corresponding cyclic convolutions, with the justification originating in Oppenheim and Schafer (1975), and that many numerical tests have shown that this so-called zero padding improves the computation of Stokes-type quantities. Cited references include Oppenheim AV, Schafer RW (1975) Digital Signal Processing, Prentice-Hall, Englewood Cliffs, New Jersey.
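
    The relation at issue, that the DFT product reproduces a linear convolution only when both sequences are zero padded to at least the full convolution length, can be verified directly:

        import numpy as np

        x = np.array([1.0, 2.0, 3.0, 4.0])
        h = np.array([1.0, -1.0, 0.5])

        # Cyclic convolution at the signal length wraps the tail around.
        cyclic = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))))

        # Zero padding to N >= len(x) + len(h) - 1 removes the wrap-around.
        N = len(x) + len(h) - 1
        linear = np.real(np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)))
        print(np.allclose(linear, np.convolve(x, h)))  # True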

  6. [Superposition impact characteristics of air pollution from decentralized docks in a freshwater port].

    PubMed

    Liu, Jian-chang; Li, Xing-hua; Xu, Hong-lei; Cheng, Jin-xiang; Wang, Zhong-dai; Xiao, Yang

    2013-05-01

    Air pollution from a freshwater port is mainly dust pollution, including dust from material loading and unloading, road dust, and wind erosion dust from stockpiles and bare soil. Dust pollution from a single dock differs markedly from the air pollution produced by multiple scattered docks. Jining Port of Shandong Province was selected as a case study to quantify the superposition impact of air pollution from multiple scattered docks on the regional air environment and to provide technical support for the systematic evaluation of port air pollution. The results indicate that (1) air pollution from the freshwater port accounts for a small proportion of the impact on regional environmental quality because the port consists of several small scattered docks; (2) however, the geometric center of the region over which the docks are distributed is the most severely affected, receiving the greatest superposition of air pollution; and (3) the ADMS model is helpful for an effective, integrated assessment of the superposition impact of multiple non-point pollution sources when differences in upper-air weather conditions over a large area are not considered.

  7. Aerodynamic Analysis of the Truss-Braced Wing Aircraft Using Vortex-Lattice Superposition Approach

    NASA Technical Reports Server (NTRS)

    Ting, Eric Bi-Wen; Reynolds, Kevin Wayne; Nguyen, Nhan T.; Totah, Joseph J.

    2014-01-01

    The SUGAR Truss-Braced Wing (TBW) aircraft concept is a Boeing-developed N+3 aircraft configuration funded by the NASA ARMD Fixed Wing Project. This future-generation transport aircraft concept is designed to be aerodynamically efficient by employing a high aspect ratio wing design. The aspect ratio of the TBW is on the order of 14, which is significantly greater than those of current-generation transport aircraft. This paper presents a recent aerodynamic analysis of the TBW aircraft using the conceptual vortex-lattice aerodynamic tool VORLAX and an aerodynamic superposition approach. Based on the underlying linear potential flow theory, the principle of aerodynamic superposition is leveraged to deal with the complex aerodynamic configuration of the TBW. By decomposing the full TBW configuration into individual aerodynamic lifting components, the total aerodynamic characteristics of the full configuration can be estimated from the contributions of the individual components. The aerodynamic superposition approach shows excellent agreement with CFD results computed by FUN3D, USM3D, and STAR-CCM+.

  8. Sagnac interferometry with coherent vortex superposition states in exciton-polariton condensates

    NASA Astrophysics Data System (ADS)

    Moxley, Frederick Ira; Dowling, Jonathan P.; Dai, Weizhong; Byrnes, Tim

    2016-05-01

    We investigate prospects of using counter-rotating vortex superposition states in nonequilibrium exciton-polariton Bose-Einstein condensates for the purposes of Sagnac interferometry. We first investigate the stability of vortex-antivortex superposition states, and show that they survive at steady state in a variety of configurations. Counter-rotating vortex superpositions are of potential interest to gyroscope and seismometer applications for detecting rotations. Methods of improving the sensitivity are investigated by targeting high momentum states via metastable condensation, and the application of periodic lattices. The sensitivity of the polariton gyroscope is compared to its optical and atomic counterparts. Due to the large interferometer areas in optical systems and small de Broglie wavelengths for atomic BECs, the sensitivity per detected photon is found to be considerably less for the polariton gyroscope than with competing methods. However, polariton gyroscopes have an advantage over atomic BECs in a high signal-to-noise ratio, and have other practical advantages such as room-temperature operation, area independence, and robust design. We estimate that the final sensitivities including signal-to-noise aspects are competitive with existing methods.

  9. Role of the retinal detector array in perceiving the superposition effects of light

    NASA Astrophysics Data System (ADS)

    Roychoudhuri, Chandrasekhar; Lakshminarayanan, Vasudevan

    2006-08-01

    The perception of light in nature comes through the photopigment molecules of our retina. The objective of this paper is to relate our modern understanding of the quantum mechanical chemical processes in the retinal molecules to our observation of superposition ("interference") fringes due to multiple light beams. The issue of "interference" is important for two subtle reasons. First, we do not perceive light except through the response of the light-detecting molecules. Second, EM fields do not operate on each other to create the "interference" (superposition) effects. When the intrinsic molecular properties of a detector allow it to respond simultaneously to all the light beams superposed on it, it sums the effects and reports the corresponding superposition "fringes". In the human eye, "seeing" (perception) is initiated by photo-isomerization of retinal, the chromophore of the opsin molecule. There are several orders of magnitude between the characteristic times for the molecular processes of light absorption and visual signal generation through the photochemical cascade. This allows us to function in the daily chores of walking and visual identification of objects, and to enjoy the beauty of natural scenery, even though the retinal layer is bombarded simultaneously by innumerable beams of light of the same and different frequencies, which would normally produce a flood of electronic "white noise" over a very wide range of temporal frequencies, namely the heterodyne beat signal. How do the eyes completely suppress this wide range of heterodyne beat signals?

  10. Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes

    SciTech Connect

    Houshmand, Monireh; Hosseini-Khayat, Saied

    2011-02-15

    Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a "pearl-necklace" encoder. Despite the algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
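
    The final step, equating the minimal memory with the weight of the longest path in the noncommutativity graph, reduces to longest-path in a weighted DAG; constructing that graph from the commutation relations of the gate strings is the paper's contribution and is not shown here:

        from functools import lru_cache

        def minimal_memory(n_nodes, edges):
            """Longest-path weight in a DAG given as (u, v, weight) edges."""
            adj = {u: [] for u in range(n_nodes)}
            for u, v, w in edges:
                adj[u].append((v, w))

            @lru_cache(maxsize=None)
            def longest_from(u):
                return max((w + longest_from(v) for v, w in adj[u]), default=0)

            return max(longest_from(u) for u in range(n_nodes))

        # Toy graph: paths 0-1-3 (weight 5) and 0-2-3 (weight 3) -> prints 5.
        print(minimal_memory(4, [(0, 1, 2), (1, 3, 3), (0, 2, 1), (2, 3, 2)]))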

  11. Prediction of in vivo plasma concentration-time profile from in vitro release data of designed formulations of milnacipran using numerical convolution method.

    PubMed

    Singhvi, Gautam; Shah, Abhishek; Yadav, Nilesh; Saha, Ranendra N

    2015-01-01

    The aim of this study was to predict the in vivo plasma drug level of milnacipran (MIL) from in vitro dissolution data of immediate release (IR 50 mg and IR 100 mg) and matrix-based controlled release (CR 100 mg) formulations. Plasma drug concentrations of these formulations were predicted by the numerical convolution method, which uses in vitro dissolution data to derive plasma drug levels from reported pharmacokinetic (PK) parameters of a test product. The bioavailability parameters (Cmax and AUC) predicted by the convolution method were 106.90 ng/mL and 1138.96 ng/mL h for IR 50 mg, and 209.80 ng/mL and 2280.61 ng/mL h for IR 100 mg, similar to those reported in the literature. The calculated PK parameters were validated with the percentage prediction error (% PE). The % PE values for Cmax and AUC were 7.04 and -7.35 for the IR 50 mg formulation and 11.10 and -8.21 for the IR 100 mg formulation. The Cmax, Tmax, and AUC for CR 100 mg were 120 ng/mL, 10 h, and 2112.60 ng/mL h, respectively. The predicted plasma profile of the designed CR formulation, compared with the IR formulations, indicated that the CR formulation can sustain the plasma concentration of MIL for 24 h. This convolution method is thus very useful for the design and selection of formulations before animal and human studies.
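
    A generic sketch of the prediction step: the in vitro release rate is numerically convolved with a one-compartment unit impulse response to yield a plasma profile (the elimination rate, volume of distribution, and release curve below are illustrative placeholders, not the reported MIL parameters):

        import numpy as np

        def predict_plasma(release_frac, t, dose_mg, ke, vd_l, f=1.0):
            """Plasma level (ng/mL) as the convolution of the input rate with
            a one-compartment unit impulse response."""
            rate = dose_mg * f * np.gradient(release_frac, t)   # mg/h absorbed
            uir = np.exp(-ke * t) / vd_l                        # (mg/L) per mg
            dt = t[1] - t[0]
            return np.convolve(rate, uir)[:len(t)] * dt * 1000  # mg/L -> ng/mL

        t = np.linspace(0.0, 24.0, 241)                 # hours
        release = 1.0 - np.exp(-0.3 * t)                # hypothetical CR profile
        cp = predict_plasma(release, t, dose_mg=100.0, ke=0.08, vd_l=400.0)
        print(round(float(cp.max()), 1), "ng/mL at", t[int(np.argmax(cp))], "h")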

  12. Radiotherapy dose calculations in the presence of hip prostheses

    SciTech Connect

    Keall, Paul J.; Siebers, Jeffrey V.; Jeraj, Robert; Mohan, Radhe

    2003-06-30

    The high density and atomic number of hip prostheses for patients undergoing pelvic radiotherapy challenge our ability to accurately calculate dose. A new clinical dose calculation algorithm, Monte Carlo, will allow accurate calculation of the radiation transport both within and beyond hip prostheses. The aim of this research was to investigate, for both phantom and patient geometries, the capability of various dose calculation algorithms to yield accurate treatment plans. Dose distributions in phantom and patient geometries with high atomic number prostheses were calculated using Monte Carlo, superposition, pencil beam, and no-heterogeneity correction algorithms. The phantom dose distributions were analyzed by depth dose and dose profile curves. The patient dose distributions were analyzed by isodose curves, dose-volume histograms (DVHs) and tumor control probability/normal tissue complication probability (TCP/NTCP) calculations. Monte Carlo calculations predicted the dose enhancement and reduction at the proximal and distal prosthesis interfaces respectively, whereas superposition and pencil beam calculations did not. However, further from the prosthesis, the differences between the dose calculation algorithms diminished. Treatment plans calculated with superposition showed similar isodose curves, DVHs, and TCP/NTCP as the Monte Carlo plans, except in the bladder, where Monte Carlo predicted a slightly lower dose. Treatment plans calculated with either the pencil beam method or with no heterogeneity correction differed significantly from the Monte Carlo plans.

  13. Text-Attentional Convolutional Neural Network for Scene Text Detection.

    PubMed

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.

  14. Text-Attentional Convolutional Neural Networks for Scene Text Detection.

    PubMed

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-03-28

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature computed globally from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this work, we present a new system for scene text detection by proposing a novel Text-Attentional Convolutional Neural Network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called Contrast-Enhancement Maximally Stable Extremal Regions (CE-MSERs) is developed, which extends the widely-used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 dataset, with an F-measure of 0.82, improving the state-of-the-art results substantially.

  15. Deep convolutional networks for pancreas segmentation in CT imaging

    NASA Astrophysics Data System (ADS)

    Roth, Holger R.; Farag, Amal; Lu, Le; Turkbey, Evrim B.; Summers, Ronald M.

    2015-03-01

    Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving accuracies as high as those reported for state-of-the-art segmentation of organs like the liver, heart, or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for "deep learning" methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that the classification features used are trained directly from the imaging data. We present a fully automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger than 0.5 are retained. These retained superpixels serve as a highly sensitive initial delineation of the pancreas and its surroundings for a ConvNet that samples a bounding box around each superpixel at different scales (with random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not. We evaluate our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve Dice scores averaging 68% +/- 10% (range, 43-80%) in testing. This shows promise for accurate pancreas segmentation using a deep learning approach and compares favorably with state-of-the-art methods.

  16. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.

    PubMed

    Pereira, Sergio; Pinto, Adriano; Alves, Victor; Silva, Carlos A

    2016-05-01

    Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage in improving the quality of life of oncological patients. Magnetic resonance imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in clinical practice. So, automatic and reliable segmentation methods are required; however, the large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3×3 kernels. The use of small kernels allows designing a deeper architecture, besides having a positive effect against overfitting, given the smaller number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which, though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated in the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), obtaining simultaneously the first position for the complete, core, and enhancing regions in the Dice Similarity Coefficient metric (0.88, 0.83, 0.77) for the Challenge data set. It also obtained the overall first position on the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining second place, with Dice Similarity Coefficient metrics of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively.
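
    The 3×3-kernel argument is easy to make concrete: two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 convolution while using fewer weights per input/output channel pair (18 versus 25) and adding an extra nonlinearity. A minimal sketch, assuming PyTorch is available; the layer counts and channel widths are illustrative, not the authors' architecture:

        import torch
        import torch.nn as nn

        # Two stacked 3x3 convolutions: 5x5 receptive field, 18 weights per
        # input/output channel pair, and two nonlinearities.
        stacked = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )

        # One 5x5 convolution: same receptive field, 25 weights per pair,
        # but only a single nonlinearity.
        single = nn.Sequential(nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU())

        x = torch.randn(1, 1, 64, 64)             # one single-channel MRI-like patch
        print(stacked(x).shape, single(x).shape)  # both: torch.Size([1, 16, 64, 64])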

  17. Brain Tumor Segmentation using Convolutional Neural Networks in MRI Images.

    PubMed

    Pereira, Sergio; Pinto, Adriano; Alves, Victor; Silva, Carlos A

    2016-03-04

    Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage in improving the quality of life of oncological patients. Magnetic Resonance Imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in clinical practice. So, automatic and reliable segmentation methods are required; however, the large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3×3 kernels. The use of small kernels allows designing a deeper architecture, besides having a positive effect against overfitting, given the smaller number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which, though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated in the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), obtaining simultaneously the first position for the complete, core, and enhancing regions in the Dice Similarity Coefficient metric (0.88, 0.83, 0.77) for the Challenge data set. It also obtained the overall first position on the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining second place, with Dice Similarity Coefficient metrics of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively.

  18. A staggered-grid convolutional differentiator for elastic wave modelling

    NASA Astrophysics Data System (ADS)

    Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun

    2015-11-01

    The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for the first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first-derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution and as efficient as the finite-difference (FD) method. The choice of window function influences the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for CDs of different orders by minimizing the spectral error of the derivative, and compare them with the usual Hanning window for tapering the CD operators. The optimal Gaussian window turns out to be similar to the Hanning window for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method at different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, at lower computational cost. For example, an 8th-order staggered-grid CD operator can achieve the same accuracy as a 16th-order staggered-grid FD algorithm with half the computational resources and time. Numerical examples for a homogeneous model and a crustal waveguide model illustrate the superiority of the CD operators over conventional staggered-grid FD operators for the simulation of wave propagation.
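
    The construction described above (inverse Fourier transformation of the band-limited spectrum of the first-derivative operator, truncated by a taper window) can be sketched in a few lines. The ideal staggered-grid coefficients work out to c_m = (-1)^m / (pi (m + 1/2)^2); here a simple Hann taper stands in for the optimal Gaussian windows searched for in the paper, and the grid spacing and wavenumber are illustrative:

        import numpy as np

        def staggered_cd_coeffs(M):
            """Ideal staggered-grid differentiator coefficients, Hann-tapered.

            Derived from the inverse Fourier transform of the band-limited
            first-derivative spectrum i*k; c_m weights samples at +/-(m+1/2)h.
            """
            m = np.arange(M)
            ideal = (-1.0) ** m / (np.pi * (m + 0.5) ** 2)
            hann = 0.5 * (1.0 + np.cos(np.pi * m / M))  # taper suppresses ringing
            return ideal * hann

        # Test: derivative of sin(k*x) on a staggered grid of spacing h.
        h, k, M = 0.1, 8.0, 8
        c = staggered_cd_coeffs(M)
        x = 0.3
        offsets = (np.arange(M) + 0.5) * h
        deriv = np.sum(c * (np.sin(k * (x + offsets)) - np.sin(k * (x - offsets)))) / h
        print(deriv, k * np.cos(k * x))  # close agreement well below Nyquist (pi/h)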

  19. Single-trial EEG RSVP classification using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shamwell, Jared; Lee, Hyungtae; Kwon, Heesung; Marathe, Amar R.; Lawhern, Vernon; Nothwang, William

    2016-05-01

    Traditionally, Brain-Computer Interfaces (BCI) have been explored as a means to return function to paralyzed or otherwise debilitated individuals. An emerging use for BCIs is in human-autonomy sensor fusion, where physiological data from healthy subjects are combined with machine-generated information to enhance the capabilities of artificial systems. While human-autonomy fusion of physiological data and computer vision has been shown to improve classification during visual search tasks, to date these approaches have relied on separately trained classification models for each modality. We aim to improve human-autonomy classification performance by developing a single framework that builds codependent models of human electroencephalograph (EEG) and image data to generate fused target estimates. As a first step, we developed a novel convolutional neural network (CNN) architecture and applied it to EEG recordings of subjects classifying target and non-target image presentations during a rapid serial visual presentation (RSVP) image triage task. The low signal-to-noise ratio (SNR) of EEG inherently limits the accuracy of single-trial classification, and when combined with the high dimensionality of EEG recordings, extremely large training sets are needed to prevent overfitting and achieve accurate classification from raw EEG data. This paper explores a new deep CNN architecture for generalized multi-class, single-trial EEG classification across subjects. We compare classification performance of the generalized CNN architecture trained across all subjects to the individualized XDAWN, HDCA, and CSP neural classifiers, which are trained and tested on single subjects. Preliminary results show that our CNN matches and slightly exceeds the performance of the other classifiers despite being trained across subjects.

  20. A Bäcklund Transformation and Nonlinear Superposition Formula of the Caudrey-Dodd-Gibbon-Kotera-Sawada Hierarchy

    NASA Astrophysics Data System (ADS)

    Hu, Xing-Biao; Bullough, Robin

    1998-03-01

    In this paper, the Caudrey-Dodd-Gibbon-Kotera-Sawada hierarchy in bilinear form is considered. A Bäcklund transformation for the CDGKS hierarchy is presented. Under certain conditions, the corresponding nonlinear superposition formula is proved.

  1. SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems

    SciTech Connect

    Xiao, K; Chen, D. Z; Hu, X. S; Zhou, B

    2014-06-01

    Purpose: It is well known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing pattern in dose deposition, which leads to several memory efficiency issues on the GPU, such as un-coalesced writes and atomic operations. We propose a new method to alleviate these issues on CPU-GPU heterogeneous systems, which achieves an overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition accumulates dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, each fine-tuned for the CPU or the GPU: (1) each GPU thread writes dose results with location information to a buffer in GPU memory, which achieves fully coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on the CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation on various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Compared with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method achieved a 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance of MCCS on CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. This research was supported in part by NSF under Grants CCF
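
    The three-step buffered deposition is easy to emulate on the host side: rays append (voxel index, dose) pairs to a dense buffer instead of scattering directly into the volume (which on a GPU would require atomic operations), and the volume is accumulated from the buffer afterwards. A schematic NumPy sketch; the array sizes and random "rays" are illustrative only:

        import numpy as np

        rng = np.random.default_rng(0)
        volume = np.zeros(64 ** 3)                    # flattened dose volume

        # Step 1 (GPU-like): each "thread" writes (index, dose) pairs to a buffer
        # with coalesced, atomic-free writes -- no contention on `volume`.
        n_deposits = 100_000
        idx_buffer = rng.integers(0, volume.size, n_deposits)   # voxel indices
        dose_buffer = rng.random(n_deposits)                    # deposited doses

        # Step 2: the buffer is transferred to host memory (a no-op here).

        # Step 3 (CPU): accumulate the volume; np.add.at handles repeated indices.
        np.add.at(volume, idx_buffer, dose_buffer)
        print(volume.sum(), dose_buffer.sum())        # identical totals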

  2. Prediction in cases with superposition of different hydrological phenomena, such as from weather "cold drops"

    NASA Astrophysics Data System (ADS)

    Anton, J. M.; Grau, J. B.; Tarquis, A. M.; Andina, D.; Sanchez, M. E.

    2012-04-01

    The authors have been involved in drafting Model Codes for Construction (prior to the Eurocodes, now Euronorms) and a Drainage Instruction for Roads for Spain, which adopted a prediction model from the US Bureau of Public Roads (BPR) to account for the evident regional differences in the Iberian Peninsula and the Spanish Isles, as well as in some related studies. They used Extreme Value Type I (Gumbel law) models with independent actions in superposition; this law was also adopted by CEDEX to obtain maps of extreme rains. These methods could be extrapolated to other extreme value distributions, but the first step was useful to establish valid superposition schemes for actions in norms. As a real case, in the east of Spain rain usually comes extensively from normal weather perturbations, but in other cases local "cold drops" produce high rains of about 400 mm in a day, causing inundations and in some cases local disasters. The city of Valencia in eastern Spain was flooded to a depth of 1.5 m by a cold drop in 1957, after which the river Turia, which formerly ran through the city, was diverted some kilometers to the south in a wider canal. With the Gumbel law, the expected intensity grows with the time of occurrence, indicating a value for each given "return period", but the rate of increase grows with the "annual dispersion" of the Gumbel law, so some rare dangerous events may become quite possible over periods of many years. This can be shown with relatively simple models, e.g. with the Extreme Value Type I law, and made more precise or discussed further. Such effects were used for the superposition of actions on a structure in the Model Codes, and may be combined with hydraulic effects, e.g. for bridges over rivers. Gumbel laws, or other extreme value laws, with different dispersions may also describe marine actions such as waves, as well as earthquakes, tsunamis, and perhaps human perturbations, which could include industrial catastrophes or wars between civilizations when historical periods are considered.
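
    The superposition argument can be made quantitative: if two independent mechanisms (widespread frontal rain and local cold drops) each produce an annual maximum following a Gumbel law, the combined annual maximum has a CDF equal to the product of the two CDFs, and the mechanism with the larger dispersion dominates long return periods. A sketch with invented location and scale parameters:

        import numpy as np

        def gumbel_cdf(x, mu, beta):
            # Extreme Value Type I (Gumbel) distribution of annual maxima.
            return np.exp(-np.exp(-(x - mu) / beta))

        def return_level(T, cdf, x=np.linspace(0.0, 1000.0, 200_001)):
            # Smallest x with CDF >= 1 - 1/T (non-exceedance for return period T).
            return x[np.searchsorted(cdf(x), 1.0 - 1.0 / T)]

        frontal = lambda x: gumbel_cdf(x, mu=60.0, beta=15.0)    # frequent, low dispersion
        cold_drop = lambda x: gumbel_cdf(x, mu=30.0, beta=55.0)  # rare, high dispersion
        combined = lambda x: frontal(x) * cold_drop(x)           # independent superposition

        for T in (10, 100, 500):
            print(T, round(return_level(T, frontal), 1),
                  round(return_level(T, cold_drop), 1),
                  round(return_level(T, combined), 1))
        # The high-dispersion "cold drop" mechanism dominates long return periods.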

  3. Security-enhanced asymmetric optical cryptosystem based on coherent superposition and equal modulus decomposition

    NASA Astrophysics Data System (ADS)

    Cai, Jianjun; Shen, Xueju; Lin, Chao

    2016-01-01

    We propose a security-enhanced asymmetric optical cryptosystem based on coherent superposition and equal modulus decomposition, combining the full phase encryption technique with our previous cryptosystem. In the encryption process, the original image is phase encoded rather than bonded with a random phase mask (RPM). In the decryption process, two phase-contrast filters (PCFs) are employed to obtain the plaintext. As a consequence, the new cryptosystem guarantees high-level security against attacks based on the iterative Fourier transform while maintaining the good performance of our previous cryptosystem, especially its convenience. Numerical simulations are presented to verify the validity and performance of the modified cryptosystem.
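
    A minimal numerical sketch of the coherent-superposition idea: any complex field F = r*exp(i*phi) can be split pixel-wise into two masks of identical modulus R whose coherent sum reproduces F. This is a generic equal-modulus split for illustration, not necessarily the authors' exact decomposition:

        import numpy as np

        rng = np.random.default_rng(1)
        F = rng.random((4, 4)) * np.exp(2j * np.pi * rng.random((4, 4)))  # complex field

        r, phi = np.abs(F), np.angle(F)
        R = r.max() / 2.0                      # one common modulus, R >= r/2 everywhere
        alpha = np.arccos(r / (2.0 * R))       # opening half-angle per pixel

        P1 = R * np.exp(1j * (phi + alpha))    # equal-modulus mask 1 (|P1| = R)
        P2 = R * np.exp(1j * (phi - alpha))    # equal-modulus mask 2 (|P2| = R)

        # Coherent superposition recovers the field: P1 + P2 = 2R cos(alpha) e^{i phi}.
        print(np.allclose(P1 + P2, F))         # True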

  4. Scattering of an attractive Bose-Einstein condensate from a barrier: Formation of quantum superposition states

    NASA Astrophysics Data System (ADS)

    Streltsov, Alexej I.; Alon, Ofir E.; Cederbaum, Lorenz S.

    2009-10-01

    Scattering in one dimension of an attractive ultracold bosonic cloud from a barrier can lead to the formation of two nonoverlapping clouds. Once formed, the clouds travel with constant velocity, in general different in magnitude from that of the incoming cloud, and do not disperse. The phenomenon and its mechanism—transformation of kinetic energy to internal energy of the scattered cloud—are obtained by solving the time-dependent many-boson Schrödinger equation. The analysis of the wave function shows that the object formed corresponds to a quantum superposition state of two distinct wave packets traveling through real space.

  5. Effect of Superposition Location of Ultrasonic Fields on Sonochemical Reaction Rate

    NASA Astrophysics Data System (ADS)

    Yasuda, Keiji; Matsuura, Kazumasa

    2013-07-01

    The effect of the superposition location of ultrasonic fields on the sonochemical reaction rate was investigated using a sonochemical reactor with four transducers at 486 kHz. The transducers were attached at the bottom of a vessel and at upper, middle, and lower positions on its side. The reaction rate of potassium iodide in aqueous solution was measured. For the combinations of the upper and bottom transducers, and of the lower and bottom transducers, a synergy effect on the sonochemical efficiency was observed. The magnitude of the synergy effect for the upper and bottom transducers increased with increasing electric power.

  6. Convolution effect on TCR log response curve and the correction method for it

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Liu, L. J.; Gao, J.

    2016-09-01

    Through-casing resistivity (TCR) logging has been successfully used in production wells for the dynamic monitoring of oil pools and of the distribution of residual oil, but its limited vertical resolution has restricted its efficiency in identifying thin beds. The vertical resolution is limited by the distortion of the vertical response of TCR logging, which is studied in this work. It was found that the vertical response curve of TCR logging is the convolution of the true formation resistivity with the convolution function of the TCR logging tool. Due to this convolution effect, the measurement error at thin beds can reach 30% or more, so the signature of a thin bed is very likely to be obscured. The convolution function of the TCR logging tool was obtained in both continuous and discrete form in this work. Through a modified Lyle-Kalman deconvolution method, the true formation resistivity can be optimally estimated, so this inverse algorithm can correct the error caused by the convolution effect and thereby improve the vertical resolution of the TCR logging tool for identifying thin beds.
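
    The forward model and a generic regularized inverse can be sketched directly. Here the tool's convolution function is modeled as a Gaussian (an illustrative assumption; the paper derives the actual function), a thin bed is convolved with it, and a Tikhonov-regularized FFT deconvolution, standing in for the modified Lyle-Kalman method, restores most of the thin-bed amplitude:

        import numpy as np

        n, h = 512, 0.1                               # samples, depth step (m)
        z = np.arange(n) * h
        rt = np.full(n, 1.0)                          # background resistivity
        rt[250:258] = 5.0                             # thin resistive bed (~0.8 m)

        g = np.exp(-0.5 * ((z - z.mean()) / 0.6) ** 2)  # assumed tool response
        g /= g.sum()

        G = np.fft.fft(np.fft.ifftshift(g))
        measured = np.real(np.fft.ifft(np.fft.fft(rt) * G))   # convolved log

        lam = 1e-3                                    # Tikhonov regularization weight
        R_est = np.fft.ifft(np.fft.fft(measured) * np.conj(G) / (np.abs(G) ** 2 + lam))

        print(measured[250:258].max(), np.real(R_est)[250:258].max())
        # The raw log underestimates the thin bed; deconvolution recovers most of it.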

  7. Intercomparison of the GOS approach, superposition T-matrix method, and laboratory measurements for black carbon optical properties during aging

    NASA Astrophysics Data System (ADS)

    He, Cenlin; Takano, Yoshi; Liou, Kuo-Nan; Yang, Ping; Li, Qinbin; Mackowski, Daniel W.

    2016-11-01

    We perform a comprehensive intercomparison of the geometric-optics surface-wave (GOS) approach, the superposition T-matrix method, and laboratory measurements for optical properties of fresh and coated/aged black carbon (BC) particles with complex structures. GOS and T-matrix calculations capture the measured optical (i.e., extinction, absorption, and scattering) cross sections of fresh BC aggregates, with 5-20% differences depending on particle size. We find that the T-matrix results tend to be lower than the measurements, due to uncertainties in the theoretical approximations of realistic BC structures, in the particle property measurements, and in the numerical computations of the method. In contrast, the GOS results are higher than the measurements (and hence the T-matrix results) for BC radii <100 nm, because of computational uncertainty for small particles, while the discrepancy reduces substantially to 10% for radii >100 nm. We find good agreement (differences <5%) between the two methods in asymmetry factors for various BC sizes and aggregating structures. For aged BC particles coated with sulfuric acid, GOS and T-matrix results closely match laboratory measurements of optical cross sections. Sensitivity calculations show that differences between the two methods in optical cross sections vary with coating structure for radii <100 nm, while differences decrease to 10% for radii >100 nm. We find small deviations (≤10%) in asymmetry factors computed from the two methods for most BC coating structures and sizes, but several complex structures show 10-30% differences. This study provides the foundation for downstream application of the GOS approach in radiative transfer and climate studies.

  8. SAS-Pro: simultaneous residue assignment and structure superposition for protein structure alignment.

    PubMed

    Shah, Shweta B; Sahinidis, Nikolaos V

    2012-01-01

    Protein structure alignment is the problem of determining an assignment between the amino-acid residues of two given proteins in a way that maximizes a measure of similarity between the two superimposed protein structures. By identifying geometric similarities, structure alignment algorithms provide critical insights into protein functional similarities. Existing structure alignment tools adopt a two-stage approach to structure alignment by decoupling and iterating between the assignment evaluation and structure superposition problems. We introduce a novel approach, SAS-Pro, which addresses assignment evaluation and structure superposition simultaneously by formulating the alignment problem as a single bilevel optimization problem. The new formulation does not require sequentiality constraints, thus generalizing the scope of the alignment methodology to include non-sequential protein alignments. We employ derivative-free optimization methodologies for searching for the global optimum of the highly nonlinear and non-differentiable RMSD function encountered in the proposed model. Alignments obtained with SAS-Pro have better RMSD values and larger lengths than those obtained from other alignment tools. For non-sequential alignment problems, SAS-Pro leads to alignments with a high degree of similarity to known reference alignments. The source code of SAS-Pro is available for download at http://eudoxus.cheme.cmu.edu/saspro/SAS-Pro.html.

  9. Noise-based logic: Binary, multi-valued, or fuzzy, with optional superposition of logic states

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.

    2009-03-01

    A new type of deterministic (non-probabilistic) computer logic system inspired by the stochasticity of brain signals is presented. The distinct values are represented by independent stochastic processes: independent voltage (or current) noises. The orthogonality of these processes provides a natural way to construct binary or multi-valued logic circuitry with an arbitrary number N of logic values using analog circuitry. Moreover, the logic values on a single wire can be made a (weighted) superposition of the N distinct logic values. Fuzzy logic is also naturally represented by a two-component superposition within the binary case (N=2). Error propagation and accumulation are suppressed. Other relevant advantages are reduced energy dissipation and leakage current problems, and robustness against circuit noise and background noises such as 1/f, Johnson, shot, and crosstalk noise. Variability problems are also non-existent because the logic value is an AC signal. A similar logic system can be built with orthogonal sinusoidal signals (different frequencies or orthogonal phases); however, that system suffers an extra 1/N-type slowdown relative to the noise-based logic system as N increases, and it is less robust against time-delay effects than the noise-based counterpart.
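
    The core mechanism, distinct logic values carried by independent (orthogonal) noise processes and superpositions read out by correlation, can be sketched in a few lines; the signal length and weights are illustrative:

        import numpy as np

        rng = np.random.default_rng(2)
        N, T = 4, 200_000                       # logic values, samples per clock period
        refs = rng.standard_normal((N, T))      # independent reference noises (orthogonal)

        w = np.array([0.0, 0.7, 0.0, 0.3])      # weighted superposition of values 1 and 3
        wire = w @ refs                         # single-wire signal

        # Read out each weight by time-averaged correlation with the reference noises.
        w_hat = (refs @ wire) / T
        print(np.round(w_hat, 2))               # approx [0.0, 0.7, 0.0, 0.3]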

  10. Generation of mesoscopic quantum superpositions through Kerr-stimulated degenerate downconversion

    NASA Astrophysics Data System (ADS)

    Paris, Matteo G. A.

    1999-12-01

    A two-step interaction scheme involving χ(2) and χ(3) nonlinear media is suggested for the generation of Schrödinger-cat-like states of a single-mode optical field. In the first step, a weak coherent signal undergoes self-Kerr phase modulation in a χ(3) crystal, leading to a Kerr kitten, namely a microscopic superposition of two coherent states with opposite phases. In the second step, this Kerr kitten enters a χ(2) crystal and, in turn, plays the role of a quantum seed for stimulated phase-sensitive amplification. The output state in the above-threshold regime consists of a quantum superposition of mesoscopically distinguishable squeezed states, i.e. an optical cat-like state. The whole setup does not rely on conditional measurements and is robust against decoherence, as only weak signals interact with the Kerr medium.

  11. The Brain's Representations May Be Compatible With Convolution-Based Memory Models.

    PubMed

    Kato, Kenichi; Caplan, Jeremy B

    2017-02-13

    Convolution is a mathematical operation used in vector models of memory that have been successful in explaining a broad range of behaviour, including memory for associations between pairs of items, an important primitive of memory upon which a broad range of everyday memory behaviour depends. However, convolution models have trouble with naturalistic item representations, which are highly auto-correlated (as one finds, e.g., with photographs), and this has cast doubt on their neural plausibility. Consequently, modellers working with convolution have used item representations composed of randomly drawn values, but introducing such noise-like representations raises the question of how those random-like values might relate to actual item properties. We propose that a compromise solution to this problem may already exist. It has long been known that the brain tends to reduce auto-correlations in its inputs. For example, centre-surround cells in the retina approximate a difference-of-Gaussians (DoG) transform. This enhances edges, but also turns natural images into images that are closer to being statistically like white noise. We show that DoG-transformed images, although not optimal compared to noise-like representations, survive the convolution model better than naturalistic images. This is a proof of principle that the pervasive tendency of the brain to reduce auto-correlations may result in representations of information that are already adequately compatible with convolution, supporting the neural plausibility of convolution-based association memory.
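
    Both ingredients are easy to demonstrate in 1-D: a DoG filter moves a highly auto-correlated "naturalistic" item toward white noise (spectral flatness closer to 1), and circular convolution binds two items into an association from which one can be retrieved by circular correlation. A sketch with illustrative filter widths:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        rng = np.random.default_rng(3)
        n = 4096

        def spectral_flatness(v):
            # Geometric over arithmetic mean of the power spectrum; 1.0 = white noise.
            p = np.abs(np.fft.rfft(v - v.mean())[1:]) ** 2
            return float(np.exp(np.mean(np.log(p + 1e-300))) / np.mean(p))

        white = rng.standard_normal(n)                          # noise-like item
        natural = gaussian_filter1d(rng.standard_normal(n), 8)  # auto-correlated item
        dog = gaussian_filter1d(natural, 1) - gaussian_filter1d(natural, 4)  # DoG

        print(spectral_flatness(natural), spectral_flatness(dog), spectral_flatness(white))
        # DoG filtering moves the item toward white noise, though not all the way.

        # Convolution-based association memory: bind by circular convolution,
        # retrieve by circular correlation (the approximate inverse).
        a, b = dog / np.linalg.norm(dog), white / np.linalg.norm(white)
        trace = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
        b_hat = np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(a))))
        print(float(b_hat @ b) / np.linalg.norm(b_hat))  # retrieval cosine, far above chance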

  12. Fast vision through frameless event-based sensing and convolutional processing: application to texture recognition.

    PubMed

    Perez-Carrasco, Jose Antonio; Acha, Begona; Serrano, Carmen; Camunas-Mesa, Luis; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2010-04-01

    Address-event representation (AER) is an emerging hardware technology which shows high potential for providing, in the near future, a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level, event-based, frameless manner. As a result, vision processing is practically simultaneous with vision sensing, since there is no need to wait for full frames to be sensed. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolutional chips, which have revealed very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates with AER hardware Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.

  13. Punctured Parallel and Serial Concatenated Convolutional Codes for BPSK/QPSK Channels

    NASA Technical Reports Server (NTRS)

    Acikel, Omer Fatih

    1999-01-01

    As available bandwidth for communication applications becomes scarce, bandwidth-efficient modulation and coding schemes become ever more important. Since their discovery in 1993, turbo codes (parallel concatenated convolutional codes) have been the center of attention in the coding community because of their bit error rate performance near the Shannon limit. Serial concatenated convolutional codes have also been shown to be as powerful as turbo codes. In this dissertation, we introduce algorithms for designing bandwidth-efficient rate r = k/(k + 1), k = 2, 3,..., 16, parallel and rate 3/4, 7/8, and 15/16 serial concatenated convolutional codes via puncturing for BPSK/QPSK (Binary Phase Shift Keying/Quadrature Phase Shift Keying) channels. Both parallel and serial concatenated convolutional codes initially have a steep bit error rate versus signal-to-noise ratio slope (called the "cliff region"). However, this steep slope changes to a moderate slope with increasing signal-to-noise ratio, where the slope is characterized by the weight spectrum of the code. The region after the cliff region is called the "error rate floor", which dominates the behavior of these codes at moderate to high signal-to-noise ratios. Our goal is to design high-rate parallel and serial concatenated convolutional codes while minimizing the error rate floor effect. The design algorithm includes an interleaver enhancement procedure and finds the polynomial sets (only for parallel concatenated convolutional codes) and the puncturing schemes that achieve the lowest bit error rate performance around the floor for the code rates of interest.
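
    Puncturing itself is mechanical enough to show end-to-end: encode with a rate-1/2 mother code, then delete output bits according to a periodic puncturing pattern to raise the rate. A sketch using the classic (7,5) feed-forward mother code and an assumed rate-3/4 pattern; the dissertation's optimized polynomial sets and puncturing schemes differ:

        import numpy as np

        def conv_encode_r12(bits, g1=0b111, g2=0b101, K=3):
            """Rate-1/2 feed-forward convolutional encoder, generators (7,5) octal."""
            state, out = 0, []
            for b in bits:
                state = ((state << 1) | int(b)) & ((1 << K) - 1)
                out += [bin(state & g1).count("1") & 1, bin(state & g2).count("1") & 1]
            return np.array(out)

        # Periodic puncturing matrix for rate 3/4: of the 6 mother-code bits produced
        # per 3 inputs, keep 4 (an assumed pattern, not the optimized one).
        pattern = np.array([[1, 1, 0],     # keep v1 at t = 0, 1
                            [1, 0, 1]])    # keep v2 at t = 0, 2

        msg = np.random.default_rng(4).integers(0, 2, 12)
        coded = conv_encode_r12(msg).reshape(-1, 2).T          # rows: v1, v2 streams
        mask = np.tile(pattern, (1, coded.shape[1] // 3)).astype(bool)
        punctured = coded.T[mask.T]                            # transmitted bits

        print(len(msg), coded.size, punctured.size)            # 12, 24, 16 -> rate 3/4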

  14. A deterministic partial differential equation model for dose calculation in electron radiotherapy

    NASA Astrophysics Data System (ADS)

    Duclous, R.; Dubroca, B.; Frank, M.

    2010-07-01

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung

  15. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    NASA Astrophysics Data System (ADS)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-03-01

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  16. Blind separation of convolutive sEMG mixtures based on independent vector analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaomei; Guo, Yina; Tian, Wenyan

    2015-12-01

    An independent vector analysis (IVA) method based on a variable-step gradient algorithm is proposed in this paper. In accordance with the physiological properties of sEMG, the IVA model is applied to the frequency-domain separation of convolutive sEMG mixtures to extract motor unit action potential information from sEMG signals. The decomposition capability of the proposed method is compared with that of independent component analysis (ICA), and experimental results show that the variable-step gradient IVA method outperforms ICA in the blind separation of convolutive sEMG mixtures.

  17. Implementation of large kernel 2-D convolution in limited FPGA resource

    NASA Astrophysics Data System (ADS)

    Zhong, Sheng; Li, Yang; Yan, Luxin; Zhang, Tianxu; Cao, Zhiguo

    2007-12-01

    2-D convolution is a simple mathematical operation which is fundamental to many common image processing operators. Using an FPGA to implement the convolver can greatly reduce the DSP's heavy burden in signal processing, but with limited resources an FPGA can only implement a convolver with a small 2-D kernel. In this paper, a FIFO-type line delayer is presented to serve as the data buffer for the convolution, reducing data-fetching operations. A finite state machine is applied to control the reuse of the multiplier and adder arrays. With these two techniques, a resource-limited FPGA can be used to implement the large-kernel convolvers commonly used in image processing systems.
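
    The FIFO line delayer can be emulated behaviorally: K-1 buffered rows keep the most recent image lines so that each incoming row completes a KxK window without refetching pixels, which is the essence of the data-reuse scheme. A sketch with an illustrative kernel size (correlation-form windowed multiply-accumulate, as is typical in image processing):

        from collections import deque
        import numpy as np

        def stream_convolve(image, kernel):
            """2-D 'valid' convolution fed one row at a time through K-1 line FIFOs."""
            K = kernel.shape[0]
            lines = deque(maxlen=K)            # FIFO line delayer: last K rows only
            out_rows = []
            for row in image:                  # rows arrive in raster order
                lines.append(row)
                if len(lines) == K:            # a full window of K rows is available
                    window = np.stack(lines)
                    out_rows.append([np.sum(window[:, j:j + K] * kernel)
                                     for j in range(image.shape[1] - K + 1)])
            return np.array(out_rows)

        rng = np.random.default_rng(5)
        img, ker = rng.random((16, 16)), rng.random((5, 5))
        ref = np.array([[np.sum(img[i:i + 5, j:j + 5] * ker) for j in range(12)]
                        for i in range(12)])
        print(np.allclose(stream_convolve(img, ker), ref))   # True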

  18. Discrete singular convolution mapping methods for solving singular boundary value and boundary layer problems

    NASA Astrophysics Data System (ADS)

    Pindza, Edson; Maré, Eben

    2017-03-01

    A modified discrete singular convolution method is proposed. The method is based on single exponential (SE) and double exponential (DE) transformations to speed up the convergence of existing methods. Numerical computations are performed on a wide variety of singular boundary value and singularly perturbed problems in one and two dimensions. The results obtained from discrete singular convolution methods based on single and double exponential transformations are compared with each other and with existing methods. Numerical results confirm that these methods are highly efficient and accurate in solving singular and regular problems. Moreover, the method can be applied to a wide class of nonlinear partial differential equations.

  19. High-rate systematic recursive convolutional encoders: minimal trellis and code search

    NASA Astrophysics Data System (ADS)

    Benchimol, Isaac; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.

    2012-12-01

    We consider high-rate systematic recursive convolutional encoders to be adopted as constituent encoders in turbo schemes. Douillard and Berrou showed that, despite its complexity, the construction of high-rate turbo codes by means of high-rate constituent encoders is advantageous over the construction based on puncturing rate-1/2 constituent encoders. To reduce the decoding complexity of high-rate codes, we introduce the construction of the minimal trellis for a systematic recursive convolutional encoding matrix. A code search is conducted and examples are provided which indicate that a more finely grained decoding complexity-error performance trade-off is obtained.

  20. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
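
    The reoptimization loop is easy to emulate in 1-D: model the real penumbra with an error-function edge whose width is a free beam-model parameter, convolve the calculated profile with the same detector response assumed for the measurement, and tune the width until the convolved profile matches. A toy sketch; the Gaussian detector response and all widths are illustrative assumptions:

        import numpy as np
        from math import erf

        x = np.linspace(-30.0, 30.0, 601)                    # off-axis distance (mm)

        def profile(width):
            # Error-function penumbra model of a 20 mm half-field edge.
            return np.array([0.5 * (1.0 - erf((xi - 20.0) / width)) for xi in x])

        def detector_blur(p, sigma=2.4):                     # assumed chamber response
            g = np.exp(-0.5 * ((x - x.mean()) / sigma) ** 2); g /= g.sum()
            return np.convolve(p, g, mode="same")

        true_width = 3.0
        measured = detector_blur(profile(true_width))        # what the chamber reports

        # Reoptimize the penumbra parameter so the *convolved* model matches the
        # measurement; the bare model then matches the real (unblurred) profile.
        widths = np.linspace(1.0, 6.0, 101)
        errs = [np.sum((detector_blur(profile(w)) - measured) ** 2) for w in widths]
        print(widths[int(np.argmin(errs))])                  # ~3.0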

  1. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    PubMed

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk^2 to nk log(k), and has potential application to the all-pairs shortest paths problem.
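
    The numerical trick can be sketched directly: max_k u[k]v[m-k] is approximated by the p-norm (sum_k (u[k]v[m-k])^p)^(1/p), and the inner sums for all m at once form an ordinary convolution of the element-wise p-th powers, computable by FFT in O(k log(k)). A minimal sketch for a single moderate p; the paper refines this idea for numerical stability:

        import numpy as np
        from scipy.signal import fftconvolve

        def max_convolve_numeric(u, v, p=64):
            # p-norm trick: an ordinary (FFT) convolution of p-th powers, then the
            # p-th root; normalizing by the max first keeps u**p from underflowing.
            s = max(u.max(), v.max())
            w = fftconvolve((u / s) ** p, (v / s) ** p)
            return s * s * np.maximum(w, 0.0) ** (1.0 / p)

        def max_convolve_exact(u, v):
            # Brute-force O(k^2) reference: out[m] = max_k u[k] * v[m - k].
            out = np.zeros(len(u) + len(v) - 1)
            for i, ui in enumerate(u):
                out[i:i + len(v)] = np.maximum(out[i:i + len(v)], ui * v)
            return out

        rng = np.random.default_rng(6)
        u, v = rng.random(200), rng.random(300)
        err = np.abs(max_convolve_numeric(u, v) - max_convolve_exact(u, v))
        print(err.max())  # small; the p-norm overestimates by a factor of at most k**(1/p)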

  2. Use of Superposition Models to Simulate Possible Depletion of Colorado River Water by Ground-Water Withdrawal

    USGS Publications Warehouse

    Leake, Stanley A.; Greer, William; Watt, Dennis; Weghorst, Paul

    2008-01-01

    According to the 'Law of the River', wells that draw water from the Colorado River by underground pumping need an entitlement for the diversion of water from the Colorado River. Consumptive use can occur through direct diversions of surface water, as well as through withdrawal of water from the river by underground pumping. To develop methods for evaluating the need for entitlements for Colorado River water, an assessment of possible depletion of water in the Colorado River by pumping wells is needed. Possible methods include simple analytical models and complex numerical ground-water flow models. For this study, an intermediate approach was taken that uses numerical superposition models with complex horizontal geometry, simple vertical geometry, and constant aquifer properties. The six areas modeled include larger extents of the previously defined river aquifer from the Lake Mead area to the Yuma area. For the modeled areas, a low estimate of transmissivity and an average estimate of transmissivity were derived from statistical analyses of transmissivity data. Aquifer storage coefficient, or specific yield, was selected on the basis of results of a previous study in the Yuma area. The USGS program MODFLOW-2000 (Harbaugh and others, 2000) was used with uniform 0.25-mile grid spacing along rows and columns. Calculations of depletion of river water by wells were made for a time of 100 years since the onset of pumping. A computer program was set up to run the models repeatedly, each time with a well in a different location. Maps were constructed for at least two transmissivity values for each of the modeled areas. The modeling results, based on the selected transmissivities, indicate that low values of depletion in 100 years occur mainly in parts of side valleys that are more than a few tens of miles from the Colorado River.
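
    For intuition about what such superposition models compute, the classical Glover-Balmer analytical solution for an idealized, fully penetrating stream gives the fraction of pumping captured from the river as erfc(sqrt(d^2 S / (4 T t))). A sketch with illustrative parameter values; the report's MODFLOW-2000 models use real geometry and estimated properties:

        from math import erfc, sqrt

        def depletion_fraction(d_ft, T_ft2_per_day, S, t_days):
            """Glover-Balmer fraction of well pumping derived from the river."""
            return erfc(sqrt(d_ft ** 2 * S / (4.0 * T_ft2_per_day * t_days)))

        T, S = 5000.0, 0.15            # transmissivity (ft^2/day), specific yield
        t = 100 * 365.25               # 100 years of pumping, in days

        for miles in (1, 5, 10, 20, 40):
            print(miles, round(depletion_fraction(miles * 5280.0, T, S, t), 3))
        # Depletion stays high for tens of miles at 100 years, dropping off far away.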

  3. Superposition of nonparaxial vectorial complex-source spherically focused beams: Axial Poynting singularity and reverse propagation

    NASA Astrophysics Data System (ADS)

    Mitri, F. G.

    2016-08-01

    In this work, counterintuitive effects such as the generation of an axial (i.e., along the direction of wave motion) zero-energy flux density (i.e., axial Poynting singularity) and reverse (i.e., negative) propagation of nonparaxial quasi-Gaussian electromagnetic (EM) beams are examined. Generalized analytical expressions for the EM field's components of a coherent superposition of two high-order quasi-Gaussian vortex beams of opposite handedness and different amplitudes are derived based on the complex-source-point method, stemming from Maxwell's vector equations and the Lorenz gauge condition. The general solutions exhibiting unusual effects satisfy the Helmholtz and Maxwell's equations. The EM beam components are characterized by nonzero integer degree and order (n, m), respectively, an arbitrary waist w0, a diffraction convergence length known as the Rayleigh range zR, and a weighting (real) factor 0 ≤ α ≤ 1 that describes the transition of the beam from a purely vortex (α = 0) to a nonvortex (α = 1) type. An attractive feature of this superposition is the description of strongly focused (or strongly divergent) wave fields. Computations of the EM power density as well as the linear and angular momentum density fluxes illustrate the analysis with particular emphasis on the polarization states of the vector potentials forming the beams and the weight of the coherent beam superposition causing the transition from the vortex to the nonvortex type. Should some conditions determined by the polarization state of the vector potentials and the beam parameters be met, an axial zero-energy flux density is predicted in addition to a negative retrograde propagation effect. Moreover, rotation reversal of the angular momentum flux density with respect to the beam handedness is anticipated, suggesting the possible generation of negative (left-handed) torques. The results are particularly useful in applications involving the design of strongly focused optical laser

  4. Adaptive superposition of finite element meshes in linear and nonlinear dynamic analysis

    NASA Astrophysics Data System (ADS)

    Yue, Zhihua

    2005-11-01

    The numerical analysis of transient phenomena in solids, for instance, wave propagation and structural dynamics, is a very important and active area of study in engineering. Despite the current evolutionary state of modern computer hardware, practical analysis of large scale, nonlinear transient problems requires the use of adaptive methods where computational resources are locally allocated according to the interpolation requirements of the solution form. Adaptive analysis of transient problems involves obtaining solutions at many different time steps, each of which requires a sequence of adaptive meshes. Therefore, the execution speed of the adaptive algorithm is of paramount importance. In addition, transient problems require that the solution must be passed from one adaptive mesh to the next adaptive mesh with a bare minimum of solution-transfer error since this form of error compromises the initial conditions used for the next time step. A new adaptive finite element procedure (s-adaptive) is developed in this study for modeling transient phenomena in both linear elastic solids and nonlinear elastic solids caused by progressive damage. The adaptive procedure automatically updates the time step size and the spatial mesh discretization in transient analysis, achieving the accuracy and the efficiency requirements simultaneously. The novel feature of the s-adaptive procedure is the original use of finite element mesh superposition to produce spatial refinement in transient problems. The use of mesh superposition enables the s-adaptive procedure to completely avoid the need for cumbersome multipoint constraint algorithms and mesh generators, which makes the s-adaptive procedure extremely fast. Moreover, the use of mesh superposition enables the s-adaptive procedure to minimize the solution-transfer error. In a series of different solid mechanics problem types including 2-D and 3-D linear elastic quasi-static problems, 2-D material nonlinear quasi-static problems

  5. Graded-Index Optics are Matched to Optical Geometry in the Superposition Eyes of Scarab Beetles

    NASA Astrophysics Data System (ADS)

    McIntyre, P.; Caveney, S.

    1985-11-01

    Detailed measurements were made of the gradients of refractive index (g.r.i.) and relevant optical properties of the lens components in the ventral superposition eyes of three crepuscular species of the dung-beetle genus Onitis (Scarabaeinae). Each ommatidial lens has two components, a corneal facet and a crystalline cone; in both of these, the gradients provide a significant proportion of the refractive power. The spatial relationship between the lenses and the retina (optical geometry) was also determined. A computer ray-trace model based on these data was used to analyse the optical properties of the lenses and of the eye as a whole. Ray traces were done in two and three dimensions. The ommatidial lenses in all three species are afocal g.r.i. telescopes of low angular magnification. Parallel incident rays emerge approximately parallel for all angles of incidence up to the maximum. The superposition image of a distant point source is a small patch of light about the size of a rhabdom. There are obvious differences in the lens properties of the three species, most significantly in the shape of the refractive-index gradients in the crystalline cone, in the extent of the g.r.i. region in the two lens components and in the front-surface curvature of the corneal facet lens. These give rise to different angular magnifications M of the ommatidial lenses, the values for the three species being 1.7, 1.3, 1.0. This variation in M is matched by a variation in optical geometry, most evident in the different clear-zone widths. As a result, the level of the best superposition image lies close to the retina in the model eyes of all three species. The angular magnification also sets the maximum aperture or pupil of the eye and hence the brightness of the image on the retina. The smaller M, the larger the aperture and the brighter the image. By adopting a suitable value for M and the appropriate eye geometry, an eye can set image brightness and hence sensitivity within a certain

  6. Superposition-additive approach in the description of thermodynamic parameters of formation and clusterization of substituted alkanes at the air/water interface.

    PubMed

    Vysotsky, Yu B; Belyaeva, E A; Fomina, E S; Vasylyev, A O; Vollhardt, D; Fainerman, V B; Aksenenko, E V; Miller, R

    2012-12-01

    The superposition-additive approach developed previously was shown to be applicable for the calculations of the thermodynamic parameters of formation and atomization of conjugate systems, their dipole polarizability, molecular diamagnetic susceptibility, π-electronic ring currents, etc. In the present work, the applicability of this approach for the calculation of the thermodynamic parameters of formation and clusterization at the water/air interface of alkanes, fatty alcohols, thioalcohols, amines, nitriles, fatty acids (C(n)H(2n+1)X, X is the functional group) and cis-unsaturated carboxylic acids (C(n)H(2n-1)COOH) is studied. Using the proposed approach the thermodynamic quantities determined agree well with the available data, either calculated using the semiempirical (PM3) quantum chemical method, or obtained in experiments. In particular, for enthalpy and Gibbs' energy of the formation of substituted alkane monomers from the elementary substances, and their absolute entropy, the standard deviations of the values calculated according to the superposition-additive scheme with the mutual superimposition domain C(n-2)H(2n-4) (n is the number of carbon atoms in the alkyl chain) from the results of PM3 calculations for alkanes, alcohols, thioalcohols, amines, fatty acids, nitriles and cis-unsaturated carboxylic acids are respectively: 0.05, 0.004, 2.87, 0.02, 0.01, 0.77, and 0.01 kJ/mol for enthalpy; 2.32, 5.26, 4.49, 0.53, 1.22, 1.02, 5.30 J/(mol K) for absolute entropy; 0.69, 1.56, 3.82, 0.15, 0.37, 0.69, 1.58 kJ/mol for Gibbs' energy, whereas the deviations from the experimental data are: 0.52, 5.75, 1.40, 1.00, 4.86 kJ/mol; 0.52, 0.63, 1.40, 6.11, 2.21 J/(mol K); 2.52, 5.76, 1.58, 1.78, 4.86 kJ/mol, respectively (for nitriles and cis-unsaturated carboxylic acids experimental data are not available). The proposed approach provides also quite accurate estimates of enthalpy, entropy and Gibbs' energy of boiling and melting, critical temperatures and standard heat
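
    One way to read the superposition-additive scheme with mutual superimposition domain C(n-2)H(2n-4) is as composing a target molecule from two overlapping C(n-1) fragments and subtracting the doubly counted overlap, P(C(n)) ~ 2 P(C(n-1)) - P(C(n-2)). A quick arithmetic check using approximate literature gas-phase formation enthalpies for n-alkanes:

        # Superposition-additive estimate: P(C_n) ~ P(C_{n-1}) + P(C_{n-1}) - P(C_{n-2}),
        # i.e. two C_{n-1} fragments superimposed on their common C_{n-2} core.
        dHf = {3: -104.7, 4: -125.6}      # kJ/mol, approximate values for propane, butane

        predicted_pentane = 2 * dHf[4] - dHf[3]
        print(predicted_pentane)          # about -146.5 kJ/mol vs roughly -146.9 measured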

  7. Fully phase multiple-image encryption based on superposition principle and the digital holographic technique

    NASA Astrophysics Data System (ADS)

    Wang, Xiaogang; Zhao, Daomu

    2012-10-01

    We propose an optoelectronic image encryption and decryption technique based on the coherent superposition principle and digital holography. With the help of a chaotic random phase mask (CRPM) generated by using the logistic map, a real-valued primary image is encoded into a phase-only version and then recorded as an encoded hologram. As for multiple-image encryption, only one digital hologram is transmitted as the encrypted result, by using a multiplexing technique that changes the reference wave angle. The bifurcation parameters, the initial values for the logistic maps, the number of removed elements, and the reference wave parameters are kept and transmitted as private keys. Both the encryption and decryption processes can be implemented in an opto-digital or fully digital manner. Simulation results are given to test the feasibility of the proposed approach.
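
    Generating the chaotic random phase mask is straightforward to sketch: iterate the logistic map x(n+1) = mu*x(n)*(1 - x(n)) in its chaotic regime, discard a transient, and map the orbit to unit-modulus phase factors, so that the bifurcation parameter and initial value act as keys. The mask size and parameter values below are illustrative:

        import numpy as np

        def chaotic_random_phase_mask(shape, mu=3.9999, x0=0.37, transient=1000):
            """CRPM from the logistic map; (mu, x0) serve as the private keys."""
            n = int(np.prod(shape))
            x, orbit = x0, np.empty(n)
            for _ in range(transient):            # discard the transient
                x = mu * x * (1.0 - x)
            for i in range(n):
                x = mu * x * (1.0 - x)
                orbit[i] = x
            return np.exp(2j * np.pi * orbit).reshape(shape)  # unit-modulus mask

        mask = chaotic_random_phase_mask((256, 256))
        print(np.allclose(np.abs(mask), 1.0))     # True: pure phase, inverted by conj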

  8. Multi-level manual and autonomous control superposition for intelligent telerobot

    NASA Technical Reports Server (NTRS)

    Hirai, Shigeoki; Sato, T.

    1989-01-01

    Space telerobots are recognized to require cooperation with human operators in various ways. Multi-level manual and autonomous control superposition in telerobot task execution is described. The object model, the structured master-slave manipulation system, and the motion understanding system are proposed to realize this concept. The object model offers interfaces for task-level and object-level human intervention. The structured master-slave manipulation system offers interfaces for motion-level human intervention. The motion understanding system maintains the consistency of knowledge through all the levels, which supports the robot's autonomy while accepting human intervention. Superposed execution of teleoperational tasks at multiple levels realizes intuitive and robust task execution for a wide variety of objects and in changing environments. Performance is demonstrated on several examples of operating chemical apparatus.

  9. Stability in parametric resonance of axially accelerating beams constituted by Boltzmann's superposition principle

    NASA Astrophysics Data System (ADS)

    Yang, Xiao-Dong; Chen, Li-Qun

    2006-01-01

    Stability in transverse parametric vibration of axially accelerating viscoelastic beams is investigated. The governing equation is derived from Newton's second law, Boltzmann's superposition principle, and the geometrical relation. When the axial speed is a constant mean speed with small harmonic variations, the governing equation can be treated as a continuous gyroscopic system with small periodically parametric excitations and a damping term. The method of multiple scales is applied directly to the governing equation without discretization. The stability conditions are obtained for combination and principal parametric resonance. Numerical examples demonstrate that an increase of the viscosity coefficient causes a larger instability threshold of the speed fluctuation amplitude for a given detuning parameter and a smaller instability range of the detuning parameter for a given speed fluctuation amplitude. The instability region is much larger in the lower-order principal resonance than in the higher-order one.

  10. Similarity recognition of molecular structures by optimal atomic matching and rotational superposition.

    PubMed

    Helmich, Benjamin; Sierka, Marek

    2012-01-15

    An algorithm for similarity recognition of molecules and molecular clusters is presented, which also establishes the optimum matching among atoms of different structures. In the first step of the algorithm, a set of molecules is coarsely superimposed by transforming them into a common reference coordinate system. The optimum atomic matching among structures is then found with the help of the Hungarian algorithm. For this, pairs of structures are represented as complete bipartite graphs with a weight function that uses intermolecular atomic distances. In the final step, a rotational superposition method is applied using the optimum atomic matching found. This yields the minimum root mean square deviation of intermolecular atomic distances with respect to arbitrary rotation and translation of the molecules. Combined with an effective similarity prescreening method, our algorithm shows robustness and an effectively quadratic scaling of computational time with the number of atoms.
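
    The two final steps map directly onto standard library routines; a minimal sketch, assuming equal atom counts, ignoring element types, and omitting the coarse pre-alignment and the prescreening (scipy's linear_sum_assignment is the Hungarian step; the rotation is a Kabsch superposition):

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      def match_and_superpose(a: np.ndarray, b: np.ndarray):
          """a, b: (N, 3) coordinates of two roughly pre-aligned structures."""
          a = a - a.mean(axis=0)                    # center both structures
          b = b - b.mean(axis=0)
          cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
          row, col = linear_sum_assignment(cost)    # optimal atomic matching
          b = b[col]
          u, _, vt = np.linalg.svd(a.T @ b)         # Kabsch rotational superposition
          d = np.sign(np.linalg.det(u @ vt))        # guard against reflections
          rot = u @ np.diag([1.0, 1.0, d]) @ vt
          rmsd = np.sqrt(np.mean(np.sum((a - b @ rot.T) ** 2, axis=1)))
          return rot, rmsd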

  11. Nonlocal quantum macroscopic superposition in a high-thermal low-purity state.

    PubMed

    Brezinski, Mark E; Liu, Bin

    2008-12-16

    Quantum state exchange between light and matter is an important ingredient for future quantum information networks as well as other applications. Photons are the fastest and simplest carriers of information for transmission, but in general it is difficult to localize and store photons, so one usually prefers matter as the quantum memory element. Macroscopic superposition and nonlocal quantum interactions have received considerable interest for this purpose over recent years, in fields ranging from quantum computers to cryptography, in addition to providing major insights into physical laws. However, these experiments are generally performed either with equipment or under conditions that are unrealistic for practical applications. Ideally, the two can be combined using conventional equipment and conditions to generate a "quantum teleportation"-like state, particularly with a very small amount of purity existing in an overall highly mixed thermal state (relatively low decoherence at high temperatures). In this study we used an experimental design to demonstrate these principles. We performed optical coherence tomography (OCT), using a thermal source at room temperature, of a specifically designed target in the sample arm. Here, position uncertainty (i.e., dispersion) was induced in the reference arm. In the sample arm (target) we placed two glass plates separated by a different medium while altering the position uncertainty in the reference arm. This resulted in a chirped signal between the glass plate reflective surfaces in the combined interferogram. The chirping frequency, as measured by the fast Fourier transform (FFT), varies with the medium between the plates, which is a nonclassical phenomenon. These results are statistically significant and arise from a superposition between the glass surface and the medium with increasing position uncertainty, a true quantum-mechanical phenomenon produced by photon pressure from two-photon interference. The differences in

  12. Limitations of a measurement-assisted optomechanical route to quantum macroscopicity of superposition states

    NASA Astrophysics Data System (ADS)

    Carlisle, Andrew; Kwon, Hyukjoon; Jeong, Hyunseok; Ferraro, Alessandro; Paternostro, Mauro

    2015-08-01

    Optomechanics is currently believed to provide a promising route towards the achievement of genuine quantum effects at the large, massive-system scale. By using a recently proposed figure of merit that is well suited to address continuous-variable systems, in this paper we analyze the requirements needed for the state of a mechanical mode (embodied by an end-cavity cantilever or a membrane placed within an optical cavity) to be qualified as macroscopic. We show that according to the phase-space-based criterion that we have chosen for our quantitative analysis, the state achieved through strong single-photon radiation-pressure coupling to a quantized field of light and conditioned by measurements operated on the latter might be interpreted as macroscopically quantum. In general, though, genuine macroscopic quantum superpositions appear to be possible only under quite demanding experimental conditions.

  13. Digital coherent superposition of optical OFDM subcarrier pairs with Hermitian symmetry for phase noise mitigation.

    PubMed

    Yi, Xingwen; Chen, Xuemei; Sharma, Dinesh; Li, Chao; Luo, Ming; Yang, Qi; Li, Zhaohui; Qiu, Kun

    2014-06-02

    Digital coherent superposition (DCS) provides an approach to combat fiber nonlinearities by trading off spectral efficiency. By analogy, we extend the concept of DCS to optical OFDM subcarrier pairs with Hermitian symmetry to combat linear and nonlinear phase noise. At the transmitter, we simply use a real-valued OFDM signal to drive a Mach-Zehnder (MZ) intensity modulator biased at the null point, and the so-generated optical OFDM signal is Hermitian in the frequency domain. At the receiver, after conventional OFDM signal processing, we conduct DCS of the optical OFDM subcarrier pairs, which requires only conjugation and summation. We show that the inter-carrier interference (ICI) due to phase noise can be reduced because of the Hermitian symmetry. In a simulation, this method improves the tolerance to laser phase noise. In a nonlinear WDM transmission experiment, this method also achieves better performance under the influence of cross-phase modulation (XPM).
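
    The conjugate-and-sum step at the receiver is a one-liner on the demodulated subcarriers; a minimal sketch, assuming channel equalization has already been done and ignoring the DC and Nyquist bins:

      import numpy as np

      def dcs_hermitian_pairs(y: np.ndarray) -> np.ndarray:
          """y: length-N FFT output; pairs bin k with bin N-k (X[N-k] = X*[k])."""
          n = len(y)
          k = np.arange(1, n // 2)
          return 0.5 * (y[k] + np.conj(y[n - k]))   # phase-noise ICI partly cancels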

  14. Enhancing quantum entanglement for continuous variables by a coherent superposition of photon subtraction and addition

    SciTech Connect

    Lee, Su-Yong; Kim, Ho-Joon; Ji, Se-Wan; Nha, Hyunchul

    2011-07-15

    We investigate how the entanglement properties of a two-mode state can be improved by performing a coherent superposition operation ta + ra† of photon subtraction and addition, proposed by Lee and Nha [Phys. Rev. A 82, 053812 (2010)], on each mode. We show that the degree of entanglement, the Einstein-Podolsky-Rosen-type correlation, and the performance of quantum teleportation can all be enhanced for the output state when the coherent operation is applied to a two-mode squeezed state. The effects of the coherent operation are more prominent than those of the mere photon subtraction a and addition a†, particularly in the small-squeezing regime, whereas the optimal operation becomes the photon subtraction (the case r = 0) in the large-squeezing regime.

  15. Practical method using superposition of individual magnetic fields for initial arrangement of undulator magnets

    SciTech Connect

    Tsuchiya, K.; Shioya, T.

    2015-04-15

    We have developed a practical method for determining an excellent initial arrangement of magnetic arrays for a pure-magnet Halbach-type undulator. In this method, the longitudinal magnetic field distribution of each magnet is measured using a moving Hall probe system along the beam axis with a high positional resolution. The initial arrangement of magnetic arrays is optimized and selected by analyzing the superposition of all distribution data in order to achieve adequate spectral quality for the undulator. We applied this method to two elliptically polarizing undulators (EPUs), called U#16-2 and U#02-2, at the Photon Factory storage ring (PF ring) in the High Energy Accelerator Research Organization (KEK). The measured field distribution of the undulator was demonstrated to be excellent for the initial arrangement of the magnet array, and this method saved a great deal of effort in adjusting the magnetic fields of EPUs.

  16. Superposition of Cohesive Elements to Account for R-Curve Toughening in the Fracture of Composites

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Rose, Cheryl A.; Song, Kyongchan

    2008-01-01

    The relationships between a resistance curve (R-curve), the corresponding fracture process zone length, the shape of the traction/displacement softening law, and the propagation of fracture are examined in the context of the through-the-thickness fracture of composite laminates. A procedure that accounts for R-curve toughening mechanisms by superposing bilinear cohesive elements is proposed. Simple equations are developed for determining the separation of the critical energy release rates and the strengths that define the independent contributions of each bilinear softening law in the superposition. It is shown that the R-curve measured with a Compact Tension specimen test can be reproduced by superposing two bilinear softening laws. It is also shown that an accurate representation of the R-curve is essential for predicting the initiation and propagation of fracture in composite laminates.

  18. Precise position measurement of an atom using superposition of two standing wave fields

    NASA Astrophysics Data System (ADS)

    Idrees, M.; Bacha, B. A.; Javed, M.; Ullah, S. A.

    2017-04-01

    We present a scheme that provides a strong basis for precise localization of atoms, using the superposition of two standing-wave fields in a three-level Λ-type gain-assisted model. We show how atomic interference and diffraction occur at a particular node or antinode region of the standing-wave fields. Two, three, four and even single localized peaks of atoms are observed in both the full-wavelength and sub-half-wavelength domains, with 100 percent localization probability in a single peak. Dark lines appearing in the node region of the standing-wave fields show strong evidence of atomic destructive interference. The proposed scheme allows for efficient localization of an atom at a particular point.

  19. Effects of superposition of detector solenoid and FFS quadrupole fields in SLC and correction methods

    SciTech Connect

    Murray, J.J.

    1983-07-25

    For the so-called superconducting FFS option with L* = 2.2 m, the MK2 solenoid does not overlap Q1, the FFS quad nearest the IP. For the permanent magnet option with L* = 0.75 m, the MK2 solenoid would overlap both Q1 and Q2. In either case an 8 m long solenoid, contemplated for the SLD detector, would overlap both Q1 and Q2. The solenoid field cannot be shielded, so in an overlap region one will have a superposition of solenoid and quadrupole fields. Recently the question was raised: what are the optical consequences when the solenoid and quad fields are superimposed? The question had not been considered before, but rough estimates immediately suggested that there might indeed be ugly consequences in terms of an enlargement of the spot size at the IP. The purpose of this note is to answer the question quantitatively and to consider methods of correcting these consequences.

  20. Numerical model for macroscopic quantum superpositions based on phase-covariant quantum cloning

    NASA Astrophysics Data System (ADS)

    Buraczewski, A.; Stobińska, M.

    2012-10-01

    Macroscopically populated quantum superpositions pose the question to what extent the macroscopic world obeys quantum mechanical laws. Recently, such superpositions for light, generated by an optimal quantum cloner, have been demonstrated. They are of fundamental and technological interest. We present numerical methods useful for modeling these states. Their properties are governed by a Gaussian hypergeometric function, which cannot be reduced to either elementary or easily tractable functions. We discuss the method of efficient computation of this function for half-integer parameters and a moderate value of its argument. We show how to dynamically estimate a cutoff for infinite sums involving this function performed over its parameters. Our algorithm exceeds double precision and is parallelizable. Depending on the experimental parameters it chooses one of several ways of summation to achieve the best efficiency. The methods presented here can be adjusted for analysis of similar experimental schemes. Program summary Program title: MQSVIS Catalogue identifier: AEMR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1643 No. of bytes in distributed program, including test data, etc.: 13212 Distribution format: tar.gz Programming language: C with OpenMP extensions (main numerical program), Python (helper scripts). Computer: Modern PC (tested on AMD and Intel processors), HP BL2x220. Operating system: Unix/Linux. Has the code been vectorized or parallelized?: Yes (OpenMP). RAM: 200 MB for a single run for a 1000×1000 tile Classification: 4.15, 18. External routines: OpenMP Nature of problem: Recently, macroscopically populated quantum superpositions for light, generated by an optimal quantum cloner, have
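
    A toy illustration of such a dynamically truncated sum, assuming scipy's hyp2f1 for the Gauss hypergeometric function (the term pattern and weights here are hypothetical placeholders, not the MQSVIS series):

      from scipy.special import hyp2f1

      def summed_hyp2f1(z: float, tol: float = 1e-15, n_max: int = 10_000) -> float:
          """Sum hypothetical hyp2f1 terms over a parameter until negligible."""
          total = 0.0
          for n in range(n_max):
              term = hyp2f1(0.5 + n, 0.5, 1.0 + n, z) / 2.0 ** n   # placeholder term
              total += term
              if abs(term) < tol * max(1.0, abs(total)):           # dynamic cutoff
                  break
          return total

      print(summed_hyp2f1(0.25))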

  1. Superposition and entanglement of mesoscopic squeezed vacuum states in cavity QED

    SciTech Connect

    Chen Changyong; Feng Mang; Gao Kelin

    2006-03-15

    We propose a scheme to generate superposition and entanglement between mesoscopic squeezed vacuum states by considering the two-photon interaction of N two-level atoms in a cavity with a high quality factor, assisted by a strong driving field. By virtue of specific choices of the cavity detuning, a number of multiparty entangled states can be prepared, including entanglement between the atomic and the squeezed vacuum cavity states and between the squeezed vacuum states and the coherent states of the cavities. We also present how to prepare entangled states and 'Schrödinger cat' states of the squeezed vacuum states of the cavity modes. The possible extension and application of our scheme are discussed. Our scheme is within reach of current cavity QED techniques.

  2. Long-distance measurement-device-independent quantum key distribution with coherent-state superpositions.

    PubMed

    Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B

    2014-09-15

    Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to be securely applicable against various hacking attacks in practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. Here, in this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound on the yield and the upper bound on the bit error rate. We exploit standard statistical analysis and the Chernoff bound to perform the parameter estimation. The Chernoff bound can provide good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case.
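
    For orientation, the flavor of such finite-data estimation can be sketched with the generic multiplicative Chernoff bound (an approximate, textbook-style inversion; these are not the Letter's two tight formulas):

      import math

      def chernoff_interval(x: float, eps: float):
          """Approximate bounds on the mean mu from an observed count x of
          independent Bernoulli events, each side failing with probability eps."""
          beta = math.log(1.0 / eps)
          mu_low = max(0.0, x - math.sqrt(2.0 * beta * x))   # from the lower tail
          mu_up = x + math.sqrt(3.0 * beta * x) + beta       # from the upper tail
          return mu_low, mu_up

      print(chernoff_interval(x=10_000, eps=1e-10))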

  3. Accelerating Convolutional Sparse Coding for Curvilinear Structures Segmentation by Refining SCIRD-TS Filter Banks.

    PubMed

    Annunziata, Roberto; Trucco, Emanuele

    2016-11-01

    Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation, as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation when used as input to a random forest classifier.

  4. The venom apparatus in stenogastrine wasps: subcellular features of the convoluted gland.

    PubMed

    Petrocelli, Iacopo; Turillazzi, Stefano; Delfino, Giovanni

    2014-09-01

    In the wasp venom apparatus, the convoluted gland is the tract of the thin secretory unit, i.e. filament, contained in the muscular reservoir. Previous transmission electron microscope investigation on Stenogastrinae disclosed that the free filaments consist of distal and proximal tracts, from/to the venom reservoir, characterized by class 3 and 2 gland patterns, respectively. This study aims to extend the ultrastructural analysis to the convoluted tract, in order to provide a thorough, subcellular representation of the venom gland in these Asian wasps. Our findings showed that the convoluted gland is a continuation of the proximal tract, with secretory cells provided with a peculiar apical invagination, the extracellular cavity, collecting their products. This compartment holds a simple end-apparatus lined by large and ramified microvilli that contribute to the processing of the secretory product. A comparison between previous and present findings reveals a noticeable regionalization of the stenogastrine venom filaments and suggests that the secretory product acquires its ultimate composition in the convoluted tract.

  5. A multilevel local discrete convolution method for the numerical solution for Maxwell's Equations

    NASA Astrophysics Data System (ADS)

    Lo, Boris; Colella, Phillip

    2016-10-01

    We present a new multilevel local discrete convolution method for solving Maxwell's equations in three dimensions. We obtain an explicit real-space representation for the propagator of an auxiliary system of differential equations with initial value constraints that is equivalent to Maxwell's equations. The propagator preserves finite speed of propagation and source locality. Because the propagator involves convolution against a singular distribution, we regularize via convolution with smoothing kernels (B-splines) prior to sampling. We have shown that the resulting discrete convolutional propagator can be constructed to attain an arbitrarily high order of accuracy by using higher-order regularizing kernels and finite difference stencils. The discretized propagator is compactly supported and can be applied using Hockney's method (1970) and parallelized using domain decomposition, leading to a method that is computationally efficient. The algorithm is extended to work on a locally refined, fixed hierarchy of rectangular grids. This research is supported by the Office of Advanced Scientific Computing Research of the US Department of Energy under Contract Number DE-AC02-05CH11231.
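
    Applying such a compactly supported discrete propagator with Hockney's method amounts to a zero-padded FFT convolution; a 1-D scalar sketch (the hat-function kernel below stands in for the regularized Maxwell propagator):

      import numpy as np

      def hockney_convolve(source: np.ndarray, kernel: np.ndarray) -> np.ndarray:
          """Free-space (aperiodic) convolution via zero-padding to twice the grid."""
          n = len(source)
          m = 2 * n
          out = np.fft.irfft(np.fft.rfft(source, m) * np.fft.rfft(kernel, m), m)
          return out[:n]        # crop; an index shift from kernel centering remains

      x = np.linspace(-1.0, 1.0, 256)
      source = np.exp(-(x / 0.1) ** 2)
      kernel = np.maximum(0.0, 1.0 - np.abs(x) / 0.2)   # B-spline-like hat kernel
      field = hockney_convolve(source, kernel)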

  6. Digital Tomosynthesis System Geometry Analysis Using Convolution-Based Blur-and-Add (BAA) Model.

    PubMed

    Wu, Meng; Yoon, Sungwon; Solomon, Edward G; Star-Lack, Josh; Pelc, Norbert; Fahrig, Rebecca

    2016-01-01

    Digital tomosynthesis is a three-dimensional imaging technique with a lower radiation dose than computed tomography (CT). Due to the missing data in tomosynthesis systems, out-of-plane structures in the depth direction cannot be completely removed by the reconstruction algorithms. In this work, we analyzed the impulse responses of common tomosynthesis systems on a plane-to-plane basis and proposed a fast and accurate convolution-based blur-and-add (BAA) model to simulate the backprojected images. In addition, the analysis formalism describing the impulse response of out-of-plane structures can be generalized to both rotating and parallel gantries. We implemented a ray tracing forward projection and backprojection (ray-based model) algorithm and the convolution-based BAA model to simulate the shift-and-add (backproject) tomosynthesis reconstructions. The convolution-based BAA model with proper geometry distortion correction provides reasonably accurate estimates of the tomosynthesis reconstruction. A numerical comparison indicates that the simulated images using the two models differ by less than 6% in terms of the root-mean-squared error. This convolution-based BAA model can be used in efficient system geometry analysis, reconstruction algorithm design, out-of-plane artifacts suppression, and CT-tomosynthesis registration.
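
    The essence of a blur-and-add simulation is a per-plane convolution followed by a sum; a minimal sketch, assuming hypothetical Gaussian plane-to-plane blurs whose width grows with plane separation (the paper derives the actual impulse responses from the system geometry):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def baa_reconstruct_plane(volume: np.ndarray, z: int, blur_per_plane: float):
          """volume: (nz, ny, nx) object; simulated shift-and-add slice at plane z."""
          recon = np.zeros(volume.shape[1:], dtype=float)
          for zp in range(volume.shape[0]):
              sigma = blur_per_plane * abs(z - zp)   # blur grows with |z - z'|
              recon += gaussian_filter(volume[zp].astype(float), sigma=sigma)
          return recon / volume.shape[0]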

  7. The VLSI design of error-trellis syndrome decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.

    1985-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be readily realized on a single chip with metal-nitride-oxide-semiconductor technology.

  8. The VLSI design of an error-trellis syndrome decoder for certain convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.

    1986-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be readily realized on a single chip with metal-nitride-oxide-semiconductor technology.

  9. Linear diffusion-wave channel routing using a discrete Hayami convolution method

    NASA Astrophysics Data System (ADS)

    Wang, Li; Wu, Joan Q.; Elliot, William J.; Fiedler, Fritz R.; Lapin, Sergey

    2014-02-01

    The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computational demand of solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces the amount of computational work; however, it increases the possibility of mass balance errors. In this study, we analyzed the characteristics of the kernel function for the Hayami convolution solution to linear diffusion-wave channel routing with distributed lateral inflow. We propose two ways of selecting the discrete kernel function values: using the exact point values or using the center-averaged values. Through a hypothetical example and applications to Asotin Creek, WA and the Clearwater River, ID, we showed that when the point kernel function values are used in the discrete Hayami convolution (DHC) solution, the mass balance error of channel routing depends on the number of time steps on the rising limb of the Hayami kernel function. The mass balance error is negligible when there are more than 1.8 time steps on the rising limb of the kernel function; the fewer the time steps on the rising limb, the greater the risk of high mass balance errors. When the average kernel function values are used for the DHC solution, however, the mass balance is always maintained, since the integration of the discrete kernel function is always unity.
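
    A minimal sketch of the two discretizations, assuming the usual form of the Hayami kernel k(t) = x / (2 sqrt(pi D t^3)) exp(-(x - c t)^2 / (4 D t)) with celerity c and diffusivity D (illustrative parameter values; the kernel integrates to one, so sum(k) * dt is a direct mass-balance check):

      import numpy as np

      def hayami_kernel(t, x, c, D):
          t = np.asarray(t, dtype=float)
          k = np.zeros_like(t)
          p = t > 0
          k[p] = x / (2.0 * np.sqrt(np.pi * D * t[p] ** 3)) \
                 * np.exp(-(x - c * t[p]) ** 2 / (4.0 * D * t[p]))
          return k

      dt, n, xr, c, D = 600.0, 400, 5000.0, 1.0, 500.0
      edges = np.arange(n + 1) * dt
      point_k = hayami_kernel(edges[1:], xr, c, D)          # point kernel values
      avg_k = np.array([hayami_kernel(np.linspace(a, b, 21), xr, c, D).mean()
                        for a, b in zip(edges[:-1], edges[1:])])  # interval averages
      print(point_k.sum() * dt, avg_k.sum() * dt)           # both should be near 1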

  10. Convolutional FEC design considerations for data transmission over PSK satellite channels

    NASA Astrophysics Data System (ADS)

    Garrison, G. J.; Wong, V. C.

    Simulation results are provided for rate R = 1/2 convolutional error-correcting codes suited to data transmission over BPSK, Gray-coded QPSK, and OQPSK channels. The burst generation mechanism resulting from differential encoding/decoding is analyzed in terms of the impairment to code performance, and offsetting internal/external interleaving techniques are described.

  11. Quantum control of electronic fluxes during adiabatic attosecond charge migration in degenerate superposition states of benzene

    NASA Astrophysics Data System (ADS)

    Jia, Dongming; Manz, Jörn; Paulus, Beate; Pohl, Vincent; Tremblay, Jean Christophe; Yang, Yonggang

    2017-01-01

    We design four linearly x- and y-polarized as well as circularly right (+) and left (-) polarized, resonant π/2 laser pulses that prepare the model benzene molecule in four different degenerate superposition states. These consist of equal (0.5) populations of the electronic ground state S0 (1A1g) plus one of four degenerate excited states, all of them accessible by dipole-allowed transitions. Specifically, for the molecule aligned in the xy-plane, these excited states include different complex-valued linear combinations of the 1E1u,x and 1E1u,y degenerate states. As a consequence, the laser pulses induce four different types of periodic adiabatic attosecond (as) charge migration (AACM) in benzene, all with the same period, 504 as, but with four different types of angular fluxes. One of the characteristic differences among these fluxes is the two angles of zero flux, which appear as the instantaneous angular positions of the "source" and "sink" of two equivalent, or nearly equivalent, branches of the fluxes, which flow in pincer-type patterns from one molecular site (the "source") to the opposite one (the "sink"). These angles of zero flux are either fixed at the positions of two opposite carbon nuclei in the yz-symmetry plane, or at the centers of two opposite carbon-carbon bonds in the xz-symmetry plane, or they rotate in the angular forward (+) or backward (-) direction, respectively. In summary, our quantum model simulations demonstrate quantum control of the electronic fluxes during AACM in degenerate superposition states, in the attosecond time domain, with the laser polarization as the key knob for control.

  12. On the twisted convolution product and the Weyl transformation of tempered distributions

    NASA Astrophysics Data System (ADS)

    Maillard, J. M.

    It is well known that the Weyl transformation in a phase space R^{2l} transforms the elements of L^1(R^{2l}) into trace-class operators and the elements of L^2(R^{2l}) into the Hilbert-Schmidt operators of the Hilbert space L^2(R^l); this fact leads to a general method of quantization suggested by E. Wigner and J.E. Moyal and developed by M. Flato, A. Lichnerowicz, C. Fronsdal, D. Sternheimer and F. Bayen for an arbitrary symplectic manifold, known under the name of the star-product method. In this context, it is important to study the Weyl transforms of the tempered distributions on the phase space and those of the star-exponentials which give the spectrum in this process of quantization. We analyze here the relations between the star-product, the twisted convolution product and the Weyl transformation of tempered distributions. We introduce symplectic differential operators which permit us to study the structure of the space O'_λ, λ ≠ 0 (similar to the space O'_C) of the left (twisted) convolution operators of L^1(R^{2l}), which permits the definition of the twisted convolution product in the space L^1(R^{2l}), and the structure of the admissible symbols for the Weyl transformation (i.e., the domain of the Weyl transformation). We prove that the bounded operators of L^2(R^l) are exactly the Weyl transforms of the bounded (twisted) convolution operators of L^2(R^{2l}). We give an expression of the integral formula of the star product in terms of twisted convolution products which is valid in the most general case. The unitary representations of the Heisenberg group play an important role here.

  13. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    SciTech Connect

    Huang, J; Followill, D; Howell, R; Liu, X; Mirkovic, D; Stingo, F; Kry, S

    2015-06-15

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus

  14. Three-dimensional (3-D) cranial base superposition for the longitudinal evaluation of growth and treatment effects

    PubMed Central

    Cevidanes, Lucia H.S.; Styner, Martin; Proffit, William R.; Ngom, Papa Ibrahima (translator)

    2010-01-01

    ABSTRACT – To evaluate changes related to growth or treatment, successive cephalograms must be superimposed on a stable structure. In two-dimensional (2-D) cephalometrics, the cranial base is often used for superimposition because it undergoes only minor changes after brain development. However, on lateral and frontal cephalograms, cranial-base landmarks are not very reliable. In this article, we present a new three-dimensional (3-D) superposition method based on fully automated voxel-intensity registration over the surface of the cranial base. The software package used allows quantitative assessment of the changes occurring over time by computing the Euclidean distance between three-dimensional model surfaces. It also allows visual assessment of the location and magnitude of changes in the jaws by means of a graphical overlay. The changes are visualized by comparison against color look-up tables. One can thus carry out a detailed study of adaptation patterns in patients whose growth and/or treatment has produced clinically significant skeletal changes. PMID:19954732

  15. Gamma Knife radiosurgery with CT image-based dose calculation.

    PubMed

    Xu, Andy Yuanguang; Bhatnagar, Jagdish; Bednarz, Greg; Niranjan, Ajay; Kondziolka, Douglas; Flickinger, John; Lunsford, L Dade; Huq, M Saiful

    2015-11-01

    The Leksell GammaPlan software version 10 introduces a CT image-based segmentation tool for automatic skull definition and a convolution dose calculation algorithm for tissue inhomogeneity correction. The purpose of this work was to evaluate the impact of these new approaches on routine clinical Gamma Knife treatment planning. Sixty-five patients who underwent CT image-guided Gamma Knife radiosurgeries at the University of Pittsburgh Medical Center in recent years were retrospectively investigated. The diagnoses for these cases include trigeminal neuralgia, meningioma, acoustic neuroma, AVM, glioma, and benign and metastatic brain tumors. Dose calculations were performed for each patient with the same dose prescriptions and the same shot arrangements using three different approaches: 1) TMR 10 dose calculation with imaging skull definition; 2) convolution dose calculation with imaging skull definition; 3) TMR 10 dose calculation with conventional measurement-based skull definition. For each treatment matrix, the total treatment time, the target coverage index, the selectivity index, the gradient index, and a set of dose statistics parameters were compared between the three calculations. The dose statistics parameters investigated include the prescription isodose volume, the 12 Gy isodose volume, the minimum, maximum and mean doses on the treatment targets, and the critical structures under consideration. The differences between the convolution and the TMR 10 dose calculations for the 104 treatment matrices were found to vary with the patient anatomy, the location of the treatment shots, and the tissue inhomogeneities around the treatment target. An average difference of 8.4% was observed for the total treatment times between the convolution and the TMR algorithms. The maximum differences in the treatment times, the prescription isodose volumes, the 12 Gy isodose volumes, the target coverage indices, the selectivity indices, and the gradient indices from the convolution

  17. Superposition of scalar and residual dipolar couplings: analytical transfer functions for three spins 1/2 under cylindrical mixing conditions.

    PubMed

    Luy, B; Glaser, S J

    2001-01-01

    The superposition of scalar and residual dipolar couplings gives rise to so-called cylindrical mixing Hamiltonians in dipolar coupling spectroscopy. General analytical polarization and coherence transfer functions are presented for three cylindrically coupled spins 1/2 under energy-matched conditions. In addition, the transfer efficiency is analyzed as a function of the relative coupling constants for characteristic special cases.

  18. The electron affinity of gallium nitride (GaN) and digallium nitride (GaNGa): the importance of the basis set superposition error in strongly bound systems.

    PubMed

    Tzeli, Demeter; Tsekouras, Athanassios A

    2008-04-14

    The electron affinity of GaN and Ga2N, as well as the geometries and the dissociation energies of the ground states of the gallium nitrides GaN, GaN(-), Ga2N, and Ga2N(-), were systematically studied by employing the coupled cluster method, RCCSD(T), in conjunction with a series of basis sets, (aug-)cc-pVxZ(-PP), x = D, T, Q, and 5, and cc-pwCVxZ(-PP), x = D, T, and Q. The calculated dissociation energy and electron affinity of GaN are 2.12 and 1.84 eV, respectively, and those of Ga2N are 6.31 and 2.53 eV. The last value is in excellent agreement with a recent experimental value for the electron affinity of Ga2N of 2.506 ± 0.008 eV. For such quality in the results to be achieved, the Ga 3d electrons had to be included in the correlation space. Moreover, when a basis set is used that was not developed for the number of electrons correlated in the calculation, the calculated quantities need to be corrected for the basis set superposition error.
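
    For reference, the standard way such a correction is made is the Boys-Bernardi counterpoise scheme (a generic textbook form, not notation taken from this paper):

      % Counterpoise-corrected interaction energy of a dimer AB; the superscript
      % "ab" means the quantity is computed in the full dimer basis set.
      \[
        E_{\mathrm{int}}^{\mathrm{CP}} \;=\; E_{AB}^{ab} \;-\; E_{A}^{ab} \;-\; E_{B}^{ab}
      \]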

  19. Limitations on squeezing and formation of the superposition of two macroscopically distinguishable states at fundamental frequency in the process of second harmonic generation

    NASA Technical Reports Server (NTRS)

    Nikitin, S. P.; Masalov, A. V.

    1992-01-01

    The results of numerical simulations of quantum state evolution in the process of second harmonic generation (SHG) are discussed. It is shown that at a particular moment of time the initially coherent state in the fundamental mode turns into a superposition of two macroscopically distinguishable states. The question of whether this superposition exhibits quantum interference is analyzed.

  20. Evaluation of axial and lateral modal superposition for general 3D drilling riser analysis

    SciTech Connect

    Burgdorf, O. Jr.

    1996-12-31

    A 3D, partially non-linear, transient, fully-coupled riser analysis method is evaluated which uses modal superposition of independently extracted lateral and axial modes. Many lateral modes are combined with a smaller number of axial modes to minimize the adverse time-step requirements typically induced by axial flexibility in direct time integration of beam-column elements. The reduced computer time enables much faster parametric analysis of hang-off, as well as of the other connected drilling environments normally examined. Axial-lateral coupling is explicitly enforced, and resonance fidelity is preserved when excitation is near or coincident with axial natural periods. Reasonable correlation is shown with envelopes of test-case dynamic responses published by API. Applicability of the method is limited by the linearity assumptions indigenous to modal representation of dynamic deflections relative to a mean deflected shape. Sensitivities of incipient buckling during hang-off to axial damping and stiffness are described for an example 6,000 ft deep composite drilling riser system.

  1. Methodological developments and strategies for a fast flexible superposition of drug-size molecules.

    PubMed

    Klebe, G; Mietzner, T; Weber, F

    1999-01-01

    An alternative to experimental high-throughput screening is the virtual screening of compound libraries on the computer. In the absence of a detailed structure of the receptor protein, candidate molecules are compared with a known reference by mutually superimposing their skeletons and scoring their similarity. Since molecular shape depends strongly on the adopted conformation, an efficient conformational screening is performed using a knowledge-based approach. A comprehensive torsion library has been compiled from crystal data stored in the Cambridge Structural Database. For molecular comparison, a strategy is followed that considers shape-associated physicochemical properties in space, such as steric occupancy, electrostatics, lipophilicity and potential hydrogen bonding. Molecular shape is approximated by a set of Gaussian functions not necessarily located at the atomic positions. The superposition is performed in two steps: first by a global alignment search operating on multiple rigid conformations, and then by conformationally relaxing the best-scored hits of the global search. A normalized similarity scoring is used to allow for a comparison of molecules with rather different shapes and sizes. The approach has been implemented on a cluster of parallel processors. As a case study, the search for ligands binding to the dopamine receptor is given.
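
    The normalized Gaussian-overlap scoring admits a compact sketch (a generic shape-overlap similarity under the stated Gaussian approximation; the widths and weights are hypothetical, and this is not the authors' exact scoring function):

      import numpy as np

      def gaussian_overlap(c1, a1, p1, c2, a2, p2):
          """Total overlap of two sets of spherical Gaussians (centres c, widths a,
          property weights p), using the analytic Gaussian product integral."""
          d2 = np.sum((c1[:, None, :] - c2[None, :, :]) ** 2, axis=2)
          s = a1[:, None] + a2[None, :]
          return np.sum(p1[:, None] * p2[None, :] * (np.pi / s) ** 1.5
                        * np.exp(-(a1[:, None] * a2[None, :] / s) * d2))

      def normalized_similarity(m1, m2):
          v12 = gaussian_overlap(*m1, *m2)
          return v12 / np.sqrt(gaussian_overlap(*m1, *m1) * gaussian_overlap(*m2, *m2))

      mol = (np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]]),   # centres
             np.array([0.8, 0.8]), np.array([1.0, 1.0]))     # widths, weights
      print(normalized_similarity(mol, mol))                 # identical shapes -> 1.0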

  3. A new optical image cryptosystem based on two-beam coherent superposition and unequal modulus decomposition

    NASA Astrophysics Data System (ADS)

    Chen, Linfei; Gao, Xiong; Chen, Xudong; He, Bingyu; Liu, Jingyu; Li, Dan

    2016-04-01

    In this paper, a new optical image cryptosystem is proposed based on two-beam coherent superposition and unequal modulus decomposition. Different from equal modulus decomposition or unit vector decomposition, the proposed method applies common vector decomposition to accomplish the encryption process. In the proposed method, the original image is first Fourier transformed to obtain its complex distribution in the spectrum domain. The complex distribution is decomposed into two vector components with unequal amplitude and phase by the common vector decomposition method. Subsequently, the two components are modulated by two random phases and transformed from the spectrum domain to the spatial domain; the amplitude parts are extracted as encryption results and the phase parts are extracted as private keys. The advantages of the proposed cryptosystem are that the four different phase and amplitude distributions created by the common vector decomposition method strengthen its security, and that it fully solves the silhouette problem. Simulation results are presented to show the feasibility and the security of the proposed cryptosystem.

  4. Quantum superposition of a single microwave photon in two different 'colour' states

    NASA Astrophysics Data System (ADS)

    Zakka-Bajjani, Eva; Nguyen, François; Lee, Minhyea; Vale, Leila R.; Simmonds, Raymond W.; Aumentado, José

    2011-08-01

    Fully controlled coherent coupling of arbitrary harmonic oscillators is an important tool for processing quantum information. Coupling between quantum harmonic oscillators has previously been demonstrated in several physical systems using a two-level system as a mediating element. Direct interaction at the quantum level has only recently been realized by means of resonant coupling between trapped ions. Here we implement a tunable direct coupling between the microwave harmonics of a superconducting resonator by means of parametric frequency conversion. We accomplish this by coupling the mode currents of two harmonics through a superconducting quantum interference device (SQUID) and modulating its flux at the difference (~7GHz) of the harmonic frequencies. We deterministically prepare a single-photon Fock state and coherently manipulate it between multiple modes, effectively controlling it in a superposition of two different 'colours'. This parametric interaction can be described as a beamsplitter-like operation that couples different frequency modes. As such, it could be used to implement linear optical quantum computing protocols on-chip.

  5. Large-field-of-view wide-spectrum artificial reflecting superposition compound eyes.

    PubMed

    Huang, Chi-Chieh; Wu, Xiudong; Liu, Hewei; Aldalali, Bader; Rogers, John A; Jiang, Hongrui

    2014-08-13

    In nature, reflecting superposition compound eyes (RSCEs) found in shrimps, lobsters and some other decapods are extraordinary imaging systems with numerous optical features such as minimum chromatic aberration, wide-angle field of view (FOV), high sensitivity to light and superb acuity to motion. Here, we present life-sized, large-FOV, wide-spectrum artificial RSCEs as optical imaging devices inspired by the unique designs of their natural counterparts. Our devices can form real, clear images based on reflection rather than refraction, hence avoiding chromatic aberration due to dispersion by the optical materials. Compared to imaging at visible wavelengths using conventional refractive lenses of comparable size, our artificial RSCEs demonstrate minimum chromatic aberration, exceptional FOV up to 165° without distortion, modest aberrations and comparable imaging quality without any post-image processing. Together with an augmenting cruciform pattern surrounding each focused image, our large-FOV, wide-spectrum artificial RSCEs possess enhanced motion-tracking capability ideal for diverse applications in military, security, medical imaging and astronomy.

  6. Identification of Distant Drug Off-Targets by Direct Superposition of Binding Pocket Surfaces

    PubMed Central

    Schumann, Marcel; Armen, Roger S.

    2013-01-01

    Correctly predicting off-targets for a given molecular structure, which would have the ability to bind a large range of ligands, is both particularly difficult and important if they share no significant sequence or fold similarity with the respective molecular target (“distant off-targets”). A novel approach for identification of off-targets by direct superposition of protein binding pocket surfaces is presented and applied to a set of well-studied and highly relevant drug targets, including representative kinases and nuclear hormone receptors. The entire Protein Data Bank is searched for similar binding pockets and convincing distant off-target candidates were identified that share no significant sequence or fold similarity with the respective target structure. These putative target off-target pairs are further supported by the existence of compounds that bind strongly to both with high topological similarity, and in some cases, literature examples of individual compounds that bind to both. Also, our results clearly show that it is possible for binding pockets to exhibit a striking surface similarity, while the respective off-target shares neither significant sequence nor significant fold similarity with the respective molecular target (“distant off-target”). PMID:24391782

  7. Stabilizing the phase of superpositions of cat states in a cavity using real-time feedback

    NASA Astrophysics Data System (ADS)

    Ofek, N.; Petrenko, A.; Heeres, R.; Reinhold, P.; Liu, Y.; Leghtas, Z.; Vlastakis, B.; Frunzio, L.; Jiang, Liang; Mirrahimi, M.; Devoret, M. H.; Schoelkopf, R. J.

    In a superconducting cQED architecture, a hardware-efficient quantum error correction (QEC) scheme exists, called the cat code, which maps a qubit onto superpositions of cat states in a superconducting resonator, by mapping the occurrence of errors, or single-photon jumps, onto unitary rotations of the encoded state. By tracking the parity of the encoded state, we can count the number of photon jumps and apply a correcting unitary transformation. However, the situation is complicated by the fact that photon jumps do not commute with the deterministic anharmonic time evolution of a resonator state, or Kerr effect, inherited by the resonator from its coupling to a Josephson junction. As predicted, a field in the resonator inherits an overall phase θ = KT in IQ space each time a photon jumps, proportional to the Kerr constant K and the time T at which the jump occurs. Here I will present how we can track the errors in real time, take them into account together with the times at which they occur, and thereby stabilize the qubit information.

  8. Deterministic preparation of superpositions of vacuum plus one photon by adaptive homodyne detection: experimental considerations

    NASA Astrophysics Data System (ADS)

    Dalla Pozza, Nicola; Wiseman, Howard M.; Huntington, Elanor H.

    2015-01-01

    The preparation stage of optical qubits is an essential task in all experimental setups employed for the test and demonstration of quantum optics principles. We consider a deterministic protocol for the preparation of qubits as a superposition of the vacuum and one-photon number states, which has the advantage of reducing the amount of resources required via phase-sensitive measurements using a local oscillator ('dyne detection'). We investigate the performance of the protocol using different phase measurement schemes: homodyne, heterodyne, and adaptive dyne detection (involving a feedback loop). First, we define a suitable figure of merit for the prepared state and obtain an analytical expression for it in terms of the phase measurement considered. Further, we study limitations that the phase measurement can exhibit, such as delay or limited resources in the feedback strategy. Finally, we evaluate the figure of merit of the protocol for different mode shapes readily available in an experimental setup. We show that even in the presence of such limitations, simple feedback algorithms can perform surprisingly well, outperforming the protocols in which simple homodyne or heterodyne schemes are employed.

  9. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Khare, Avinash; Saxena, Avadh

    2014-03-01

    For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as their difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λφ⁴, the discrete MKdV, as well as several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn²(x, m), it also admits solutions in terms of dn²(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m) dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations.
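
    Written out in standard notation, the two superposed families referred to above are

      \[
        u_{\pm}(x) \;=\; \mathrm{dn}(x,m) \,\pm\, \sqrt{m}\,\mathrm{cn}(x,m),
        \qquad
        v_{\pm}(x) \;=\; \mathrm{dn}^{2}(x,m) \,\pm\, \sqrt{m}\,\mathrm{cn}(x,m)\,\mathrm{dn}(x,m),
      \]

    with amplitudes and velocities still to be matched to each particular equation.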

  10. Multi-dimensional color image storage and retrieval for a normal arbitrary quantum superposition state

    NASA Astrophysics Data System (ADS)

    Li, Hai-Sheng; Zhu, Qingxin; Zhou, Ri-Gui; Song, Lan; Yang, Xing-jiang

    2014-04-01

    Multi-dimensional color image processing has two difficulties. One is that a large number of bits are needed to store multi-dimensional color images; for example, a three-dimensional color image of 1024 × 1024 × 1024 pixels needs 24 × 2^30 bits. The other is that the efficiency or accuracy of image segmentation is not high enough for some images to be used in content-based image search. In order to solve the above problems, this paper proposes a new representation for multi-dimensional color images, called an (n + 1)-qubit normal arbitrary quantum superposition state (NAQSS), where n qubits represent the colors and coordinates of pixels (e.g., a three-dimensional color image of 1024 × 1024 × 1024 pixels is represented using only 30 qubits), and the remaining qubit represents image segmentation information to improve the accuracy of image segmentation. We then design a general quantum circuit to create the NAQSS state in order to store a multi-dimensional color image in a quantum system, and propose a quantum circuit simplification algorithm to reduce the number of quantum gates in the general circuit. Finally, different strategies to retrieve a whole image or a target sub-image from a quantum system are studied, including Monte Carlo sampling and an improved Grover's algorithm which can search out a coordinate of a target sub-image running in O(√(N/M)) time, where N and M are the numbers of pixels of the image and the target sub-image, respectively.

  11. Modified superposition: A simple time series approach to closed-loop manual controller identification

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.; Leban, F.; Mashiko, S.

    1986-01-01

    Single-channel pilot manual control output in closed-loop tracking tasks is modeled in terms of linear discrete transfer functions which are parsimonious and guaranteed stable. The transfer functions are found by applying a modified superposition time-series generation technique. A Levinson-Durbin algorithm is used to determine the filter which prewhitens the input, and a projective (least-squares) fit of pulse response estimates is used to guarantee identified model stability. Results from two case studies are compared to previous findings, where the source data are relatively short records, approximately 25 seconds long. Time-delay effects and pilot seasonalities are discussed and analyzed. It is concluded that single-channel time-series controller modeling is feasible on short records, and that it is important for the analyst to determine a criterion for best time-domain fit which allows association of model parameter values, such as pure time delay, with actual physical and physiological constraints. The purpose of the modeling is thus paramount.
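    The Levinson-Durbin step mentioned above is a standard recursion and is easy to sketch. The following illustrative implementation (variable names and usage are choices of this sketch, not of the paper) solves the Toeplitz normal equations for an autoregressive prewhitening filter; applying the resulting coefficients as an FIR filter, e.g. scipy.signal.lfilter(a, [1.0], x), would prewhiten the input record.

    ```python
    # Levinson-Durbin recursion: fit AR coefficients from autocorrelation.
    import numpy as np

    def levinson_durbin(r, order):
        """Solve the Toeplitz normal equations.
        r     : autocorrelation sequence r[0..order]
        Returns (a, err): a[0] = 1, a[1..order] are AR coefficients, and
        err is the final prediction-error variance."""
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for k in range(1, order + 1):
            # Reflection coefficient from the current prediction error.
            acc = r[k] + np.dot(a[1:k], r[k - 1:0:-1])
            kappa = -acc / err
            # Levinson step: a[i] += kappa * a[k - i] for i = 1..k.
            a[1:k + 1] += kappa * a[k - 1::-1][:k]
            err *= (1.0 - kappa**2)
        return a, err
    ```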

  12. Securing multiple color information by optical coherent superposition based spiral phase encoding

    NASA Astrophysics Data System (ADS)

    Abuturab, Muhammad Rafiq

    2014-05-01

    A new optical multiple-color-image cryptosystem using optical coherent superposition based spiral phase encoding is proposed, which can achieve nonlinear multiple-image encryption of images of the same size. The multiplexed coding scheme is lensless and not time-consuming, and the decoding procedure is free from cross-talk and noise effects in real time. In this contribution, a color image is decomposed into three independent channels, i.e., red, green and blue. Each channel is then divided into an arbitrarily selected spiral phase mask (SPM) and a spiral key mask (SKM). The selected SPM is introduced as an encrypted image for multiple color images. The SKMs are employed as different decryption keys for different images. That is, only the construction parameters of the SPM (such as the order, the wavelength, the focal length, and the radius) need to be sent independently to multiple users, not the key itself, which enhances robustness against existing attacks compared with double random phase encoding techniques. Moreover, the maximum amount of data can be securely handled with a single parameter variation. The encryption process can be performed digitally, while the decryption process is very simple and can be implemented using an optoelectronic architecture. A set of numerical simulation results confirms the feasibility and effectiveness of the proposed cryptosystem for multiple-color-image encryption.

  13. Quantum Delayed-Choice Experiment with a Beam Splitter in a Quantum Superposition

    NASA Astrophysics Data System (ADS)

    Zheng, Shi-Biao; Zhong, You-Peng; Xu, Kai; Wang, Qi-Jue; Wang, H.; Shen, Li-Tuo; Yang, Chui-Ping; Martinis, John M.; Cleland, A. N.; Han, Si-Yuan

    2015-12-01

    A quantum system can behave as a wave or as a particle, depending on the experimental arrangement. When, for example, measuring a photon using a Mach-Zehnder interferometer, the photon acts as a wave if the second beam splitter is inserted, but as a particle if this beam splitter is omitted. The decision of whether or not to insert this beam splitter can be made after the photon has entered the interferometer, as in Wheeler's famous delayed-choice thought experiment. In recent quantum versions of this experiment, this decision is controlled by a quantum ancilla, while the beam splitter is itself still a classical object. Here, we propose and realize a variant of the quantum delayed-choice experiment. We configure a superconducting quantum circuit as a Ramsey interferometer, where the element that acts as the first beam splitter can be put in a quantum superposition of its active and inactive states, as verified by the negative values of its Wigner function. We show that this enables the wave and particle aspects of the system to be observed with a single setup, without involving an ancilla that is not itself a part of the interferometer. We also study the transition of this quantum beam splitter from a quantum to a classical object due to decoherence, as observed by monitoring the interferometer output.

  14. Fast 3D molecular superposition and similarity search in databases of flexible molecules

    NASA Astrophysics Data System (ADS)

    Krämer, Andreas; Horn, Hans W.; Rice, Julia E.

    2003-01-01

    We present a new method (fFLASH) for the virtual screening of compound databases that is based on explicit three-dimensional molecular superpositions. fFLASH takes the torsional flexibility of the database molecules fully into account, and can deal with an arbitrary number of conformation-dependent molecular features. The method utilizes a fragmentation-reassembly approach which allows for an efficient sampling of the conformational space. A fast clique-based pattern matching algorithm generates alignments of pairs of adjacent molecular fragments on the rigid query molecule that are subsequently reassembled to complete database molecules. Using conventional molecular features (hydrogen bond donors and acceptors, charges, and hydrophobic groups) we show that fFLASH is able to rapidly produce accurate alignments of medium-sized drug-like molecules. Experiments with a test database containing a diverse set of 1780 drug-like molecules (including all conformers) have shown that average query processing times of the order of 0.1 seconds per molecule can be achieved on a PC.

  15. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    PubMed

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding

    2015-08-01

    This paper studies brain tumor grading using multiphase MRI images and compares the results of various configurations of deep learning structures and baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between the multiphase MRIs. Compared with other studies, which involve additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by sensitivity and specificity. The results show a maximum improvement of 18% in grading performance for Convolutional Neural Networks over baseline Neural Networks, based on sensitivity and specificity. We also visualize the kernels trained in different layers and display some of the self-learned features obtained from the Convolutional Neural Networks.

  16. Pulmonary Nodule Classification with Deep Convolutional Neural Networks on Computed Tomography Images.

    PubMed

    Li, Wei; Cao, Peng; Zhao, Dazhe; Wang, Junbo

    2016-01-01

    Computer aided detection (CAD) systems can assist radiologists by offering a second opinion on early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural network method for nodule classification, which has the advantages of automatically learned representations and strong generalization ability. A network structure specific to nodule images is proposed to recognize three types of nodules, namely solid, semisolid, and ground glass opacity (GGO). The deep convolutional neural networks are trained on 62,492 region-of-interest (ROI) samples, including 40,772 nodules and 21,720 non-nodules, from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy, and show that it consistently outperforms the competing methods.

  17. Video-based convolutional neural networks for activity recognition from robot-centric videos

    NASA Astrophysics Data System (ADS)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.

  18. Pulmonary Nodule Classification with Deep Convolutional Neural Networks on Computed Tomography Images

    PubMed Central

    Li, Wei; Zhao, Dazhe; Wang, Junbo

    2016-01-01

    Computer aided detection (CAD) systems can assist radiologists by offering a second opinion on early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural network method for nodule classification, which has the advantages of automatically learned representations and strong generalization ability. A network structure specific to nodule images is proposed to recognize three types of nodules, namely solid, semisolid, and ground glass opacity (GGO). The deep convolutional neural networks are trained on 62,492 region-of-interest (ROI) samples, including 40,772 nodules and 21,720 non-nodules, from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy, and show that it consistently outperforms the competing methods. PMID:28070212

  19. Bolus calculators.

    PubMed

    Schmidt, Signe; Nørgaard, Kirsten

    2014-09-01

    Matching meal insulin to carbohydrate intake, blood glucose, and activity level is recommended in type 1 diabetes management. Calculating an appropriate insulin bolus size several times per day is, however, challenging and resource demanding. Accordingly, there is a need for bolus calculators to support patients in insulin treatment decisions. Currently, bolus calculators are available integrated in insulin pumps, as stand-alone devices and in the form of software applications that can be downloaded to, for example, smartphones. Functionality and complexity of bolus calculators vary greatly, and the few handfuls of published bolus calculator studies are heterogeneous with regard to study design, intervention, duration, and outcome measures. Furthermore, many factors unrelated to the specific device affect outcomes from bolus calculator use and therefore bolus calculator study comparisons should be conducted cautiously. Despite these reservations, there seems to be increasing evidence that bolus calculators may improve glycemic control and treatment satisfaction in patients who use the devices actively and as intended.

  20. Fuzzy Logic Module of Convolutional Neural Network for Handwritten Digits Recognition

    NASA Astrophysics Data System (ADS)

    Popko, E. A.; Weinstein, I. A.

    2016-08-01

    Optical character recognition is one of the important issues in the field of pattern recognition. This paper presents a method for recognizing handwritten digits based on the modeling of a convolutional neural network. An integrated fuzzy logic module based on a structural approach was developed. The resulting system architecture adjusted the output of the neural network to improve the quality of symbol identification. It was shown that the proposed algorithm is flexible, and a high recognition rate of 99.23% was achieved.

  1. Transient electromagnetic modeling of the ZR accelerator water convolute and stack.

    SciTech Connect

    Lehr, Jane Marie; Elizondo-Decanini, Juan Manuel; Turner, C. David; Coats, Rebecca Sue; Bohnhoff, William J.; Pointon, Timothy David; Pasik, Michael Francis; Johnson, William Arthur; Savage, Mark Edward

    2005-06-01

    The ZR accelerator is a refurbishment of Sandia National Laboratories Z accelerator [1]. The ZR accelerator components were designed using electrostatic and circuit modeling tools. Transient electromagnetic modeling has played a complementary role in the analysis of ZR components [2]. In this paper we describe a 3D transient electromagnetic analysis of the ZR water convolute and stack using edge-based finite element techniques.

  2. Proteomic analysis of brush-border membrane vesicles isolated from purified proximal convoluted tubules

    PubMed Central

    Walmsley, Scott J.; Broeckling, Corey; Hess, Ann; Prenni, Jessica

    2010-01-01

    The renal proximal convoluted tubule is the primary site of water, electrolyte and nutrient reabsorption and of active secretion of selected molecules. Proteins in the apical brush-border membrane facilitate these functions and initiate some of the cellular responses to altered renal physiology. The current study uses two-dimensional liquid chromatography/mass spectrometry to compare brush border membrane vesicles isolated from rat renal cortex (BBMVCTX) and from purified proximal convoluted tubules (BBMVPCT). Both proteomic data and Western blot analysis indicate that the BBMVCTX contain apical membrane proteins from cortical cells other than the proximal tubule. This heterogeneity was greatly reduced in the BBMVPCT. Proteomic analysis identified 193 proteins common to both samples, 21 proteins unique to BBMVCTX, and 57 proteins unique to BBMVPCT. Spectral counts were used to quantify relative differences in protein abundance. This analysis identified 42 and 50 proteins that are significantly enriched (p values ≤0.001) in the BBMVCTX and BBMVPCT, respectively. These data were validated by measurement of γ-glutamyltranspeptidase activity and by Western blot analysis. The combined results establish that BBMVPCT are primarily derived from the proximal convoluted tubule (S1 and S2 segments), whereas BBMVCTX include proteins from the proximal straight tubule (S3 segment). Analysis of functional annotations indicated that BBMVPCT are enriched in mitochondrial proteins and enzymes involved in glucose and organic acid metabolism. Thus the current study reports a detailed proteomic analysis of the brush-border membrane of the rat renal proximal convoluted tubule and provides a database for future hypothesis-driven research. PMID:20219825

  3. Convolution-based estimation of organ dose in tube current modulated CT

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Segars, W. P.; Dixon, R. L.; Samei, Ehsan

    2015-03-01

    Among the various metrics that quantify radiation dose in computed tomography (CT), organ dose is one of the most representative quantities reflecting patient-specific radiation burden.1 Accurate estimation of organ dose requires one to effectively model the patient anatomy and the irradiation field. As illustrated in previous studies, the patient anatomy can be modeled using a library of computational phantoms with representative body habitus.2 However, modeling the irradiation field can be practically challenging, especially for CT exams performed with tube current modulation (TCM). The central challenge is to effectively quantify the scatter irradiation field created by the dynamic change of tube current. In this study, we present a convolution-based technique to effectively quantify the primary and scatter irradiation fields for TCM examinations. The organ dose for a given clinical patient can then be rapidly determined using the convolution-based method, a patient-matching technique, and a library of computational phantoms. 58 adult patients were included in this study (age range: 18-70 y.o., weight range: 60-180 kg). One computational phantom was created based on the clinical images of each patient. Each patient was optimally matched against one of the remaining 57 computational phantoms using a leave-one-out strategy. For each computational phantom, the organ dose coefficients (CTDIvol-normalized organ dose) under fixed tube current were simulated using a validated Monte Carlo simulation program. Such organ dose coefficients were multiplied by a scaling factor, (CTDIvol)organ, convolution, that quantifies the regional irradiation field. The convolution-based organ dose was compared with the organ dose simulated from the Monte Carlo program with TCM profiles explicitly modeled on the original phantom created from the patient images. The estimation error was within 10% across all organs and modulation profiles for abdominopelvic examination. This strategy

  4. Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images.

    PubMed

    Cheng, Phillip M; Malhi, Harshawn S

    2017-04-01

    The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracies between both neural networks and the radiologist were statistically significant (p < 0.001). The results demonstrate that transfer learning with convolutional neural networks may be used to construct effective classifiers for abdominal ultrasound images.
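    The frozen-feature-extractor setup described above is straightforward to reproduce in a modern framework. The sketch below uses torchvision's VGG-16 as a stand-in for the paper's Caffe-based networks; the weights identifier, the 11-class head, and the omitted preprocessing and training loop are assumptions of this sketch.

    ```python
    # Transfer learning: freeze convolutional layers, retrain the FC head.
    import torch.nn as nn
    import torchvision.models as models

    model = models.vgg16(weights="IMAGENET1K_V1")   # ImageNet-pretrained

    # Freeze convolutional layers so they act as fixed feature extractors.
    for param in model.features.parameters():
        param.requires_grad = False

    # Replace the final classification layer: 1000 ImageNet classes -> 11.
    model.classifier[6] = nn.Linear(in_features=4096, out_features=11)

    # Only the (retrained) fully connected parameters receive gradients.
    trainable = [p for p in model.parameters() if p.requires_grad]
    ```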

  5. Quantum Fields Obtained from Convoluted Generalized White Noise Never Have Positive Metric

    NASA Astrophysics Data System (ADS)

    Albeverio, Sergio; Gottschalk, Hanno

    2016-05-01

    It is proven that the relativistic quantum fields obtained from analytic continuation of convoluted generalized (Lévy type) noise fields have positive metric, if and only if the noise is Gaussian. This follows as an easy observation from a criterion by Baumann, based on the Dell'Antonio-Robinson-Greenberg theorem, for a relativistic quantum field in positive metric to be a free field.

  6. Convolutional Neural Network on Embedded Linux System-on-Chip: A Methodology and Performance Benchmark

    DTIC Science & Technology

    2016-05-01

    TECHNICAL REPORT 3010, May 2016: Convolutional Neural Network on Embedded Linux® System-on-Chip, A Methodology and Performance Benchmark (author: Daniel...). Snippet fragments from the report: "...heat sink. Note that a final system could be made much smaller than this development board, which has 'wasted' space compared to a board used in a..." and "...[the work] in this report was performed by the IO Support to National Security Branch (Code 56120), the Mission Systems Engineering Branch (Code 56170), and the..."

  7. TH-E-BRE-03: A Novel Method to Account for Ion Chamber Volume Averaging Effect in a Commercial Treatment Planning System Through Convolution

    SciTech Connect

    Barraclough, B; Li, J; Liu, C; Yan, G

    2014-06-15

    Purpose: Fourier-based deconvolution approaches used to eliminate ion chamber volume averaging effect (VAE) suffer from measurement noise. This work aims to investigate a novel method to account for ion chamber VAE through convolution in a commercial treatment planning system (TPS). Methods: Beam profiles of various field sizes and depths of an Elekta Synergy were collected with a finite-size ion chamber (CC13) to derive a clinically acceptable beam model for a commercial TPS (Pinnacle³), following the vendor-recommended modeling process. The TPS-calculated profiles were then externally convolved with a Gaussian function representing the chamber (σ = chamber radius). The agreement between the convolved profiles and measured profiles was evaluated with a one-dimensional Gamma analysis (1%/1 mm) as an objective function for optimization. TPS beam model parameters for the focal and extra-focal sources were optimized and loaded back into the TPS for new calculation. This process was repeated until the objective function converged using a Simplex optimization method. Planar doses of 30 IMRT beams were calculated with both the clinical and the re-optimized beam models and compared with MapCHECK™ measurements to evaluate the new beam model. Results: After re-optimization, the two orthogonal source sizes for the focal source reduced from 0.20/0.16 cm to 0.01/0.01 cm, which were the minimal allowed values in Pinnacle. No significant change in the parameters for the extra-focal source was observed. With the re-optimized beam model, the average Gamma passing rate for the 30 IMRT beams increased from 92.1% to 99.5% with a 3%/3 mm criterion and from 82.6% to 97.2% with a 2%/2 mm criterion. Conclusion: We proposed a novel method to account for ion chamber VAE in a commercial TPS through convolution. The re-optimized beam model, with VAE accounted for through a reliable and easy-to-implement convolution and optimization approach, outperforms the original beam model in standard IMRT QA
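    The external convolution step is simple to emulate. A minimal sketch, assuming a 1D profile on a uniform grid, a Gaussian kernel with sigma equal to the chamber radius (roughly 3 mm for a CC13-class chamber; both the radius and the grid spacing here are assumed values), and SciPy's gaussian_filter1d:

    ```python
    # Blur a TPS-calculated profile with a Gaussian chamber-response kernel
    # so it can be compared against finite-size-chamber measurements.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    dx = 0.5          # profile grid spacing in mm (assumed)
    sigma_mm = 3.0    # Gaussian sigma = chamber radius (assumed)

    def convolve_with_chamber(profile, sigma_mm=sigma_mm, dx=dx):
        """Return the TPS profile convolved with the chamber response."""
        return gaussian_filter1d(profile, sigma=sigma_mm / dx, mode="nearest")

    # Example: an idealized penumbra gets smeared by the chamber volume.
    x = np.arange(-50, 50, dx)
    tps_profile = 1.0 / (1.0 + np.exp(x / 1.5))    # toy penumbra model
    measured_like = convolve_with_chamber(tps_profile)
    ```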

  8. Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection.

    PubMed

    Kim, Jihun; Kim, Jonghong; Jang, Gil-Jin; Lee, Minho

    2017-03-01

    Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a big dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate network weights between output and hidden layers in a single iteration and thus, can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance.
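    The single-iteration weight computation that makes the ELM fast can be written in a few lines. A minimal sketch of the generic ELM solve (sizes and activation are illustrative; this is not the paper's stacked variant):

    ```python
    # Extreme learning machine: random fixed input weights, one-shot
    # least-squares solve (pseudoinverse) for the output weights.
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_features, n_hidden, n_outputs = 1000, 64, 256, 10

    X = rng.standard_normal((n_samples, n_features))   # input features
    T = rng.standard_normal((n_samples, n_outputs))    # training targets

    # Random input weights and biases stay fixed (never trained).
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)

    H = np.tanh(X @ W + b)          # hidden-layer activations
    beta = np.linalg.pinv(H) @ T    # output weights in a single iteration

    predictions = H @ beta          # network output on the training data
    ```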

  9. Convolution modeling of two-domain, nonlinear water-level responses in karst aquifers (Invited)

    NASA Astrophysics Data System (ADS)

    Long, A. J.

    2009-12-01

    Convolution modeling is a useful method for simulating the hydraulic response of water levels to sinking streamflow or precipitation infiltration at the macro scale. This approach is particularly useful in karst aquifers, where the complex geometry of the conduit and pore network is not well characterized but can be represented approximately by a parametric impulse-response function (IRF) with very few parameters. For many applications, one-dimensional convolution models can be equally effective as complex two- or three-dimensional models for analyzing water-level responses to recharge. Moreover, convolution models are well suited for identifying and characterizing the distinct domains of quick flow and slow flow (e.g., conduit flow and diffuse flow). Two superposed lognormal functions were used in the IRF to approximate the impulses of the two flow domains. Nonlinear response characteristics of the flow domains were assessed by observing temporal changes in the IRFs. Precipitation infiltration was simulated by filtering the daily rainfall record with a backward-in-time exponential function that weights each day’s rainfall with the rainfall of previous days and thus accounts for the effects of soil moisture on aquifer infiltration. The model was applied to the Edwards aquifer in Texas and the Madison aquifer in South Dakota. Simulations of both aquifers showed similar characteristics, including a separation on the order of years between the quick-flow and slow-flow IRF peaks and temporal changes in the IRF shapes when water levels increased and empty pore spaces became saturated.
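    The convolution model itself is compact. A minimal sketch with two superposed lognormal impulse-response functions standing in for the quick-flow and slow-flow domains (all parameter values below are illustrative, not fitted to the Edwards or Madison aquifers):

    ```python
    # Two-domain convolution model: daily recharge convolved with an IRF
    # built from two superposed lognormal curves (quick flow + slow flow).
    import numpy as np

    def lognormal_irf(t, mu, sigma):
        """Lognormal impulse response on t > 0 (unnormalized shape)."""
        t = np.asarray(t, dtype=float)
        out = np.zeros_like(t)
        pos = t > 0
        out[pos] = np.exp(-(np.log(t[pos]) - mu) ** 2 / (2 * sigma**2)) / t[pos]
        return out

    t = np.arange(0.0, 2000.0)                          # days
    irf = 0.7 * lognormal_irf(t, mu=3.0, sigma=0.6)     # quick-flow domain
    irf += 0.3 * lognormal_irf(t, mu=6.5, sigma=0.4)    # slow-flow domain
    irf /= irf.sum()                                    # unit-area IRF

    recharge = np.random.default_rng(1).exponential(1.0, 4000)   # toy input
    head_response = np.convolve(recharge, irf)[: recharge.size]  # levels
    ```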

  10. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading.

    PubMed

    Li, Siqi; Jiang, Huiyan; Pang, Wenbo

    2017-03-22

    Accurate cell grading of cancerous tissue pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, each grayscale image patch of fixed size is obtained using the center-proliferation segmentation (CPS) method, and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, taking sufficient account of the multi-scale contextual information in deep layer maps. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sampling method, is utilized to train the MFC-CNN-ELM architecture. The experimental comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance for HCC nuclei grading compared with related works. Meanwhile, external validation using the ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture.

  11. Convolutional neural network approach for buried target recognition in FL-LWIR imagery

    NASA Astrophysics Data System (ADS)

    Stone, K.; Keller, J. M.

    2014-05-01

    A convolutional neural network (CNN) approach to recognition of buried explosive hazards in forward-looking long-wave infrared (FL-LWIR) imagery is presented. The convolutional filters in the first layer of the network are learned in the frequency domain, making enforcement of zero-phase and zero-dc response characteristics much easier. The spatial domain representations of the filters are forced to have unit l2 norm, and penalty terms are added to the online gradient descent update to encourage orthonormality among the convolutional filters, as well as smooth first- and second-order derivatives in the spatial domain. The impact of these modifications on the generalization performance of the CNN model is investigated. The CNN approach is compared to a second recognition algorithm utilizing shearlet and log-Gabor decomposition of the image coupled with cell-structured feature extraction and support vector machine classification. Results are presented for multiple FL-LWIR data sets recently collected from US Army test sites. These data sets include vehicle position information, allowing accurate transformation between image and world coordinates and realistic evaluation of detection and false alarm rates.
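    The orthonormality penalty described above can be sketched directly. A minimal numpy calculation (filter count and size are arbitrary; in training this term would be added to the loss inside the gradient-descent update):

    ```python
    # Flatten each convolutional filter, force unit l2 norm, and penalize
    # deviations of the filters' Gram matrix from the identity.
    import numpy as np

    rng = np.random.default_rng(0)
    n_filters, fh, fw = 16, 7, 7
    W = rng.standard_normal((n_filters, fh * fw))     # flattened filters

    # Unit l2 norm per filter, as the abstract specifies.
    W /= np.linalg.norm(W, axis=1, keepdims=True)

    # Orthonormality penalty: ||W W^T - I||_F^2 is zero exactly when the
    # filters are mutually orthogonal unit vectors.
    gram = W @ W.T
    penalty = np.sum((gram - np.eye(n_filters)) ** 2)
    print(f"orthonormality penalty: {penalty:.4f}")
    ```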

  12. Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification

    PubMed Central

    Yang, Xinyi

    2016-01-01

    In recent years, deep learning methods such as the convolutional neural network (CNN) and the deep belief network (DBN) have been developed and applied to image classification. However, they suffer from problems such as local minima, slow convergence rates, and intensive human intervention. In this paper, we propose a rapid learning method, namely deep convolutional extreme learning machine (DC-ELM), which combines the representational power of CNNs with the fast training of ELMs. It uses multiple alternating convolution and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of the features, thus saving much training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time than deep learning methods and other ELM methods. PMID:27610128

  13. Using hybrid GPU/CPU kernel splitting to accelerate spherical convolutions

    NASA Astrophysics Data System (ADS)

    Sutter, P. M.; Wandelt, B. D.; Elsner, F.

    2015-06-01

    We present a general method for accelerating by more than an order of magnitude the convolution of pixelated functions on the sphere with a radially-symmetric kernel. Our method splits the kernel into a compact real-space component and a compact spherical harmonic space component. These components can then be convolved in parallel using an inexpensive commodity GPU and a CPU. We provide models for the computational cost of both real-space and Fourier space convolutions and an estimate for the approximation error. Using these models we can determine the optimum split that minimizes the wall clock time for the convolution while satisfying the desired error bounds. We apply this technique to the problem of simulating a cosmic microwave background (CMB) anisotropy sky map at the resolution typical of the high resolution maps produced by the Planck mission. For the main Planck CMB science channels we achieve a speedup of over a factor of ten, assuming an acceptable fractional rms error of order 10⁻⁵ in the power spectrum of the output map.

  14. Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification.

    PubMed

    Pang, Shan; Yang, Xinyi

    2016-01-01

    In recent years, deep learning methods such as the convolutional neural network (CNN) and the deep belief network (DBN) have been developed and applied to image classification. However, they suffer from problems such as local minima, slow convergence rates, and intensive human intervention. In this paper, we propose a rapid learning method, namely deep convolutional extreme learning machine (DC-ELM), which combines the representational power of CNNs with the fast training of ELMs. It uses multiple alternating convolution and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of the features, thus saving much training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time than deep learning methods and other ELM methods.

  15. New basis set superposition error free ab initio MO-VB interaction potential: Molecular-dynamics simulation of water at critical and supercritical conditions

    NASA Astrophysics Data System (ADS)

    Famulari, Antonino; Specchio, Roberto; Sironi, Maurizio; Raimondi, Mario

    1998-02-01

    Recently, a controversy has come to light in the literature regarding the structure of water under nonambient conditions. Disagreement is evident between the site-site pair correlation functions of water derived from neutron diffraction and those obtained from computer simulations which employ effective pairwise potentials to express the intermolecular interactions. In this paper the SCFMI method (self-consistent field for molecular interaction) followed by nonorthogonal CI (configuration interaction) calculations was used to determine a new water-water interaction potential, which is free of BSSE (basis set superposition error) in an a priori fashion. Extensive calculations were performed on the water dimer and trimer, and a new parametrization of an NCC-like (Niesar-Corongiu-Clementi) potential was accomplished. This was employed in molecular-dynamics simulations of water. The effect of temperature and density variations was examined. Acceptable agreement between site-site correlation functions derived from neutron diffraction data and from computer simulation was reached. In particular, a weakening of the hydrogen-bonded structure was observed on approaching the critical point, which reproduces the experimental behavior. The simulations were performed using the MOTECC (modern techniques in computational chemistry) suite of programs. The present results show the importance of BSSE-free nonorthogonal orbitals for an accurate description of the intermolecular potential of water.

  16. Enthalpy difference between conformations of normal alkanes: Intramolecular basis set superposition error (BSSE) in the case of n-butane and n-hexane

    NASA Astrophysics Data System (ADS)

    Balabin, Roman M.

    2008-10-01

    In this paper, an extra error source for high-quality ab initio calculation of conformation equilibrium in normal alkanes—intramolecular basis set superposition error (BSSE)—is discussed. Normal butane (n-butane) and normal hexane (n-hexane) are used as representative examples. Single-point energy difference and BSSE values of trans and gauche conformations for n-butane (and trans-trans-trans and gauche-gauche-gauche conformations for n-hexane) were calculated using popular electron correlation methods: the second-order Møller-Plesset (MP2), the fourth-order Møller-Plesset (MP4), and coupled cluster with single and double substitutions with noniterative triple excitation [CCSD(T)] levels of theory. Extrapolation to the complete basis set is applied. The difference between BSSE-corrected and uncorrected relative energy values ranges from ~100 cal/mol (in the case of n-butane) to more than 1000 cal/mol (in the case of n-hexane). The influence of basis set type (Pople or Dunning) and size [up to 6-311G(3df,3pd) and aug-cc-pVQZ] is discussed.
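    For reference, the intermolecular counterpoise (Boys-Bernardi) scheme that such intramolecular corrections generalize can be stated compactly; this is the textbook formulation, not the authors' specific intramolecular partitioning. Writing E_X(Y) for the energy of fragment X evaluated in basis Y:

    ```latex
    % Uncorrected and counterpoise-corrected interaction energies of a dimer AB.
    \Delta E_{\mathrm{int}} = E_{AB}(AB) - E_{A}(A) - E_{B}(B)
    \Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}(AB) - E_{A}(AB) - E_{B}(AB)
    % The BSSE is the difference between the two estimates:
    \delta_{\mathrm{BSSE}} = \Delta E_{\mathrm{int}} - \Delta E_{\mathrm{int}}^{\mathrm{CP}}
    ```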

  17. Convolution-based estimation of organ dose in tube current modulated CT

    PubMed Central

    Tian, Xiaoyu; Segars, W Paul; Dixon, Robert L; Samei, Ehsan

    2016-01-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460–7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18–70 years, weight range: 60–180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (hOrgan) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)organ, convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)organ, convolution with the organ dose coefficients (hOrgan). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and dose simulated using TCM Monte Carlo program was quantified. We further compared the
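    The convolution step at the heart of this framework reduces to weighting a blurred dose-field profile by the organ's spatial distribution. A minimal sketch (every array and coefficient below is a toy placeholder, not data from the study):

    ```python
    # Toy sketch: regional dose field from a TCM profile, convolved with a
    # scatter kernel and weighted by the organ's z-distribution, then scaled
    # by a CTDIvol-normalized organ dose coefficient h_organ.
    import numpy as np

    z = np.arange(0, 400)                          # scan range in mm (assumed)
    organ_dist = np.exp(-((z - 180) / 30.0) ** 2)  # organ z-distribution (toy)
    organ_dist /= organ_dist.sum()

    ctdi_vol_z = 10.0 + 4.0 * np.sin(z / 40.0)     # TCM regional dose field (toy)
    scatter_kernel = np.exp(-np.abs(np.arange(-60, 61)) / 25.0)
    scatter_kernel /= scatter_kernel.sum()

    # Dose field seen by the organ: profile blurred by scatter, then
    # weighted by where the organ actually sits along z.
    field = np.convolve(ctdi_vol_z, scatter_kernel, mode="same")
    ctdi_vol_organ_conv = np.sum(organ_dist * field)

    h_organ = 1.2                                  # dose coefficient (assumed)
    organ_dose = h_organ * ctdi_vol_organ_conv     # estimated organ dose (mGy)
    ```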

  18. Convolution-based estimation of organ dose in tube current modulated CT

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (hOrgan) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)organ, convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)organ, convolution with the organ dose coefficients (hOrgan). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled. The

  19. Generalization of Abel's mechanical problem: The extended isochronicity condition and the superposition principle

    SciTech Connect

    Kinugawa, Tohru

    2014-02-15

    This paper presents a simple but nontrivial generalization of Abel's mechanical problem, based on the extended isochronicity condition and the superposition principle. There are two primary aims. The first one is to reveal the linear relation between the transit-time T and the travel-length X hidden behind the isochronicity problem that is usually discussed in terms of the nonlinear equation of motion d²X/dt² + dU/dX = 0 with U(X) being an unknown potential. Second, the isochronicity condition is extended for the possible Abel-transform approach to designing the isochronous trajectories of charged particles in spectrometers and/or accelerators for time-resolving experiments. Our approach is based on the integral formula for the oscillatory motion by Landau and Lifshitz [Mechanics (Pergamon, Oxford, 1976), pp. 27–29]. The same formula is used to treat the non-periodic motion that is driven by U(X). Specifically, this unknown potential is determined by the (linear) Abel transform X(U) ∝ A[T(E)], where X(U) is the inverse function of U(X), A = (1/√π) ∫₀ᴱ dU/√(E−U) is the so-called Abel operator, and T(E) is the prescribed transit-time for a particle with energy E to spend in the region of interest. Based on this Abel-transform approach, we have introduced the extended isochronicity condition: typically, τ = T_A(E) + T_N(E), where τ is a constant period, T_A(E) is the transit-time in the Abel type [A-type] region spanning X > 0 and T_N(E) is that in the Non-Abel type [N-type] region covering X < 0. As for the A-type region in X > 0, the unknown inverse function X_A(U) is determined from T_A(E) via the Abel-transform relation X_A(U) ∝ A[T_A(E)]. In contrast, the N-type region in X < 0 does not ensure this linear relation: the region is covered with a predetermined potential U_N(X) of some arbitrary choice, not necessarily obeying the Abel-transform relation. In discussing

  20. FAST-PT II: an algorithm to calculate convolution integrals of general tensor quantities in cosmological perturbation theory

    NASA Astrophysics Data System (ADS)

    Fang, Xiao; Blazek, Jonathan A.; McEwen, Joseph E.; Hirata, Christopher M.

    2017-02-01

    Cosmological perturbation theory is a powerful tool to predict the statistics of large-scale structure in the weakly non-linear regime, but even at 1-loop order it results in computationally expensive mode-coupling integrals. Here we present a fast algorithm for computing 1-loop power spectra of quantities that depend on the observer's orientation, thereby generalizing the FAST-PT framework (McEwen et al., 2016) that was originally developed for scalars such as the matter density. This algorithm works for an arbitrary input power spectrum and substantially reduces the time required for numerical evaluation. We apply the algorithm to four examples: intrinsic alignments of galaxies in the tidal torque model; the Ostriker-Vishniac effect; the secondary CMB polarization due to baryon flows; and the 1-loop matter power spectrum in redshift space. Code implementing this algorithm and these applications is publicly available at https://github.com/JoeMcEwen/FAST-PT.

  1. Exploration geophysics calculator programs for use on Hewlett-Packard models 67 and 97 programmable calculators

    USGS Publications Warehouse

    Campbell, David L.; Watts, Raymond D.

    1978-01-01

    Program listings, instructions, and example problems are given for 12 programs for the interpretation of geophysical data, for use on Hewlett-Packard models 67 and 97 programmable hand-held calculators. These are (1) gravity anomaly over 2D prism with ≤ 9 vertices--Talwani method; (2) magnetic anomaly (ΔT, ΔV, or ΔH) over 2D prism with ≤ 8 vertices--Talwani method; (3) total-field magnetic anomaly profile over thick sheet/thin dike; (4) single dipping seismic refractor--interpretation and design; (5) ≤ 4 dipping seismic refractors--interpretation; (6) ≤ 4 dipping seismic refractors--design; (7) vertical electrical sounding over ≤ 10 horizontal layers--Schlumberger or Wenner forward calculation; (8) vertical electrical sounding: Dar Zarrouk calculations; (9) magnetotelluric plane-wave apparent conductivity and phase angle over ≤ 9 horizontal layers--forward calculation; (10) petrophysics: a.c. electrical parameters; (11) petrophysics: elastic constants; (12) digital convolution with ≤ 10-length filter.
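    Program (12), digital convolution with a filter of length ≤ 10, maps directly onto a one-line call today. A minimal sketch with an illustrative 3-tap smoothing filter:

    ```python
    # Digital convolution of a data series with a short filter, the modern
    # equivalent of the calculator program's running weighted sum.
    import numpy as np

    signal = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 9.0, 4.0, 1.0, 0.0])
    filt = np.array([0.25, 0.5, 0.25])   # 3-point smoother (<= 10 taps)

    smoothed = np.convolve(signal, filt, mode="same")
    print(smoothed)
    ```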

  2. Superposition well-test method for reservoir characterization and pressure management during CO2 injection

    NASA Astrophysics Data System (ADS)

    White, J. A.

    2014-12-01

    As a significant fraction of a carbon storage project's budget is devoted to site characterization and monitoring, there has been an intense drive in recent years to both lower cost and improve the quality of data obtained. Two data streams that are cheap and always available are pressure and flow rate measurements from the injection well. Falloff testing, in which the well is shut-in for some period of time and the pressure decline curve measured, is often used to probe the storage zone and look for indications of hydraulic barriers, fracture-dominated flow, and other reservoir characteristics. These tests can be used to monitor many hydromechanical processes of interest, including hydraulic fracturing and fault reactivation. Unfortunately, the length of the shut-in period controls how far away from the injector information may be obtained. For operational reasons these tests are typically kept short and infrequent, limiting their usefulness. In this work, we present a new analysis method in which ongoing injection data is used to reconstruct an equivalent falloff test, without shutting in the well. The entire history of injection may therefore be used as a stand in for a very long test. The method relies upon a simple superposition principle to transform a multi-rate injection sequence into an equivalent single-rate process. We demonstrate the effectiveness of the method using injection data from the Snøhvit storage project. We also explore its utility in an active pressure management scenario. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
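    The superposition principle the method relies on is easy to illustrate: the pressure response to a multi-rate schedule is the sum of unit responses to each rate change. The sketch below uses a textbook line-source (log-time) unit response as a stand-in kernel; the rate schedule and all constants are invented for illustration and are unrelated to the Snøhvit data:

    ```python
    # Superpose rate-step responses to turn a multi-rate injection history
    # into an equivalent single-rate pressure record.
    import numpy as np

    def unit_response(t, m=1.0):
        """Pressure change per unit rate step (log-time line-source form)."""
        return np.where(t > 0.0, m * np.log(np.maximum(t, 1e-12)), 0.0)

    # Rate schedule: (start time in days, injection rate, arbitrary units).
    steps = [(0.0, 100.0), (30.0, 150.0), (60.0, 120.0), (90.0, 140.0)]

    t = np.linspace(0.1, 180.0, 1800)
    dp = np.zeros_like(t)
    prev_rate = 0.0
    for t_i, q_i in steps:
        # Each rate change (q_i - q_{i-1}) at t_i adds its own response.
        dp += (q_i - prev_rate) * unit_response(t - t_i)
        prev_rate = q_i

    # dp now plays the role of an equivalent single-rate pressure history
    # that can be analyzed like a conventional falloff decline curve.
    ```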

  3. Bioeffects induced by exposure to microwaves are mitigated by superposition of ELF noise.

    PubMed

    Litovitz, T A; Penafiel, L M; Farrel, J M; Krause, D; Meister, R; Mullins, J M

    1997-01-01

    We have previously demonstrated that microwave fields, amplitude modulated (AM) by an extremely low-frequency (ELF) sine wave, can induce a nearly twofold enhancement in the activity of ornithine decarboxylase (ODC) in L929 cells at SAR levels of the order of 2.5 W/kg. Similar, although less pronounced, effects were also observed from exposure to a typical digital cellular phone test signal of the same power level, burst modulated at 50 Hz. We have also shown that ODC enhancement in L929 cells produced by exposure to ELF fields can be inhibited by superposition of ELF noise. In the present study, we explore the possibility that similar inhibition techniques can be used to suppress the microwave response. We concurrently exposed L929 cells to 60 Hz AM microwave fields or a 50 Hz burst-modulated DAMPS (Digital Advanced Mobile Phone System) digital cellular phone field at levels known to produce ODC enhancement, together with band-limited 30-100 Hz ELF noise with root mean square amplitude of up to 10 microT. All exposures were carried out for 8 h, which was previously found to yield the peak microwave response. In both cases, the ODC enhancement was found to decrease exponentially as a function of the noise root mean square amplitude. With 60 Hz AM microwaves, complete inhibition was obtained with noise levels at or above 2 microT. With the DAMPS digital cellular phone signal, complete inhibition occurred with noise levels at or above 5 microT. These results suggest a possible practical means to inhibit biological effects from exposure to both ELF and microwave fields.

  4. Large-field-of-view wide-spectrum artificial reflecting superposition compound eyes

    NASA Astrophysics Data System (ADS)

    Huang, Chi-Chieh

    The study of the imaging principles of natural compound eyes has become an active area of research and has fueled the advancement of modern optics with many attractive design features beyond those available with conventional technologies. Most prominent among all compound eyes are the reflecting superposition compound eyes (RSCEs) found in some decapods. They are extraordinary imaging systems with numerous optical features such as minimum chromatic aberration, wide-angle field of view (FOV), high sensitivity to light and superb acuity to motion. Inspired by their remarkable visual system, we were able to implement their unique lens-free, reflection-based imaging mechanism in a miniaturized, large-FOV optical imaging device operating across the wide visible spectrum to minimize chromatic aberration without any additional post-image processing. First, two micro-transfer printing methods, a multiple and a shear-assisted transfer printing technique, were studied and discussed as routes to realizing life-sized artificial RSCEs. The processes exploited the differential adhesive tendencies of the microstructures formed between a donor and a transfer substrate to accomplish an efficient release and transfer process. These techniques enabled conformal wrapping of three-dimensional (3-D) microstructures, initially fabricated in two-dimensional (2-D) layouts with standard fabrication technology, onto a wide range of surfaces with complex and curvilinear shapes. The final part of this dissertation focused on implementing the key operational features of the natural RSCEs in large-FOV, wide-spectrum artificial RSCEs as an optical imaging device suitable for the wide visible spectrum. Our devices can form real, clear images based on reflection rather than refraction, hence avoiding chromatic aberration due to dispersion by the optical materials. Compared to the performance of conventional refractive lenses of comparable size, our devices demonstrated minimum chromatic aberration, exceptional

  5. Model calculations of spectral transmission for the CLAES etalons

    NASA Technical Reports Server (NTRS)

    James, T. C.; Roche, A. E.; Kumer, J. B.

    1989-01-01

    This paper describes models for calculating spectral transmission for the Cryogenic-Limb-Array-Etalon-Spectrometer (CLAES) etalons. These models involve a convolution of the Airy function for a given thickness with the distribution of surface thicknesses, the effect of absorption in the substrate, and the field of view broadening as a function of etalon tilt angle. A comparison of model calculations with experimental transmission data for CLAES etalons centered at 3.52, 5.72, 8.0, and 11.86 microns showed that these models are able to provide a good description of the CLAES etalons.
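    The first of these model ingredients, convolving the Airy function with a thickness distribution, can be sketched numerically. All optical constants below (refractive index, coefficient of finesse, nominal gap, and thickness spread) are assumed values, and substrate absorption and tilt broadening are omitted:

    ```python
    # Average an ideal Fabry-Perot (Airy) transmission over a Gaussian
    # distribution of etalon thicknesses.
    import numpy as np

    def airy_transmission(wavelength_um, thickness_um, n=1.45, F=30.0):
        """Ideal Airy transmission at normal incidence (no absorption)."""
        half_phase = 2.0 * np.pi * n * thickness_um / wavelength_um
        return 1.0 / (1.0 + F * np.sin(half_phase) ** 2)

    wavelengths = np.linspace(3.50, 3.54, 2000)    # near the 3.52 um channel

    # Gaussian spread of surface thickness about the nominal etalon gap.
    d0, sigma_d = 100.0, 0.002                     # um (assumed)
    thicknesses = np.linspace(d0 - 4 * sigma_d, d0 + 4 * sigma_d, 81)
    weights = np.exp(-((thicknesses - d0) / sigma_d) ** 2 / 2.0)
    weights /= weights.sum()

    # Convolution of the Airy function with the thickness distribution.
    T = sum(w * airy_transmission(wavelengths, d)
            for w, d in zip(weights, thicknesses))
    ```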

  6. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high-resolution data acquisition technologies, many different approaches and algorithms have been presented for extracting accurate and timely updated 3D building models, a key element of city structures, for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for the automatic recognition of building roof models, such as flat, gable, hip, and pyramid-hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised, pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that help the convolutional neural network localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network built on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure, and it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time of learning can be decreased significantly using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach for detecting and extracting the pattern of building roofs automatically, considering the complementary nature of height and RGB information.

  7. BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment.

    PubMed

    Kawahara, Jeremy; Brown, Colin J; Miller, Steven P; Booth, Brian G; Chau, Vann; Grunau, Ruth E; Zwicker, Jill G; Hamarneh, Ghassan

    2017-02-01

    We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain.

  8. On the mechanism of parathyroid hormone stimulation of calcium uptake by mouse distal convoluted tubule cells.

    PubMed Central

    Gesek, F A; Friedman, P A

    1992-01-01

    PTH stimulates transcellular Ca²⁺ absorption in renal distal convoluted tubules. The effect of PTH on membrane voltage, the ionic basis of the change in voltage, and the relations between voltage and calcium entry were determined on immortalized mouse distal convoluted tubule cells. PTH (10⁻⁸ M) significantly increased ⁴⁵Ca²⁺ uptake from basal levels of 2.81 +/- 0.16 to 3.88 +/- 0.19 nmol min⁻¹ mg protein⁻¹. PTH-induced ⁴⁵Ca²⁺ uptake was abolished by the dihydropyridine antagonist, nifedipine (10⁻⁵ M). PTH did not affect ²²Na⁺ uptake. Intracellular calcium activity ([Ca²⁺]i) was measured in cells loaded with fura-2. Control [Ca²⁺]i averaged 112 +/- 21 nM. PTH increased [Ca²⁺]i over the range of 10⁻¹¹ to 10⁻⁷ M. Maximal stimulation to 326 +/- 31 nM was achieved at 10⁻⁸ M PTH. Resting membrane voltage measured with the potential-sensitive dye DiOC6(3) averaged -71 +/- 2 mV. PTH hyperpolarized cells by 19 +/- 4 mV. The chloride-channel blocker NPPB prevented PTH-induced hyperpolarization. PTH decreased and NPPB increased intracellular chloride, measured with the fluorescent dye SPQ. Chloride permeability was estimated by measuring the rate of ¹²⁵I⁻ efflux. PTH increased ¹²⁵I⁻ efflux and this effect was blocked by NPPB. Clamping voltage with K⁺/valinomycin, depolarizing membrane voltage by reducing extracellular chloride, or addition of NPPB prevented PTH-induced calcium uptake. In conclusion, PTH increases chloride conductance in distal convoluted tubule cells, leading to decreased intracellular chloride activity, membrane hyperpolarization, and increased calcium entry through dihydropyridine-sensitive calcium channels. PMID:1522230

  9. Dense Semantic Labeling of Subdecimeter Resolution Images With Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Volpi, Michele; Tuia, Devis

    2017-02-01

    Semantic labeling (or pixel-level land-cover classification) in ultra-high-resolution imagery (<10 cm) requires statistical models able to learn high-level concepts from spatial data with large appearance variations. Convolutional neural networks (CNNs) achieve this goal by discriminatively learning a hierarchy of representations of increasing abstraction. In this paper we present a CNN-based system relying on a downsample-then-upsample architecture. Specifically, it first learns a rough spatial map of high-level representations by means of convolutions and then learns to upsample them back to the original resolution by deconvolutions. By doing so, the CNN learns to densely label every pixel at the original resolution of the image. This results in many advantages, including (i) state-of-the-art numerical accuracy, (ii) improved geometric accuracy of predictions and (iii) high efficiency at inference time. We test the proposed system on the Vaihingen and Potsdam sub-decimeter resolution datasets, involving the semantic labeling of aerial images of 9 cm and 5 cm resolution, respectively. These datasets are composed of many large, fully annotated tiles, allowing an unbiased evaluation of models that make use of spatial information. We compare two standard CNN architectures to the proposed one: standard patch classification, and prediction of local label patches employing only convolutions, versus full patch labeling employing deconvolutions. All the systems compare favorably with or outperform a state-of-the-art baseline relying on superpixels and powerful appearance descriptors. The proposed full patch labeling CNN outperforms these models by a large margin, while also showing a very appealing inference time.
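
    A minimal Keras sketch of such a downsample-then-upsample architecture; layer counts and sizes are illustrative, not the authors' configuration:

        import tensorflow as tf
        from tensorflow.keras import layers, models

        def downsample_upsample_cnn(n_classes, size=128, channels=3):
            """Minimal downsample-then-upsample dense-labeling CNN.

            Convolutions learn a coarse map of high-level features; transposed
            convolutions ('deconvolutions') learn to upsample it back so every
            pixel of the input receives a label.
            """
            inp = layers.Input((size, size, channels))
            x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
            x = layers.MaxPooling2D(2)(x)                       # 1/2 resolution
            x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
            x = layers.MaxPooling2D(2)(x)                       # 1/4 resolution
            x = layers.Conv2DTranspose(64, 3, strides=2, padding="same",
                                       activation="relu")(x)    # back to 1/2
            x = layers.Conv2DTranspose(32, 3, strides=2, padding="same",
                                       activation="relu")(x)    # full resolution
            out = layers.Conv2D(n_classes, 1, activation="softmax")(x)  # per-pixel labels
            return models.Model(inp, out)

        model = downsample_upsample_cnn(n_classes=6)
        model.summary()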

  10. Distribution of effective field in the Ising spin glass of the +/- J model at T = 0. Solutions as superpositions of delta functions.

    PubMed

    Katsura, S; Fukuda, W; Inawashiro, S; Fujiki, N M; Gebauer, R

    1987-12-01

    The integral equation for the distribution function of the effective field of the +/- J random Ising model in the pair (Bethe) approximation is investigated. Its exact solutions at H (magnetic field) = 0 and T (temperature) = 0 for z (coordination number) = 3, expressed as superpositions of 2N + 1 (three or more) delta functions, are considered. The integral equation is then reduced to a system of algebraic equations of degree z - 1 with N + 1 unknowns. The system of equations is solved by the Gröbner basis method for N = 1, 2, 3, 4. The number of physically acceptable solutions for a given N is omega(N) + 1, where omega(N) is the number of divisors of N. The ground-state energy and entropy for these solutions are calculated. They are very close in value (the entropies are positive), and it is suggested that the physically acceptable solutions correspond to local stationary spin-glass states, as discussed in the literature.
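
    For illustration, a toy polynomial system solved with SymPy's Gröbner basis routine, the same technique applied to the (much larger) systems above; the polynomials here are placeholders, not the paper's equations:

        from sympy import groebner, symbols

        # Illustrative only: a small polynomial system put into triangular
        # form via a Groebner basis, from which all solutions can be read off.
        x, y = symbols("x y")
        system = [x**2 + y**2 - 1, x - y]
        gb = groebner(system, x, y, order="lex")
        print(gb)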

  11. Engineering chiral density waves and topological band structures by multiple-Q superpositions of collinear up-up-down-down orders

    NASA Astrophysics Data System (ADS)

    Hayami, Satoru; Ozawa, Ryo; Motome, Yukitoshi

    2016-07-01

    Magnetic orders characterized by multiple ordering vectors harbor noncollinear and noncoplanar spin textures and can be a source of unusual electronic properties through the spin Berry phase mechanism. We theoretically show that such multiple-Q states are stabilized in itinerant magnets in the form of superpositions of collinear up-up-down-down (UUDD) spin states, which are accompanied by density waves of vector and scalar chirality. This result is obtained by examining the ground state of the Kondo lattice model with classical localized moments, especially when the Fermi surface is tuned to be partially nested by the symmetry-related commensurate vectors. Using perturbation theory and variational calculations, we unveil an instability toward a double-Q UUDD state with vector chirality density waves on the square lattice and a triple-Q UUDD state with scalar chirality density waves on the triangular lattice. The former double-Q state is also confirmed by large-scale Langevin dynamics simulations. We also show that, for sufficiently large exchange coupling, the chirality density waves can induce rich nontrivial topology in the electronic structure, such as a massless Dirac semimetal, a Chern insulator with quantized topological Hall response, and peculiar edge states that depend on the phase of the chirality density waves at the edges.
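
    A toy numpy construction of a double-Q superposition of orthogonal UUDD waves, showing how the resulting texture carries an oscillating vector chirality; amplitudes and phases are arbitrary choices, not the model's ground state:

        import numpy as np

        # Each UUDD wave has period 4 along its direction; putting the two
        # waves into orthogonal spin components makes the texture noncollinear,
        # so the local vector chirality S_i x S_j oscillates in space.
        L = 8
        x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
        uudd = lambda n: np.sign(np.cos(np.pi / 2 * n + np.pi / 4))  # ...up,up,down,down...

        S = np.zeros((L, L, 3))
        S[..., 0] = uudd(x)          # Q1 = (pi/2, 0) wave in the Sx component
        S[..., 1] = uudd(y)          # Q2 = (0, pi/2) wave in the Sy component
        S /= np.linalg.norm(S, axis=-1, keepdims=True)

        # Vector chirality on horizontal bonds: S_r x S_{r+x}
        chi = np.cross(S, np.roll(S, -1, axis=0))
        print(chi[..., 2])           # z-component oscillates -> chirality density wave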

  12. A nonlinear training set superposition filter derived by neural network training methods for implementation in a shift-invariant optical correlator

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Birch, Philip M.; Chatwin, Christopher R.

    2003-08-01

    The various types of synthetic discriminant function (sdf) filter result in a weighted linear superposition of the training set images. Neural network training procedures result in a non-linear superposition of the training set images or, effectively, a feature extraction process, which leads to better interpolation properties than achievable with the sdf filter. However, shift invariance is generally lost, since a data-dependent non-linear weighting function is incorporated in the input data window. As a compromise, we train a non-linear superposition filter via neural network methods with the constraint of a linear input to allow for shift invariance. The filter can then be used in a frequency-domain-based optical correlator. Simulation results are presented that demonstrate the improved training set interpolation achieved by the non-linear filter as compared to a linear superposition filter.
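
    A sketch of the linear-superposition baseline and its shift invariance, with hypothetical training images and uniform weights standing in for the trained ones:

        import numpy as np

        # An SDF-style filter is a weighted sum of training images, applied by
        # frequency-domain correlation -- which is what keeps it shift-invariant.
        rng = np.random.default_rng(1)
        train = rng.random((5, 64, 64))            # hypothetical training set
        weights = np.ones(5) / 5                   # hypothetical linear weights
        h = np.tensordot(weights, train, axes=1)   # composite filter

        scene = np.zeros((64, 64))
        scene[10:42, 20:52] = train[0][:32, :32]   # target embedded off-center

        corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(h))).real
        peak = np.unravel_index(corr.argmax(), corr.shape)
        print("correlation peak at", peak)         # peak location tracks the target shift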

  13. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    DOE PAGES

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; ...

    2014-12-08

    Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitation of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed
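
    A sketch of the lumped-inductor translation described above, with toy waveforms and an assumed MITL inductance in place of measured Z data:

        import numpy as np

        # The voltage at the convolute is estimated from the stack voltage by
        # subtracting the inductive drop L*dI/dt of the intervening section.
        # L_mitl and both waveforms are placeholders, not Z accelerator data.
        t = np.linspace(0, 300e-9, 3000)                # 300 ns record
        V_stack = 2.0e6 * np.sin(np.pi * t / 300e-9)    # toy stack voltage [V]
        I = 20e6 * np.sin(np.pi * t / 300e-9) ** 2      # toy current [A]
        L_mitl = 10e-9                                  # assumed inductance [H]

        V_convolute = V_stack - L_mitl * np.gradient(I, t)
        print(V_convolute.max() / 1e6, "MV peak (toy numbers)")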

  14. convolve_image.pro: Common-Resolution Convolution Kernels for Space- and Ground-Based Telescopes

    NASA Astrophysics Data System (ADS)

    Aniano, Gonzalo J.

    2014-01-01

    The IDL package convolve_image.pro transforms images between different instrumental point spread functions (PSFs). It can load an image file and corresponding kernel and return the convolved image, thus preserving the colors of the astronomical sources. Convolution kernels are available for images from Spitzer (IRAC, MIPS), Herschel (PACS, SPIRE), GALEX (FUV, NUV), WISE (W1-W4), optical PSFs (multi-Gaussian and Moffat functions), and Gaussian PSFs; they allow the study of the Spectral Energy Distribution (SED) of extended objects and preserve the characteristic SED in each pixel.
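
    A rough Python analogue of the package's kernel-based PSF matching, using astropy and photutils with synthetic Gaussian PSFs; the published kernels are derived from the instruments' real PSFs:

        import numpy as np
        from astropy.convolution import Gaussian2DKernel, convolve_fft
        from photutils.psf.matching import create_matching_kernel, TopHatWindow

        # Build a kernel that maps a sharper PSF onto a broader one, then
        # convolve the image so its effective resolution (and hence the
        # per-pixel SED) matches the target band.
        psf_sharp = Gaussian2DKernel(2.0, x_size=25, y_size=25).array   # source PSF
        psf_broad = Gaussian2DKernel(4.0, x_size=25, y_size=25).array   # target PSF

        kernel = create_matching_kernel(psf_sharp, psf_broad,
                                        window=TopHatWindow(0.35))  # damp noisy ratios
        image = np.random.default_rng(2).random((100, 100))         # stand-in image
        matched = convolve_fft(image, kernel)
        print(matched.shape)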

  15. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision.

    PubMed

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer the features learned by CDBNs on generic source tasks to the object tracking tasks, using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method.

  16. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision

    PubMed Central

    Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer the features learned by CDBNs on generic source tasks to the object tracking tasks, using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method. PMID:27847827

  17. Using convolutional neural networks for human activity classification on micro-Doppler radar spectrograms

    NASA Astrophysics Data System (ADS)

    Jordan, Tyler S.

    2016-05-01

    This paper presents the findings of using convolutional neural networks (CNNs) to classify human activity from micro-Doppler features. Activities involving potential security threats, such as holding a gun, receive particular emphasis. An automotive 24 GHz radar-on-chip was used to collect the data, and a CNN (normally applied to image classification) was trained on the resulting spectrograms. The CNN achieves an error rate of 1.65% on classifying running vs. walking, 17.3% on armed walking vs. unarmed walking, and 22% on classifying six different actions.
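
    A toy version of the front end: a gait-modulated radar return converted to the spectrogram image a CNN would classify; all parameters are illustrative, not the 24 GHz radar settings:

        import numpy as np
        from scipy.signal import spectrogram

        fs = 2000.0
        t = np.arange(0, 2, 1 / fs)
        doppler = 200 + 80 * np.sin(2 * np.pi * 2 * t)       # torso + limb swing [Hz]
        sig = np.cos(2 * np.pi * np.cumsum(doppler) / fs)    # FM radar return

        f, tt, Sxx = spectrogram(sig, fs=fs, nperseg=128, noverlap=96)
        image = 10 * np.log10(Sxx + 1e-12)                   # dB image fed to the CNN
        print(image.shape)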

  18. Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding

    PubMed Central

    Johnson, Rie; Zhang, Tong

    2016-01-01

    This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks. PMID:27087766

  19. tf_unet: Generic convolutional neural network U-Net implementation in Tensorflow

    NASA Astrophysics Data System (ADS)

    Akeret, Joel; Chang, Chihway; Lucchi, Aurelien; Refregier, Alexandre

    2016-11-01

    tf_unet mitigates radio frequency interference (RFI) signals in radio data using a special type of convolutional neural network, the U-Net, which enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. The code is not tied to a specific segmentation task and can be used, for example, to detect radio frequency interference in radio astronomy or galaxies and stars in wide-field imaging data. This U-Net implementation can outperform classical RFI mitigation algorithms.
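
    Not the tf_unet API itself, just a minimal Keras sketch of the U-Net pattern it implements (contracting path, expanding path, skip connection), with illustrative layer sizes:

        import tensorflow as tf
        from tensorflow.keras import layers, models

        def mini_unet(h=64, w=64, channels=1, n_class=2):
            inp = layers.Input((h, w, channels))
            c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
            p1 = layers.MaxPooling2D(2)(c1)                      # contract
            c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
            u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c2)
            m = layers.Concatenate()([u1, c1])                   # skip connection
            c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(m)
            out = layers.Conv2D(n_class, 1, activation="softmax")(c3)  # e.g. clean vs RFI
            return models.Model(inp, out)

        mini_unet().summary()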

  20. Multi-Scale Rotation-Invariant Convolutional Neural Networks for Lung Texture Classification.

    PubMed

    Wang, Qiangchang; Zheng, Yuanjie; Yang, Gongping; Jin, Weidong; Chen, Xinjian; Yin, Yilong

    2017-03-21

    We propose a new Multi-scale Rotation-invariant Convolutional Neural Network (MRCNN) model for classifying various lung tissue types on high-resolution computed tomography (HRCT). MRCNN employs the Gabor local binary pattern (Gabor-LBP), which introduces a property valuable in image analysis: invariance to image scale and rotation. In addition, we offer an approach to the problem, present in most existing works, of imbalanced numbers of samples between classes, accomplished by changing the overlap between adjacent patches. Experimental results on a public Interstitial Lung Disease (ILD) database show that the proposed method outperforms the state of the art.
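
    A sketch of a Gabor-LBP texture descriptor using scikit-image; the scales, orientations and histogram binning are assumptions, not the MRCNN configuration:

        import numpy as np
        from skimage.filters import gabor
        from skimage.feature import local_binary_pattern

        # Gabor filtering at several scales/orientations, then a rotation-
        # invariant LBP histogram on each response.
        rng = np.random.default_rng(3)
        patch = rng.random((64, 64))                   # stand-in HRCT patch

        features = []
        for frequency in (0.1, 0.2, 0.4):              # multiple scales
            for theta in np.linspace(0, np.pi, 4, endpoint=False):  # orientations
                real, _ = gabor(patch, frequency=frequency, theta=theta)
                lbp = local_binary_pattern(real, P=8, R=1, method="uniform")
                hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
                features.append(hist)
        print(np.concatenate(features).shape)          # texture descriptor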

  1. Satellite image processing for precision agriculture and agroindustry using convolutional neural network and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Firdaus; Arkeman, Y.; Buono, A.; Hermadi, I.

    2017-01-01

    Translating satellite imagery into data useful for decision making has typically been done manually by humans. In this research, we translate satellite imagery using artificial intelligence methods, specifically a convolutional neural network and a genetic algorithm, into data useful for decision making in precision agriculture and agroindustry. We focus on producing a sustainable land-use plan with three objectives: maximizing economic return, minimizing CO2 emissions, and minimizing land degradation. Results show that the artificial intelligence methods can produce good Pareto-optimal solutions in a short time.
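
    A minimal Pareto-front extraction over the three stated objectives, with random candidates standing in for GA individuals:

        import numpy as np

        # By negating profit, every objective becomes "minimize".
        rng = np.random.default_rng(4)
        profit, co2, degr = rng.random((3, 50))
        costs = np.stack([-profit, co2, degr], axis=1)   # all minimized

        def pareto_front(costs):
            """Return indices of non-dominated rows of the cost matrix."""
            keep = []
            for i, c in enumerate(costs):
                dominated = np.any(np.all(costs <= c, axis=1) &
                                   np.any(costs < c, axis=1))
                if not dominated:
                    keep.append(i)
            return np.array(keep)

        print(pareto_front(costs))   # GA selection keeps these solutions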

  2. Parity retransmission hybrid ARQ using rate 1/2 convolutional codes on a nonstationary channel

    NASA Technical Reports Server (NTRS)

    Lugand, Laurent R.; Costello, Daniel J., Jr.; Deng, Robert H.

    1989-01-01

    A parity retransmission hybrid automatic repeat request (ARQ) scheme is proposed which uses rate 1/2 convolutional codes and Viterbi decoding. A protocol is described which is capable of achieving higher throughputs than previously proposed parity retransmission schemes. The performance analysis is based on a two-state Markov model of a nonstationary channel. This model constitutes a first approximation to a nonstationary channel. The two-state channel model is used to analyze the throughput and undetected error probability of the protocol presented when the receiver has both an infinite and a finite buffer size. It is shown that the throughput improves as the channel becomes more bursty.
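
    A sketch of the two-state Markov channel, with illustrative transition and per-state error probabilities:

        import numpy as np

        # Gilbert-Elliott style model: a 'good' state and a 'bad' bursty
        # state, each with its own bit-error rate. All numbers are toy values.
        rng = np.random.default_rng(5)
        p_gb, p_bg = 0.01, 0.2          # good->bad, bad->good transition probs
        ber = {0: 1e-4, 1: 5e-2}        # per-state bit-error rates

        state, errors = 0, []
        for _ in range(100_000):
            if rng.random() < (p_gb if state == 0 else p_bg):
                state ^= 1              # Markov state transition
            errors.append(rng.random() < ber[state])

        print("average BER:", np.mean(errors))   # bursts raise this above 1e-4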

  3. Diffuse dispersive delay and the time convolution/attenuation of transients

    NASA Technical Reports Server (NTRS)

    Bittner, Burt J.

    1991-01-01

    Test data and analytic evaluations are presented to show that relatively poor 100 kHz shielding of 12 dB can effectively provide an electromagnetic pulse transient reduction of 100 dB. More importantly, several techniques are shown for lightning surge attenuation as an alternative to crowbar, spark gap, or power Zener type clipping, which simply reflects the surge. A time-delay test method is shown which allows CW testing, along with a convolution program to define transient shielding effectiveness where the Fourier phase characteristics of the transient are known or can be broadly estimated.
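
    A sketch of the CW-to-transient convolution idea: apply a measured transfer function in the frequency domain to an estimated transient; the first-order roll-off below is a placeholder for measured shielding data:

        import numpy as np

        fs = 100e6
        t = np.arange(0, 20e-6, 1 / fs)
        pulse = np.exp(-t / 1e-6) - np.exp(-t / 0.1e-6)     # double-exponential EMP-like pulse

        f = np.fft.rfftfreq(t.size, 1 / fs)
        H = 1 / (1 + 1j * f / 100e3)        # assumed transfer fn: roll-off above 100 kHz

        # Multiplication in frequency = convolution in time with the impulse response
        shielded = np.fft.irfft(np.fft.rfft(pulse) * H, n=t.size)
        print(20 * np.log10(shielded.max() / pulse.max()), "dB peak reduction")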

  4. Architectural style classification of Mexican historical buildings using deep convolutional neural networks and sparse features

    NASA Astrophysics Data System (ADS)

    Obeso, Abraham Montoya; Benois-Pineau, Jenny; Acosta, Alejandro Álvaro Ramirez; Vázquez, Mireya Saraí García

    2017-01-01

    We propose a convolutional neural network to classify images of buildings using sparse features at the network's input in conjunction with primary color pixel values. As a result, a trained neural model is obtained that classifies Mexican buildings into three classes according to architectural style: prehispanic, colonial, and modern, with an accuracy of 88.01%. We face the problem of limited training data due to the unequal availability of cultural material, and propose a data augmentation and oversampling method to solve it. The results are encouraging and allow for prefiltering of the content in search tasks.

  5. Processing circuit with asymmetry corrector and convolutional encoder for digital data

    NASA Technical Reports Server (NTRS)

    Pfiffner, Harold J. (Inventor)

    1987-01-01

    A processing circuit is provided for correcting for input parameter variations, such as data and clock signal symmetry, phase offset and jitter, noise and signal amplitude, in incoming data signals. An asymmetry corrector circuit performs the correcting function and furnishes the corrected data signals to a convolutional encoder circuit. The corrector circuit further forms a regenerated clock signal from clock pulses in the incoming data signals and another clock signal at a multiple of the incoming clock signal. These clock signals are furnished to the encoder circuit so that encoded data may be furnished to a modulator at a high data rate for transmission.
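
    A minimal rate-1/2 convolutional encoder of the kind such a circuit feeds; the constraint-length-3 generators are a textbook choice, not taken from the patent:

        # Each input bit shifts into a register and two generator taps produce
        # two output bits (rate 1/2). Generators (7, 5 octal) are illustrative.
        def conv_encode(bits, g1=0b111, g2=0b101, k=3):
            state = 0
            out = []
            for b in bits:
                state = ((state << 1) | b) & ((1 << k) - 1)  # shift in new bit
                out.append(bin(state & g1).count("1") % 2)   # parity of tapped bits
                out.append(bin(state & g2).count("1") % 2)
            return out

        print(conv_encode([1, 0, 1, 1]))   # 8 coded bits for 4 input bits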

  6. Models For Diffracting Aperture Identification : A Comparison Between Ideal And Convolutional Observations

    NASA Astrophysics Data System (ADS)

    Crosta, Giovanni

    1983-09-01

    We consider a number of inverse diffraction problems where different models are compared. Ideal measurements yield Cauchy data, to which corresponds a unique solution. If a convolutional observation map is chosen, uniqueness can no longer be ensured. We also briefly examine a non-linear, non-invertible observation map, which describes a quadratic detector. In all of these cases we discuss the link between aperture identification and optimal control theory, which leads to regularised functional minimisation. This task can be performed by a discrete gradient algorithm, of which we give the flow chart.

  7. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error-probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  8. Revision of the theory of tracer transport and the convolution model of dynamic contrast enhanced magnetic resonance imaging

    PubMed Central

    Bammer, Roland; Stollberger, Rudolf

    2012-01-01

    Counterexamples are used to motivate the revision of the established theory of tracer transport. Then dynamic contrast enhanced magnetic resonance imaging in particular is conceptualized in terms of a fully distributed convection–diffusion model from which a widely used convolution model is derived using, alternatively, compartmental discretizations or semigroup theory. On this basis, applications and limitations of the convolution model are identified. For instance, it is proved that perfusion and tissue exchange states cannot be identified on the basis of a single convolution equation alone. Yet under certain assumptions, particularly that flux is purely convective at the boundary of a tissue region, physiological parameters such as mean transit time, effective volume fraction, and volumetric flow rate per unit tissue volume can be deduced from the kernel. PMID:17429633
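
    The convolution model in miniature: tissue concentration as the arterial input function convolved with a residue kernel; the exponential kernel and all constants are generic placeholders:

        import numpy as np

        dt = 1.0                                    # s per sample
        t = np.arange(0, 300, dt)
        aif = (t / 10) * np.exp(-t / 10)            # toy arterial input function
        F, mtt = 0.01, 60.0                         # flow per unit volume, mean transit time
        kernel = F * np.exp(-t / mtt)               # residue kernel; mtt and volume
                                                    # fraction are deducible from it
        tissue = np.convolve(aif, kernel)[: t.size] * dt
        print(tissue.max())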

  9. Large patch convolutional neural networks for the scene classification of high spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Yanfei; Fei, Feng; Zhang, Liangpei

    2016-04-01

    The increase in the spatial resolution of remote-sensing sensors helps to capture the abundant details related to the semantics of surface objects. However, it is difficult for the popular object-oriented classification approaches to acquire higher-level semantics from high spatial resolution remote-sensing (HSR-RS) images, a difficulty often referred to as the "semantic gap." Instead of designing sophisticated operators, convolutional neural networks (CNNs), a typical deep learning method, can automatically discover intrinsic feature descriptors from a large number of input images to bridge the semantic gap. Because the available HSR-RS scene datasets are far smaller than natural scene datasets, there have been few reports of CNN approaches for HSR-RS image scene classification. We propose a practical CNN architecture for HSR-RS scene classification, named the large patch convolutional neural network (LPCNN). Large patch sampling is used to generate hundreds of possible scene patches for feature learning, and a global average pooling layer is used to replace the fully connected network as the classifier, which greatly reduces the total number of parameters. The experiments confirm that the proposed LPCNN can learn effective local features to form an effective representation for different land-use scenes, and can achieve a performance comparable to the state of the art on public HSR-RS scene datasets.

  10. Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong

    2016-02-01

    We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by the DCNN. DCNN learning optimized the critical parameters, including the size, number and convolutional stride of the local receptive fields, the dropout ratio and the final loss function. To demonstrate the practical utility of using a DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods.

  11. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    NASA Astrophysics Data System (ADS)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis of textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders, to reduce the number of features extracted from large images. First, we randomly collect image patches from an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between the weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and feed these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in cross-validation experiments over eight emotional categories and performs better than conventional methods. Feature selection reduces the computational cost of global feature extraction by about 50% while improving classification performance.
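
    A sketch of correlation-based filter pruning, with random weights standing in for trained autoencoder weights and an assumed 0.9 threshold:

        import numpy as np

        # Drop one of any pair of hidden-unit weight vectors whose pairwise
        # correlation exceeds the threshold.
        rng = np.random.default_rng(6)
        W = rng.standard_normal((100, 64))        # 100 hidden units x 64 input weights

        corr = np.corrcoef(W)                     # pairwise correlation of weight vectors
        keep = []
        for i in range(W.shape[0]):
            if all(abs(corr[i, j]) < 0.9 for j in keep):
                keep.append(i)                    # keep only weakly correlated filters
        print(len(keep), "of", W.shape[0], "filters retained")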

  12. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    PubMed Central

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612
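
    A generic convolution-plus-LSTM model in Keras in the spirit of the framework above; layer sizes and window length are illustrative:

        import tensorflow as tf
        from tensorflow.keras import layers, models

        # Convolutions extract local features per time window; LSTMs model
        # their temporal dynamics.
        def conv_lstm_har(n_timesteps=128, n_sensors=9, n_classes=6):
            return models.Sequential([
                layers.Input((n_timesteps, n_sensors)),   # raw multimodal sensor window
                layers.Conv1D(64, 5, activation="relu"),
                layers.Conv1D(64, 5, activation="relu"),
                layers.LSTM(128, return_sequences=True),  # temporal dynamics
                layers.LSTM(128),
                layers.Dense(n_classes, activation="softmax"),
            ])

        conv_lstm_har().summary()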

  13. Scatter-to-primary based scatter fractions for transmission-dependent convolution subtraction of SPECT images.

    PubMed

    Larsson, Anne; Johansson, Lennart

    2003-11-21

    In single photon emission computed tomography (SPECT), transmission-dependent convolution subtraction has been shown to be useful when correcting for scattered events. The method is based on convolution subtraction, but includes a matrix of scatter fractions instead of a global scatter fraction. The method can be extended to iteratively improve the scatter estimate, but in this note we show that this requires a modification of the theory to use scatter-to-total scatter fractions for the first iteration only and scatter-to-primary fractions thereafter. To demonstrate this, scatter correction is performed on a Monte Carlo simulated image of a point source of activity in water. The modification of the theory is compared to corrections where the scatter fractions are based on the scatter-to-total ratio, using one and ten iterations. The resulting ratios of subtracted to original counts are compared to the true scatter-to-total ratio of the simulation and the most accurate result is found for our modification of the theory.
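
    A sketch of iterative convolution subtraction using a global fraction in place of the transmission-dependent matrix; the Gaussian scatter kernel and fractions are placeholders (note 0.3/(1 + 0.3) ≈ 0.23 relates the two fraction types):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Estimate scatter by convolving the current primary estimate with a
        # scatter kernel, scaled by a fraction, and subtract from the data.
        observed = np.zeros((64, 64)); observed[32, 32] = 1.0
        observed = observed + 0.3 * gaussian_filter(observed, 6)   # toy point source + scatter

        k_st, k_sp = 0.23, 0.3          # scatter-to-total (1st pass), scatter-to-primary
        primary = observed - k_st * gaussian_filter(observed, 6)   # first iteration
        for _ in range(9):              # subsequent iterations use scatter-to-primary
            primary = observed - k_sp * gaussian_filter(primary, 6)
        print(primary.sum() / observed.sum())   # fraction of counts kept as primary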

  14. Muon Neutrino Disappearance in NOvA with a Deep Convolutional Neural Network Classifier

    NASA Astrophysics Data System (ADS)

    Rocco, Dominick Rosario

    The NuMI Off-axis Neutrino Appearance Experiment (NOvA) is designed to study neutrino oscillation in the NuMI (Neutrinos at the Main Injector) beam. NOvA observes neutrino oscillation using two detectors separated by a baseline of 810 km: a 14 kt Far Detector in Ash River, MN, and a functionally identical 0.3 kt Near Detector at Fermilab. The experiment aims to provide new measurements of [special characters omitted] and theta23 and has the potential to determine the neutrino mass hierarchy as well as observe CP violation in the neutrino sector. Essential to these analyses is the classification of neutrino interaction events in the NOvA detectors. Raw detector output from NOvA is interpretable as a pair of images which provide orthogonal views of particle interactions. A recent advance in the field of computer vision is the advent of convolutional neural networks, which have delivered top results in the latest image recognition contests. This work presents an approach novel to particle physics analysis in which a convolutional neural network is used for the classification of particle interactions. The approach has been demonstrated to improve the signal efficiency and purity of the event selection, and thus the physics sensitivity. Early NOvA data (2.74 x 10^20 POT, 14 kt equivalent) have been analyzed to provide new best-fit measurements of sin^2(theta23) = 0.43 (with a statistically degenerate complement near 0.60) and [special characters omitted].

  15. The probabilistic convolution tree: efficient exact Bayesian inference for faster LC-MS/MS protein inference.

    PubMed

    Serang, Oliver

    2014-01-01

    Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called "causal independence"). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustrative example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to O(k log^2(k)) and the space to O(k log(k)), where k is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions.
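
    A miniature probabilistic convolution tree: pairwise convolution of count distributions up a binary tree; np.convolve stands in for the FFT-based convolutions that give the quasi-linear runtime:

        import numpy as np

        # The distribution of a sum of independent count variables is built by
        # convolving pairs up a binary tree instead of one long chain.
        def convolution_tree(dists):
            layer = [np.asarray(d, float) for d in dists]
            while len(layer) > 1:
                nxt = [np.convolve(layer[i], layer[i + 1])       # pairwise convolve
                       for i in range(0, len(layer) - 1, 2)]
                if len(layer) % 2:
                    nxt.append(layer[-1])                        # odd one rides up
                layer = nxt
            return layer[0]

        # Four binary indicators, each present with probability 0.5:
        p_sum = convolution_tree([[0.5, 0.5]] * 4)
        print(p_sum)   # binomial(4, 0.5): [0.0625 0.25 0.375 0.25 0.0625]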

  16. Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification

    NASA Astrophysics Data System (ADS)

    Salamon, Justin; Bello, Juan Pablo

    2017-03-01

    The ability of deep convolutional neural networks (CNN) to learn discriminative spectro-temporal patterns makes them well suited to environmental sound classification. However, the relative scarcity of labeled data has impeded the exploitation of this family of high-capacity models. This study has two primary contributions: first, we propose a deep convolutional neural network architecture for environmental sound classification. Second, we propose the use of audio data augmentation for overcoming the problem of data scarcity and explore the influence of different augmentations on the performance of the proposed CNN architecture. Combined with data augmentation, the proposed model produces state-of-the-art results for environmental sound classification. We show that the improved performance stems from the combination of a deep, high-capacity model and an augmented training set: this combination outperforms both the proposed CNN without augmentation and a "shallow" dictionary learning model with augmentation. Finally, we examine the influence of each augmentation on the model's classification accuracy for each class, and observe that the accuracy for each class is influenced differently by each augmentation, suggesting that the performance of the model could be improved further by applying class-conditional data augmentation.
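
    Two of the augmentations in question, sketched with librosa (argument names assume librosa >= 0.10) on a synthetic tone standing in for a labeled recording:

        import numpy as np
        import librosa

        sr = 22050
        y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)

        stretched = librosa.effects.time_stretch(y, rate=1.2)       # faster, same pitch
        shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)  # +2 semitones
        print(stretched.shape, shifted.shape)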

  17. Performance of convolutional codes on fading channels typical of planetary entry missions

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.; Reale, T. J.

    1974-01-01

    The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short-constraint-length convolutional codes are considered in conjunction with binary phase-shift-keyed modulation and Viterbi maximum-likelihood decoding, and for longer constraint lengths sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to modeling the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint lengths, the bit-error-probability performance was investigated as a function of Eb/N0, parameterized by the fading channel parameters. For longer constraint lengths, the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effectiveness of simple block interleaving in combatting the channel memory is explored using both analysis and digital computer simulation.
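
    A minimal block interleaver showing how a burst of channel errors is dispersed before decoding:

        import numpy as np

        # Write coded bits into a matrix by rows, read by columns; a fade that
        # wipes out a run of transmitted bits lands on scattered positions
        # after deinterleaving.
        def interleave(bits, rows, cols):
            return np.asarray(bits).reshape(rows, cols).T.ravel()

        def deinterleave(bits, rows, cols):
            return np.asarray(bits).reshape(cols, rows).T.ravel()

        coded = np.arange(24)                       # stand-in coded bit indices
        tx = interleave(coded, 4, 6)
        tx[5:9] = -1                                # burst error of length 4 on channel
        rx = deinterleave(tx, 4, 6)
        print(np.where(rx == -1)[0])                # errors now spread apart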

  18. Drug drug interaction extraction from biomedical literature using syntax convolutional neural network

    PubMed Central

    Zhao, Zhehuan; Yang, Zhihao; Luo, Ling; Lin, Hongfei; Wang, Jian

    2016-01-01

    Motivation: Detecting drug-drug interactions (DDIs) has become a vital part of public health safety. Therefore, using text mining techniques to extract DDIs from biomedical literature has received great attention. However, this research is still at an early stage and its performance has much room to improve. Results: In this article, we present a syntax convolutional neural network (SCNN) based DDI extraction method. In this method, a novel word embedding, syntax word embedding, is proposed to employ the syntactic information of a sentence. The position and part-of-speech features are then introduced to extend the embedding of each word. Next, an autoencoder is introduced to encode the traditional bag-of-words feature (a sparse 0–1 vector) as a dense real-valued vector. Finally, a combination of embedding-based convolutional features and traditional features is fed to the softmax classifier to extract DDIs from biomedical literature. Experimental results on the DDIExtraction 2013 corpus show that SCNN obtains better performance (an F-score of 0.686) than other state-of-the-art methods. Availability and Implementation: The source code is available for academic use at http://202.118.75.18:8080/DDI/SCNN-DDI.zip. Contact: yangzh@dlut.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27466626

  19. Efficient pedestrian detection from aerial vehicles with object proposals and deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Minnehan, Breton; Savakis, Andreas

    2016-05-01

    As Unmanned Aerial Systems grow in numbers, pedestrian detection from aerial platforms is becoming a topic of increasing importance. By providing greater contextual information and a reduced potential for occlusion, the aerial vantage point provided by Unmanned Aerial Systems is highly advantageous for many surveillance applications, such as target detection, tracking, and action recognition. However, due to the greater distance between the camera and the scene, targets of interest in aerial imagery are generally smaller and have less detail. Deep convolutional neural networks (CNNs) have demonstrated excellent object classification performance, and in this paper we adapt them to the problem of pedestrian detection from aerial platforms. We train a CNN with five layers consisting of three convolution-pooling layers and two fully connected layers. We also address the computational inefficiencies of the sliding-window method for object detection. In the sliding-window configuration, a very large number of candidate patches are generated from each frame, while only a small number of them contain pedestrians. We utilize the Edge Boxes object proposal generation method to screen candidate patches based on an "objectness" criterion, so that only regions that are likely to contain objects are processed. This method significantly reduces the number of image patches processed by the neural network and makes our classification method very efficient. The resulting two-stage system is a good candidate for real-time implementation onboard modern aerial vehicles. Furthermore, testing on three datasets confirmed that our system offers high detection accuracy for terrestrial pedestrian detection in aerial imagery.

  20. Superposition-free comparison and clustering of antibody binding sites: implications for the prediction of the nature of their antigen

    PubMed Central

    Di Rienzo, Lorenzo; Milanetti, Edoardo; Lepore, Rosalba; Olimpieri, Pier Paolo; Tramontano, Anna

    2017-01-01

    We describe here a superposition-free method for comparing the surfaces of antibody binding sites based on Zernike moments and show that it can be used to quickly compare and cluster sets of antibodies. The clusters provide information about the nature of the bound antigen that, when combined with a method for predicting the number of direct antibody-antigen contacts, allows the discrimination between protein- and non-protein-binding antibodies with an accuracy of 76%. This is of relevance in several aspects of antibody science, for example to select the framework to be used for a combinatorial antibody library. PMID:28338016
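
    A 2D miniature of the superposition-free idea using mahotas Zernike moments (the paper works with 3D surface descriptions; this only illustrates the rotation invariance that removes the alignment step):

        import numpy as np
        import mahotas

        # Describe each patch by its Zernike moments, then compare the
        # rotation-invariant descriptors directly -- no alignment needed.
        rng = np.random.default_rng(7)
        patch_a = (rng.random((64, 64)) > 0.5).astype(float)
        patch_b = np.rot90(patch_a)                    # same patch, rotated

        za = mahotas.features.zernike_moments(patch_a, radius=32, degree=8)
        zb = mahotas.features.zernike_moments(patch_b, radius=32, degree=8)
        print(np.linalg.norm(za - zb))                 # ~0: invariant to rotation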