A convolution-superposition dose calculation engine for GPUs
Hissoiny, Sami; Ozell, Benoit; Despres, Philippe
2010-03-15
Purpose: Graphics processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, and kernel tilting, and relies heavily on ray tracing through patient imaging data. Implementation details are reported, as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single-threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x, or 0.58 versus 16.6 s per beam in single-threaded mode. An acceleration factor of 46x was obtained for the total energy released per mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions were also obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical to obtaining significant acceleration factors. These results can potentially have a significant impact on complex dose delivery techniques requiring intensive dose calculations, such as intensity-modulated radiation therapy (IMRT) and arc therapy. They are also relevant for adaptive radiation therapy, where dose results must be obtained rapidly.
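To make the two CS stages concrete, here is a minimal NumPy sketch, not the authors' GPU code: TERMA is obtained by attenuating the incident fluence along ray paths (reduced here to a cumulative sum along one beam axis), and dose follows from superposing an energy-deposition kernel (a Gaussian placeholder standing in for a Monte Carlo-derived kernel). All parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

density = np.ones((64, 64, 64))   # water phantom, relative electron density
mu = 0.005                        # effective attenuation per voxel (assumed)
fluence0 = 1.0                    # incident energy fluence (arbitrary units)

# Stage 1: TERMA via exponential attenuation along the beam axis (axis 0),
# using the radiological depth accumulated through the density grid.
rad_depth = np.cumsum(density, axis=0)
terma = fluence0 * mu * np.exp(-mu * rad_depth)

# Stage 2: superposition of a point energy-deposition kernel; real engines
# use tilted, tabulated Monte Carlo kernels instead of this Gaussian.
x = np.arange(-3, 4, dtype=float)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
kernel = np.exp(-(X**2 + Y**2 + Z**2) / 2.0)
kernel /= kernel.sum()

dose = convolve(terma, kernel, mode="nearest")
print(dose.shape, float(dose.max()))
```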
NASA Astrophysics Data System (ADS)
Naqvi, Shahid A.; D'Souza, Warren D.; Earl, Matthew A.; Ye, Sung-Joon; Shih, Rompin; Li, X. Allen
2005-09-01
For a given linac design, the dosimetric characteristics of a photon beam are determined uniquely by the energy and radial distributions of the electron beam striking the x-ray target. However, in the usual commissioning of a beam from measured data, a large number of variables can be independently tuned, making it difficult to derive a unique and self-consistent beam model. For example, the measured dosimetric penumbra in water may be attributed in various proportions to the lateral secondary electron range, the focal spot size and the transmission through the tips of a non-divergent collimator; the head-scatter component in the tails of the transverse profiles may not be easy to resolve from phantom scatter and head leakage; and the head-scatter tails corresponding to a certain extra-focal source model may not agree self-consistently with in-air output factors measured on the central axis. To reduce the number of adjustable variables in beam modelling, we replace the focal and extra-focal sources with a single phase-space plane scored just above the highest adjustable collimator in an EGS/BEAM simulation of the linac. The phase-space plane is then used as the photon source in a stochastic convolution/superposition dose engine. A photon sampled from the uncollimated phase-space plane is first propagated through an arbitrary collimator arrangement and then interacted in the simulation phantom. Energy deposition kernel rays are then randomly issued from the interaction points and dose is deposited along these rays. The electrons in the phase-space file are used to account for electron contamination. 6 MV and 18 MV photon beams from an Elekta SL linac are used as representative examples. Except for small corrections for monitor backscatter and collimator forward scatter at large field sizes (<0.5% for field sizes up to 20 × 20 cm²), we found that the use of a single phase-space photon source provides accurate and self-consistent results for both relative and absolute dose
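The kernel-ray step described above can be caricatured in a few lines. This toy sketch (a pencil of photons entering along one axis, isotropic ray directions, an invented exponential kernel falloff) illustrates only the random issuing of energy-deposition rays from sampled interaction points; the actual engine draws photons from the phase-space file and transports them through the collimators first.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.05                          # attenuation coefficient, 1/cm (assumed)
grid = np.zeros((40, 40, 40))      # dose grid, 1 cm voxels
n_photons, n_rays = 5000, 8

for _ in range(n_photons):
    depth = rng.exponential(1.0 / mu)        # free path to first interaction
    if depth >= grid.shape[0]:
        continue                             # photon traverses the phantom
    origin = np.array([depth, 20.0, 20.0])   # interaction point on the axis
    for _ in range(n_rays):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)               # random kernel-ray direction
        for step in range(15):               # deposit energy along the ray
            p = np.floor(origin + step * d).astype(int)
            if (p < 0).any() or (p >= 40).any():
                break
            grid[tuple(p)] += np.exp(-0.5 * step)  # toy kernel falloff

print(float(grid.sum()))
```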
NASA Astrophysics Data System (ADS)
Ulmer, W.; Pyyry, J.; Kaissl, W.
2005-04-01
Based on previous publications on a triple Gaussian analytical pencil beam model and on Monte Carlo calculations using the GEANT-Fluka (versions 95, 98, 2002) and BEAMnrc/EGSnrc codes, a three-dimensional (3D) superposition/convolution algorithm for photon beams (6 MV, 18 MV) is presented. Tissue heterogeneity is taken into account through the electron density information of CT images. A clinical beam consists of a superposition of divergent pencil beams. A slab geometry was used as a phantom model to test computed results against measurements. An essential result is the existence of further dose build-up and build-down effects in the domain of density discontinuities. These effects have increasing magnitude for field sizes ≤5.5 cm² and densities ≤0.25 g cm⁻³, in particular with regard to the field sizes considered in stereotaxy. They could be confirmed by measurements (mean standard deviation 2%). A practical impact is the dose distribution at transitions from bone to soft tissue, lung or cavities. This work has partially been presented at WC 2003, Sydney.
Sharpe, M B; Battista, J J
1993-01-01
The convolution/superposition method of dose calculation has the potential to become the preferred technique for radiotherapy treatment planning. When this approach is used for therapeutic x-ray beams, the dose spread kernels are usually aligned parallel to the central axis of the incident beam. While this reduces the computational burden, it is more rigorous to tilt the kernel axis to align it with the diverging beam rays that define the incident direction of primary photons. We have assessed the validity of the parallel kernel approximation by computing dose distributions using parallel and tilted kernels for monoenergetic photons of 2, 6, and 10 MeV; source-to-surface distances (SSDs) of 50, 80, and 100 cm; and field sizes of 5 × 5, 15 × 15, and 30 × 30 cm². Over most of the irradiated volume, the parallel kernel approximation yields results that differ from tilted kernel calculations by 3% or less for SSDs greater than 80 cm. Under extreme conditions of a short SSD, a large field size and a high incident photon energy, the parallel kernel approximation results in discrepancies that may be clinically unacceptable. For 10-MeV photons, we have observed that the parallel kernel approximation can overestimate the dose by up to 4.4% of the maximum on the central axis for a field size of 30 × 30 cm² applied with an SSD of 50 cm. Very localized dose underestimations of up to 27% of the maximum dose occurred in the penumbral region of a 30 × 30 cm² field of 10-MeV photons applied with an SSD of 50 cm. PMID:8309441
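The geometry behind these SSD and field-size trends is easy to reproduce: the angle between the parallel-kernel axis and the true diverging ray grows with off-axis distance and shrinks with increasing SSD. A small sketch (numbers illustrative, not taken from the paper):

```python
import numpy as np

ssd = 50.0                                   # short SSD (cm), the worst case
source = np.array([0.0, 0.0, -ssd])
point = np.array([15.0, 0.0, 10.0])          # near the edge of a 30 x 30 cm2 field

central_axis = np.array([0.0, 0.0, 1.0])     # parallel-kernel orientation
ray = point - source
tilted_axis = ray / np.linalg.norm(ray)      # tilted-kernel orientation

tilt = np.degrees(np.arccos(central_axis @ tilted_axis))
print(f"kernel tilt at the field edge: {tilt:.1f} degrees")
```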
Vanderstraeten, Barbara; Reynaert, Nick; Paelinck, Leen; Madani, Indira; Wagter, Carlos de; Gersem, Werner de; Neve, Wilfried de; Thierens, Hubert
2006-09-15
The accuracy of dose computation within the lungs depends strongly on the performance of the calculation algorithm in regions of electronic disequilibrium that arise near tissue inhomogeneities with large density variations. There is a lack of data evaluating the performance of highly developed analytical dose calculation algorithms compared to Monte Carlo computations in a clinical setting. We compared full Monte Carlo calculations (performed by our Monte Carlo dose engine MCDE) with two different commercial convolution/superposition (CS) implementations (Pinnacle-CS and Helax-TMS's collapsed cone model Helax-CC) and one pencil beam algorithm (Helax-TMS's pencil beam model Helax-PB) for 10 intensity modulated radiation therapy (IMRT) lung cancer patients. Treatment plans were created for two photon beam qualities (6 and 18 MV). For each dose calculation algorithm, patient, and beam quality, the following set of clinically relevant dose-volume values was reported: (i) minimal, median, and maximal dose (Dmin, D50, and Dmax) for the gross tumor and planning target volumes (GTV and PTV); (ii) the volume of the lungs (excluding the GTV) receiving at least 20 and 30 Gy (V20 and V30) and the mean lung dose; (iii) the 33rd percentile dose (D33) and Dmax delivered to the heart and the expanded esophagus; and (iv) Dmax for the expanded spinal cord. Statistical analysis was performed by means of one-way analysis of variance for repeated measurements and Tukey pairwise comparison of means. Pinnacle-CS showed an excellent agreement with MCDE within the target structures, whereas the best correspondence for the organs at risk (OARs) was found between Helax-CC and MCDE. Results from Helax-PB were unsatisfying for both targets and OARs. Additionally, individual patient results were analyzed. Within the target structures, deviations above 5% were found in one patient for the comparison of MCDE and Helax-CC, while all differences between MCDE and Pinnacle-CS were below 5%.
Vanderstraeten, Barbara; Reynaert, Nick; Paelinck, Leen; Madani, Indira; De Wagter, Carlos; De Gersem, Werner; De Neve, Wilfried; Thierens, Hubert
2006-09-01
The accuracy of dose computation within the lungs depends strongly on the performance of the calculation algorithm in regions of electronic disequilibrium that arise near tissue inhomogeneities with large density variations. There is a lack of data evaluating the performance of highly developed analytical dose calculation algorithms compared to Monte Carlo computations in a clinical setting. We compared full Monte Carlo calculations (performed by our Monte Carlo dose engine MCDE) with two different commercial convolution/superposition (CS) implementations (Pinnacle-CS and Helax-TMS's collapsed cone model Helax-CC) and one pencil beam algorithm (Helax-TMS's pencil beam model Helax-PB) for 10 intensity modulated radiation therapy (IMRT) lung cancer patients. Treatment plans were created for two photon beam qualities (6 and 18 MV). For each dose calculation algorithm, patient, and beam quality, the following set of clinically relevant dose-volume values was reported: (i) minimal, median, and maximal dose (Dmin, D50, and Dmax) for the gross tumor and planning target volumes (GTV and PTV); (ii) the volume of the lungs (excluding the GTV) receiving at least 20 and 30 Gy (V20 and V30) and the mean lung dose; (iii) the 33rd percentile dose (D33) and Dmax delivered to the heart and the expanded esophagus; and (iv) Dmax for the expanded spinal cord. Statistical analysis was performed by means of one-way analysis of variance for repeated measurements and Tukey pairwise comparison of means. Pinnacle-CS showed an excellent agreement with MCDE within the target structures, whereas the best correspondence for the organs at risk (OARs) was found between Helax-CC and MCDE. Results from Helax-PB were unsatisfying for both targets and OARs. Additionally, individual patient results were analyzed. Within the target structures, deviations above 5% were found in one patient for the comparison of MCDE and Helax-CC, while all differences between MCDE and Pinnacle-CS were below 5%. For both
NASA Astrophysics Data System (ADS)
Alaei, Parham
2000-11-01
A number of procedures in diagnostic radiology and cardiology make use of long exposures to x rays from fluoroscopy units. Adverse effects of these long exposure times on the patients' skin have been documented in recent years. These include epilation, erythema, and, in severe cases, moist desquamation and tissue necrosis. Potential biological effects from these exposures to other organs include radiation-induced cataracts and pneumonitis. Although there have been numerous studies to measure or calculate the dose to skin from these procedures, there have been only a handful of studies to determine the dose to other organs. Therefore, there is a need for accurate methods to measure the dose in tissues and organs other than the skin. This research concentrated on devising a method to accurately determine the radiation dose to these tissues and organs. The work was performed in several stages. First, a three-dimensional (3D) treatment planning system used in radiation oncology was modified and complemented to make it usable with the low x-ray energies used in diagnostic radiology. Using the system at low energies required the generation of energy deposition kernels using Monte Carlo methods. These kernels were generated with the EGS4 Monte Carlo system of codes and added to the treatment planning system. Following modification, the treatment planning system was evaluated for its calculation accuracy at low energies within homogeneous and heterogeneous media. A study of the effects of lungs and bones on the dose distribution was also performed. The next step was the calculation of dose distributions in humanoid phantoms using this modified system. The system was used to calculate organ doses in these phantoms and the results were compared to those obtained from other methods. These dose distributions can subsequently be used to create dose-volume histograms (DVHs) for internal organs irradiated by these beams. Using this data and the concept of normal tissue
Real-time dose computation: GPU-accelerated source modeling and superposition/convolution
Jacques, Robert; Wong, John; Taylor, Russell; McNutt, Todd
2011-01-15
Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by approximately 24%, to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³
Fluence-convolution broad-beam (FCBB) dose calculation.
Lu, Weiguo; Chen, Mingli
2010-12-01
IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into the central axis (CAX) component and the lateral spread function (LSF) and taking the beam's eye view (BEV), we established a non-voxel and non-beamlet-based dose calculation formula. Both LSF and CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with the LSF followed by ray tracing based on the CAX lookup table with radiological distance and divergence correction, resulting in complexity of O(N³) both spatially and temporally. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited to calculating the iteration dose during IMRT optimization. PMID:21081826
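A schematic of the FCBB decomposition, with invented stand-ins for the commissioned LSF and CAX data (the real ones come from CCCS commissioning):

```python
import numpy as np
from scipy.signal import fftconvolve

# Stand-ins for commissioned data: exponential lateral spread function (LSF)
# and a central-axis (CAX) depth-dose lookup table at 1 mm steps.
xy = np.arange(-20, 21, dtype=float)
R = np.hypot(*np.meshgrid(xy, xy, indexing="ij"))
lsf = np.exp(-0.5 * R)
lsf /= lsf.sum()
cax = np.exp(-0.005 * np.arange(400))          # toy depth-dose table

fluence = np.zeros((101, 101))
fluence[35:66, 35:66] = 1.0                    # open square field in the BEV

bev = fftconvolve(fluence, lsf, mode="same")   # the single 2D convolution

# Ray tracing reduces here to a radiological-depth lookup into the CAX table
# (water everywhere; heterogeneities would vary the density column).
density = np.ones(400)
rad_depth = np.cumsum(density).astype(int) - 1
dose = cax[rad_depth][:, None, None] * bev[None, :, :]
print(dose.shape)
```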
Huang, Jessie Y.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.; Eklund, David; Childress, Nathan L.
2013-12-15
Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found
Kalet, Alan M; Sandison, George A; Phillips, Mark H; Parvathaneni, Upendra
2013-01-01
We evaluate a photon convolution-superposition algorithm used to model a fast neutron therapy beam in a commercial treatment planning system (TPS). The neutron beam modeled was the Clinical Neutron Therapy System (CNTS) fast neutron beam produced by 50 MeV protons on a Be target at our facility, and we implemented the Pinnacle3 dose calculation model for computing neutron doses. Measured neutron data were acquired by an IC30 ion chamber flowing 5 cc/min of tissue equivalent gas. Output factors and profile scans for open and wedged fields were measured according to the Pinnacle physics reference guide recommendations for photon beams in a Wellhofer water tank scanning system. Following the construction of a neutron beam model, computed doses were then generated using 100 monitor unit (MU) beams incident on a water-equivalent phantom for open and wedged square fields, as well as multileaf collimator (MLC)-shaped irregular fields. We compared Pinnacle dose profiles, central axis doses, and off-axis doses (in irregular fields) with (1) doses computed using the Prism treatment planning system, and (2) doses measured in a water phantom with geometry matching the computation setup. We found that the Pinnacle photon model may be used to model most of the important dosimetric features of the CNTS fast neutron beam. Pinnacle-calculated dose points among open and wedged square fields exhibit dose differences within 3.9 cGy of both Prism and measured doses along the central axis, and within 5 cGy of measurement in the penumbra region. Pinnacle dose point calculations using irregular treatment-type fields showed dose differences of up to 9 cGy from measured dose points, although most points of comparison were below 5 cGy. Comparisons of dose points that were chosen from cases planned in both Pinnacle and Prism show an average dose difference less than 0.6%, except in certain fields which incorporate both wedges and heavy blocking of the central axis. All
Sterpin, E.; Salvat, F.; Olivera, G.; Vynckier, S.
2009-05-15
The reliability of the convolution/superposition (C/S) algorithm of the Hi-Art tomotherapy system is evaluated by using the Monte Carlo model TomoPen, which has already been validated for homogeneous phantoms. The study was performed in three stages. First, measurements with EBT Gafchromic film for a 1.25 × 2.5 cm² field in a heterogeneous phantom consisting of two slabs of polystyrene separated with Styrofoam were compared to simulation results from TomoPen. The excellent agreement found in this comparison justifies the use of TomoPen as the reference for the remaining parts of this work. Second, to allow analysis and interpretation of the results in clinical cases, dose distributions calculated with TomoPen and C/S were compared for a similar phantom geometry, with multiple slabs of various densities. Even in conditions of lack of lateral electronic equilibrium, overall good agreement was obtained between C/S and TomoPen results, with deviations within 3%/2 mm, showing that the C/S algorithm accounts for modifications in secondary electron transport due to the presence of a low density medium. Finally, calculations were performed with TomoPen and C/S of dose distributions in various clinical cases, from large bilateral head and neck tumors to small lung tumors with diameters of <3 cm. To ensure a "fair" comparison, identical dose calculation grids and dose-volume histogram calculators were used. Very good agreement was obtained for most of the cases, with no significant differences between the DVHs obtained from both calculations. However, deviations of up to 4% for the dose received by 95% of the target volume were found for the small lung tumors. Therefore, the approximations in the C/S algorithm slightly influence the accuracy in small lung tumors even though the C/S algorithm of the tomotherapy system shows very good overall behavior.
A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures
Neylon, J.; Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.
2014-10-15
, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. Results: The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. Conclusions: The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
Calvo, Oscar I; Gutiérrez, Alonso N; Stathakis, Sotirios; Esquivel, Carlos; Papanikolaou, Nikos
2012-01-01
Specialized techniques that make use of small field dosimetry are common practice in today's clinics. These new techniques represent a big challenge to treatment planning systems due to the lack of lateral electronic equilibrium. Because of this, the ability of planning systems to overcome such difficulties and provide an accurate representation of the true dose is of significant importance. Pinnacle3 is one such planning system. During the IMRT optimization process, the Pinnacle3 treatment planning system allows the user to specify a minimum segment size, which results in multiple beams composed of several subsets of different widths. In this study, the accuracy of the dose calculation engine used by Pinnacle3, the collapsed cone convolution superposition (CCCS) algorithm, was quantified by Monte Carlo simulations, ionization chamber, and Kodak extended dose range film (EDR2) measurements for 11 SBRT lung patients. Lesions were <3.0 cm in maximal diameter and <27.0 cm³ in volume. The Monte Carlo codes EGSnrc/BEAMnrc and EGS4/MCSIM were used in the comparison. The minimum segment size allowed during optimization had a direct impact on the number of monitor units calculated for each beam. Plans with the smallest minimum segment sizes (0.1 cm² to 2.0 cm²) had the largest number of MUs. Although PTV coverage remained unaffected, the segment size did have an effect on the dose to the organs at risk. Pinnacle3-calculated PTV mean doses were in agreement with Monte Carlo-calculated mean doses to within 5.6% for all plans. On average, the mean dose difference between Monte Carlo and Pinnacle3 for all 88 plans was 1.38%. The largest discrepancy in maximum dose was 5.8%, and was noted for one of the plans using a minimum segment size of 1.0 cm². For the minimum dose to the PTV, a maximum discrepancy of 12.5% between Monte Carlo and Pinnacle3 was noted for a plan using a 6.0 cm² minimum segment size. Agreement between point dose measurements and Pinnacle3-calculated doses were on
A 3D pencil-beam-based superposition algorithm for photon dose calculation in heterogeneous media
NASA Astrophysics Data System (ADS)
Tillikainen, L.; Helminen, H.; Torsti, T.; Siljamäki, S.; Alakuijala, J.; Pyyry, J.; Ulmer, W.
2008-07-01
In this work, a novel three-dimensional superposition algorithm for photon dose calculation is presented. The dose calculation is performed as a superposition of pencil beams, which are modified based on tissue electron densities. The pencil beams have been derived from Monte Carlo simulations, and are separated into lateral and depth-directed components. The lateral component is modeled using exponential functions, which allows accurate modeling of lateral scatter in heterogeneous tissues. The depth-directed component represents the total energy deposited on each plane, which is spread out using the lateral scatter functions. Finally, convolution in the depth direction is applied to account for tissue interface effects. The method can be used with the previously introduced multiple-source model for clinical settings. The method was compared against Monte Carlo simulations in several phantoms including lung- and bone-type heterogeneities. Comparisons were made for several field sizes for 6 and 18 MV energies. The deviations were generally within (2%, 2 mm) of the field central axis dmax. Significantly larger deviations (up to 8%) were found only for the smallest field in the lung slab phantom for 18 MV. The presented method was found to be accurate in a wide range of conditions making it suitable for clinical planning purposes.
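The exponential lateral modelling lends itself to a compact sketch; the weights and slopes below are invented, not the commissioned Monte Carlo fit, but they show why density scaling of the radial argument broadens the spread in lung-like tissue:

```python
import numpy as np

def lateral_spread(r, weights=(0.7, 0.3), slopes=(2.0, 0.4)):
    """Lateral component as a sum of exponentials: sum_i w_i * exp(-k_i * r)."""
    return sum(w * np.exp(-k * r) for w, k in zip(weights, slopes))

def density_scaled_spread(r, rel_density):
    # In low-density tissue the same energy spreads farther laterally;
    # scaling the radial argument by the relative density mimics this.
    return lateral_spread(r * rel_density)

r = np.linspace(0.0, 5.0, 6)                 # cm
print(density_scaled_spread(r, 1.0))         # water-like
print(density_scaled_spread(r, 0.25))        # lung-like: broader spread
```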
FAST-PT: Convolution integrals in cosmological perturbation theory calculator
NASA Astrophysics Data System (ADS)
McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A.
2016-03-01
FAST-PT calculates 1-loop corrections to the matter power spectrum in cosmology. The code utilizes Fourier methods combined with analytic expressions to reduce the computation time to scale as N log N, where N is the number of grid points in the input linear power spectrum. FAST-PT is extremely fast, enabling mode-coupling integral computations fast enough to embed in Markov chain Monte Carlo parameter estimation.
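The N log N scaling comes from evaluating convolution integrals as products in Fourier space. A generic 1D illustration of the idea (not FAST-PT's actual kernels) with NumPy:

```python
import numpy as np

n = 1024
x = np.linspace(0.0, 10.0, n)
dx = x[1] - x[0]
f = np.exp(-((x - 3.0) ** 2))            # stand-ins for tabulated inputs
g = np.exp(-((x - 5.0) ** 2) / 0.5)

# Zero-pad to avoid circular wrap-around, multiply spectra, transform back:
m = 2 * n
conv_fft = np.fft.irfft(np.fft.rfft(f, m) * np.fft.rfft(g, m), m)[:n] * dx

conv_direct = np.convolve(f, g)[:n] * dx  # O(N^2) reference
print(np.max(np.abs(conv_fft - conv_direct)))  # agreement to round-off
```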
NASA Astrophysics Data System (ADS)
Butson, Martin J.; Elferink, Rebecca; Cheung, Tsang; Yu, Peter K. N.; Stokes, Michael; You Quach, Kim; Metcalfe, Peter
2000-11-01
Verification of calculated lung dose in an anthropomorphic phantom is performed using two dosimetry media. Dosimetry is complicated by factors such as variations in density at slice interfaces and the appropriate position on the CT scanning slice to accommodate these factors. Dose in lung for 6 MV and 10 MV anterior-posterior fields was calculated with a collapsed cone convolution method using an ADAC Pinnacle 3D planning system. Variations of up to 5% between doses calculated at the centre and near the edge of the 2 cm phantom slice positioned at the beam central axis were seen, due to the composition of each phantom slice. Validation of dose was performed with LiF thermoluminescent dosimeters (TLDs) and X-Omat V radiographic film. Both dosimetry media produced dose results which agreed closely with calculated results nearest their physical positioning in the phantom. The collapsed cone convolution method accurately calculates dose within inhomogeneous lung regions at 6 MV and 10 MV x-ray energies.
Essentially exact ground-state calculations by superpositions of nonorthogonal Slater determinants
NASA Astrophysics Data System (ADS)
Goto, Hidekazu; Kojo, Masashi; Sasaki, Akira; Hirose, Kikuji
2013-05-01
An essentially exact ground-state calculation algorithm for few-electron systems based on superposition of nonorthogonal Slater determinants (SDs) is described, and its convergence properties to ground states are examined. A linear combination of SDs is adopted as many-electron wave functions, and all one-electron wave functions are updated by employing linearly independent multiple correction vectors on the basis of the variational principle. The improvement of the convergence performance to the ground state given by the multi-direction search is shown through comparisons with the conventional steepest descent method. The accuracy and applicability of the proposed scheme are also demonstrated by calculations of the potential energy curves of few-electron molecular systems, compared with the conventional quantum chemistry calculation techniques.
Faddegon, B.A.; Villarreal-Barajas, J.E.
2005-11-15
The Final Aperture Superposition Technique (FAST) is described and applied to accurate, near instantaneous calculation of the relative output factor (ROF) and central axis percentage depth dose curve (PDD) for clinical electron beams used in radiotherapy. FAST is based on precalculation of dose at select points for the two extreme situations of a fully open final aperture and a final aperture with no opening (fully shielded). This technique is different from conventional superposition of dose deposition kernels: the precalculated dose is differential in the position of the electron or photon at the downstream surface of the insert. The calculation for a particular aperture (x-ray jaws or MLC, insert in electron applicator) is done with superposition of the precalculated dose data, using the open field data over the open part of the aperture and the fully shielded data over the remainder. The calculation takes explicit account of all interactions in the shielded region of the aperture except the collimator effect: particles that pass from the open part into the shielded part, or vice versa. For the clinical demonstration, FAST was compared to full Monte Carlo simulation of 10 × 10, 2.5 × 2.5, and 2 × 8 cm² inserts. Dose was calculated to 0.5% precision in 0.4 × 0.4 × 0.2 cm³ voxels, spaced at 0.2 cm depth intervals along the central axis, using detailed Monte Carlo simulation of the treatment head of a commercial linear accelerator for six different electron beams with energies of 6-21 MeV. Each simulation took several hours on a personal computer with a 1.7 GHz processor. The calculation for the individual inserts, done with superposition, was completed in under a second on the same PC. Since simulations for the precalculation are only performed once, higher precision and resolution can be obtained without increasing the calculation time for individual inserts. Fully shielded contributions were largest for small fields and high beam energy, at the surface, reaching a
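The superposition step itself reduces to a masked sum over precomputed per-position contributions. A toy sketch (arrays are random placeholders, not the paper's precalculated dose data):

```python
import numpy as np

n = 50                                     # insert plane sampled at n x n
rng = np.random.default_rng(1)
open_contrib = rng.random((n, n)) * 1e-3   # dose per open-plane position
shielded_contrib = open_contrib * 0.05     # leakage-scale placeholder

def dose_for_aperture(aperture_mask):
    """Superpose open data over the opening, fully shielded data elsewhere."""
    return float(np.where(aperture_mask, open_contrib, shielded_contrib).sum())

# For instance, a rectangular insert opening on this grid:
mask = np.zeros((n, n), dtype=bool)
mask[20:30, 5:45] = True
print(dose_for_aperture(mask))
```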
GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation
Zhou, Bo; Yu, Cedric X.; Chen, Danny Z.; Hu, X. Sharon
2010-01-01
Purpose: Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans as emerging advanced radiation therapy technologies are exerting ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation so as to make the method practical for day-to-day usage. Methods: Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm is modified to match the GPU hardware architecture. The performance of the authors' GPU-based implementation on an Nvidia GTX260 card is compared to a multithreaded software implementation on a quad-core system. Results: A speedup in the range of 6.7-11.4× is observed for the clinical cases used. The less than 2% statistical fluctuation also indicates that the accuracy of the authors' GPU-based implementation is in good agreement with the results from the quad-core CPU implementation. Conclusions: This work shows that GPU is a feasible and cost-efficient solution compared to other alternatives such as using cluster machines or field-programmable gate arrays for satisfying the increasing demands on computation speed and accuracy of dose calculation. But there are also inherent limitations of using GPU for accelerating MC-type applications, which are also analyzed in detail in this article. PMID:21158271
NASA Astrophysics Data System (ADS)
Tsakstara, V.; Kosmas, T. S.
2011-12-01
Convoluted differential and total cross sections of inelastic ν scattering on the ¹²⁸,¹³⁰Te isotopes are computed from the original cross sections calculated previously using the quasiparticle random-phase approximation. We adopt various spectral distributions for the neutrino energy spectra, such as the common two-parameter Fermi-Dirac and power-law distributions, appropriate to explore nuclear detector responses to supernova neutrino spectra. We also concentrate on the use of low-energy β-beam neutrinos, originating from boosted β⁻-radioactive ⁶He ions, to decompose original supernova (anti)neutrino spectra that are subsequently employed to simulate total cross sections of the reactions ¹³⁰Te(ν̃,ν̃′)¹³⁰Te*. The concrete nuclear targets selected, ¹²⁸,¹³⁰Te, are contents of the multipurpose CUORE and COBRA rare event detectors. Our present investigation may provide useful information about the efficiency of the Te detector medium of the above experiments in their potential use in supernova neutrino searches.
Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud
2014-09-01
To speed up the absorbed dose (AD) computation while accounting for tissue heterogeneities, a Collapsed Cone (CC) superposition algorithm was developed and validated for ⁹⁰Y. The superposition was implemented with an Energy Deposition Kernel scaled with the radiological distance, along with CC acceleration. The validation relative to Monte Carlo simulations was performed on 6 phantoms involving soft tissue, lung and bone, a radioembolisation treatment and a simulated bone metastasis treatment. As a figure of merit, the relative AD difference (ΔAD) in low gradient regions (LGR), the distance to agreement (DTA) in high gradient regions and the γ(1%, 1 mm) criterion were used for the phantoms. Mean organ doses and γ(3%, 3 mm) were used for the patient data. For the semi-infinite sources, ΔAD in LGR was below 1%. DTA was below 0.6 mm. All profiles verified the γ(1%, 1 mm) criterion. For both clinical cases, mean doses differed by less than 1% for the considered organs and all profiles verified the γ(3%, 3 mm) criterion. The calculation time was below 4 min on a single processor for the CC superposition and 40 h on a 40-node cluster for MCNP (10⁸ histories). Our results show that the CC superposition is a very promising alternative to MC for ⁹⁰Y dosimetry, while significantly reducing computation time. PMID:25097006
NASA Astrophysics Data System (ADS)
Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud
2014-09-01
To speed up the absorbed dose (AD) computation while accounting for tissue heterogeneities, a Collapsed Cone (CC) superposition algorithm was developed and validated for ⁹⁰Y. The superposition was implemented with an Energy Deposition Kernel scaled with the radiological distance, along with CC acceleration. The validation relative to Monte Carlo simulations was performed on 6 phantoms involving soft tissue, lung and bone, a radioembolisation treatment and a simulated bone metastasis treatment. As a figure of merit, the relative AD difference (ΔAD) in low gradient regions (LGR), the distance to agreement (DTA) in high gradient regions and the γ(1%, 1 mm) criterion were used for the phantoms. Mean organ doses and γ(3%, 3 mm) were used for the patient data. For the semi-infinite sources, ΔAD in LGR was below 1%. DTA was below 0.6 mm. All profiles verified the γ(1%, 1 mm) criterion. For both clinical cases, mean doses differed by less than 1% for the considered organs and all profiles verified the γ(3%, 3 mm) criterion. The calculation time was below 4 min on a single processor for the CC superposition and 40 h on a 40-node cluster for MCNP (10⁸ histories). Our results show that the CC superposition is a very promising alternative to MC for ⁹⁰Y dosimetry, while significantly reducing computation time.
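The radiological-distance scaling at the heart of this method can be sketched along a single collapsed-cone ray; the kernel and densities below are toy values, not the ⁹⁰Y Energy Deposition Kernel:

```python
import numpy as np

def edk(r_water):
    # Toy kernel value at water-equivalent distance r (arbitrary units).
    return np.exp(-0.8 * r_water) / np.maximum(r_water, 0.1) ** 2

step = 0.2                                                 # cm per ray step
density = np.array([1.0] * 10 + [0.26] * 20 + [1.0] * 10)  # tissue-lung-tissue
rad_dist = np.cumsum(density) * step                       # radiological distance
geo_dist = np.arange(1, density.size + 1) * step           # geometric distance

# Deposition per step: kernel looked up at the radiological distance and
# weighted by local density, versus a naive geometric lookup.
scaled = edk(rad_dist) * density * step
naive = edk(geo_dist) * density * step
print(float(scaled.sum()), float(naive.sum()))
```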
Wu, Vincent W C; Tse, Teddy K H; Ho, Cola L M; Yeung, Eric C Y
2013-01-01
Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system and multigrid superposition (MGS) of the XiO treatment planning system are two commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) the soft tissue-bone boundary (Soft/Bone), (5) the soft tissue-air boundary (Soft/Air), and (6) the bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6% ± 2.1%) were significantly lower than those of AAA (3.7% ± 2.5%) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both the AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
Wu, Vincent W.C.; Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y.
2013-07-01
Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system and multigrid superposition (MGS) of the XiO treatment planning system are two commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) the soft tissue-bone boundary (Soft/Bone), (5) the soft tissue-air boundary (Soft/Air), and (6) the bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6% ± 2.1%) were significantly lower than those of AAA (3.7% ± 2.5%) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both the AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
NASA Astrophysics Data System (ADS)
Copeland, Kyle
2015-07-01
The superposition approximation was commonly employed in atmospheric nuclear transport modeling until recent years and is incorporated into flight dose calculation codes such as CARI-6 and EPCARD. The useful altitude range for this approximation is investigated using Monte Carlo transport techniques. CARI-7A simulates atmospheric radiation transport of elements H-Fe using a database of precalculated galactic cosmic radiation showers calculated with MCNPX 2.7.0 and is employed here to investigate the influence of the superposition approximation on effective dose rates, relative to full nuclear transport of galactic cosmic ray primary ions. Superposition is found to produce results less than 10% different from nuclear transport at current commercial and business aviation altitudes while underestimating dose rates at higher altitudes. The underestimate sometimes exceeds 20% at approximately 23 km and exceeds 40% at 50 km. Thus, programs employing this approximation should not be used to estimate doses or dose rates for high-altitude portions of the commercial space and near-space manned flights that are expected to begin soon.
Superposition model calculation of zero-field splitting of Fe³⁺ in LiTaO₃ crystal
NASA Astrophysics Data System (ADS)
Yeom, T. H.
2001-11-01
The second-order zero-field splitting (ZFS) parameter b₂⁰ of the Fe³⁺ ion centre at the Li site, the Ta site and the structural vacancy site in the LiTaO₃ crystal is calculated using the empirical superposition model. The fourth-order ZFS parameters b₄⁰, b₄³ and b₄⁻³ are also calculated at the Li and Ta sites, respectively. The calculated b₂⁰ of the Fe³⁺ ion at the Li site agrees well with the experimental value. It is concluded that Fe³⁺ replaces the Li⁺ ion rather than the Ta⁵⁺ ion in the LiTaO₃ crystal. This conclusion confirms the site assignment from electron nuclear double resonance experiments.
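For reference, the empirical superposition model evaluates each ZFS parameter as a sum of single-ligand contributions; in the standard Newman form (notation as commonly defined in the SPM literature, not quoted from this abstract):

```latex
b_n^m \;=\; \sum_i \bar{b}_n(R_0)\left(\frac{R_0}{R_i}\right)^{t_n} K_n^m(\theta_i, \varphi_i)
```

where \bar{b}_n(R_0) is the intrinsic parameter at the reference distance R_0, t_n is a power-law exponent, and K_n^m is the angular coordination factor of the i-th ligand located at (R_i, \theta_i, \varphi_i).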
NASA Astrophysics Data System (ADS)
Naik, Mehul S.
Intensity-modulated radiation therapy (IMRT) is a 3D conformal radiation therapy technique that utilizes either a multileaf intensity-modulating collimator (MIMiC, used with the NOMOS Peacock system) or a multileaf collimator (MLC) on a conventional linear accelerator for beam intensity modulation to afford increased conformity in dose distributions. Due to the high-dose-gradient regions that are effectively created, particular emphasis should be placed on the accurate determination of the pencil beam kernels utilized by the pencil beam convolution algorithms employed by a number of commercial IMRT treatment planning systems (TPS). These kernels are determined from relatively large field dose profiles that are typically collected using an ion chamber during commissioning of the TPS, while recent studies have demonstrated improvements in dose calculation accuracy when incorporating film data into the commissioning measurements. For this study, it has been proposed that the shape of high-resolution dose kernels can be extracted directly from single pencil beam (beamlet) profile measurements acquired using high-precision dosimetric film in order to accurately compute dose distributions, specifically for small fields and the penumbra regions of larger fields. The effectiveness of GafChromic EBT film as an appropriate dosimeter to acquire the necessary measurements was evaluated and compared to the conventional silver-halide Kodak EDR2 film. Using the NOMOS Peacock system, similar dose kernels were extracted through deconvolution of the elementary pencil beam profiles using the two different types of film. Independent convolution-based calculations were performed using these kernels, resulting in better agreement with the measured relative dose profiles, as compared to those determined by the CORVUS TPS's finite-size pencil beam (FSPB) algorithm. Preliminary evaluation of the proposed method in performing kernel extraction for an MLC-based IMRT system also showed
Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan
2013-09-26
We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface. PMID:23947824
NASA Astrophysics Data System (ADS)
Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.
2016-01-01
The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for computing ion irradiation outcomes, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line-specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effects of the beamlets on a representative set of water-like tissues are evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable to the evaluation of dose, RBE and LET distributions. The weight function for the beamlet superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has been successfully tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmark plans in simple geometries and clinical plans are shown to demonstrate the model's capabilities.
NASA Astrophysics Data System (ADS)
Koncek, O.; Krivonoska, J.
2014-11-01
The MCNP Monte Carlo code was used to simulate the collimating system of a ⁶⁰Co therapy unit to calculate the primary and scattered photon fluences, as well as the electron contamination, incident on the isocentric plane as functions of the irradiation field size. Furthermore, a Monte Carlo simulation for the generation of polyenergetic Pencil Beam Kernels (PBKs) was performed using the calculated photon and electron spectra. The PBK was analytically fitted to speed up the dose calculation using the convolution technique in homogeneous media. The quality of the PBK fit was verified by comparing the calculated and simulated ⁶⁰Co broad-beam profiles and depth dose curves in a homogeneous water medium. The inhomogeneity correction coefficients were derived from the PBK simulation of an inhomogeneous slab phantom consisting of various materials. The inhomogeneity calculation model is based on the changes in the PBK radial displacement and on the change of the forward and backward electron scattering. The inhomogeneity correction is derived from the electron density values gained from a complete 3D CT array and considers the different electron densities through which the pencil beam propagates, as well as the electron density values located between the interaction point and the point of dose deposition. Important aspects and details of the algorithm implementation are also described in this study.
Do a bit more with convolution.
Olsthoorn, Theo N
2008-01-01
Convolution is a form of superposition that efficiently deals with input varying arbitrarily in time or space. It works whenever superposition is applicable, that is, for linear systems. Even though convolution has been well known since the 19th century, this valuable method is still missing from most textbooks on ground water hydrology. This limits widespread application in this field. Perhaps most papers are too complex mathematically, as they tend to focus on the derivation of analytical expressions rather than on solving practical problems. However, convolution is straightforward with standard mathematical software or even a spreadsheet, as is demonstrated in the paper. The necessary system responses are not limited to analytic solutions; they may also be obtained by running an already existing ground water model for a single stress period until equilibrium is reached. With these responses, high-resolution time series of head or discharge may then be computed by convolution for arbitrary points and arbitrarily varying input, without further use of the model. There are probably thousands of applications in the field of ground water hydrology that may benefit from convolution. Therefore, its inclusion in ground water textbooks and courses is strongly needed. PMID:18181860
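The spreadsheet-simple calculation the author has in mind is just a discrete convolution of the input series with a unit response; a minimal sketch with an assumed exponentially decaying response (a stand-in for one obtained analytically or from a single model run):

```python
import numpy as np

months = 120
rng = np.random.default_rng(3)
recharge = rng.random(months)              # arbitrary monthly input
unit_response = 0.8 ** np.arange(24)       # assumed head response to a unit pulse

# Head rise at the observation point: input convolved with the response.
head_rise = np.convolve(recharge, unit_response)[:months]
print(head_rise[:6])
```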
The Convolution Method in Neutrino Physics Searches
Tsakstara, V.; Kosmas, T. S.; Chasioti, V. C.; Divari, P. C.; Sinatkas, J.
2007-12-26
We concentrate on the convolution method used in nuclear and astro-nuclear physics studies and, in particular, in the investigation of the nuclear response of various neutrino detection targets to the energy spectra of specific neutrino sources. Since the cross sections for neutrino reactions with the nuclear detectors employed in experiments are extremely small, very fine and fast convolution techniques are required. Furthermore, sophisticated deconvolution methods are also needed whenever a comparison between calculated unfolded cross sections and existing convoluted results is necessary.
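In practice the convolution (folding) amounts to weighting an energy-differential cross section with a normalized source spectrum and integrating; both functions in this sketch are illustrative stand-ins, not QRPA output:

```python
import numpy as np

E = np.linspace(0.5, 60.0, 400)            # neutrino energy grid (MeV)

def fermi_dirac(E, T=4.0, eta=0.0):
    """Two-parameter Fermi-Dirac spectrum, normalized to unit area."""
    spec = E**2 / (1.0 + np.exp(E / T - eta))
    return spec / np.trapz(spec, E)

sigma = 1e-42 * (E / 10.0) ** 2            # toy cross section (cm^2)

folded = np.trapz(sigma * fermi_dirac(E), E)   # spectrum-averaged cross section
print(f"{folded:.3e} cm^2")
```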
SU-E-T-08: A Convolution Model for Head Scatter Fluence in the Intensity Modulated Field
Chen, M; Mo, X; Chen, Y; Parnell, D; Key, S; Olivera, G; Galmarini, W; Lu, W
2014-06-01
Purpose: To efficiently calculate the head scatter fluence for an arbitrary intensity-modulated field with any source distribution using the source occlusion model. Methods: The source occlusion model with focal and extra-focal radiation (Jaffray et al., 1993) can be used to account for LINAC head scatter. In the model, the fluence map of any field shape at any point can be calculated via integration of the source distribution within the visible range, as confined by each segment, using the detector's eye view. A 2D integration would be required for each segment and each fluence plane point, which is time-consuming, as an intensity-modulated field typically contains tens to hundreds of segments. In this work, we prove that the superposition of the segmental integrations is equivalent to a simple convolution, regardless of what the source distribution is. In fact, for each point, the detector's eye view of the field shape can be represented as a function with the origin defined at the point's pinhole reflection through the center of the collimator plane. We were thus able to reduce hundreds of source plane integrations to one convolution. We calculated the fluence map for various 3D and IMRT beams and various extra-focal source distributions using both the segmental integration approach and the convolution approach, and compared the computation time and fluence map results of both approaches. Results: The fluence maps calculated using the convolution approach were the same as those calculated using the segmental approach, except for rounding errors (<0.1%). While it took considerably longer to calculate all segmental integrations, the fluence map calculation using the convolution approach took only ∼1/3 of the time for typical IMRT fields with ∼100 segments. Conclusions: The convolution approach for head scatter fluence calculation is fast and accurate and can be used to enhance the online process.
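The claimed equivalence is easy to check numerically: integrating an extra-focal source over each point's visible aperture gives the same map as convolving the source distribution with the aperture transmission. A toy 2D check (Gaussian source and a single rectangular segment, both invented):

```python
import numpy as np
from scipy.signal import fftconvolve

n = 101
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x, indexing="ij")
source = np.exp(-(X**2 + Y**2) / (2.0 * 8.0**2))  # extra-focal source (assumed)
source /= source.sum()

aperture = np.zeros((n, n))
aperture[30:70, 40:60] = 1.0                      # one rectangular segment

# Head-scatter fluence for this segment in one convolution; summing such
# maps over segments reproduces the segmental-integration result.
fluence = fftconvolve(aperture, source, mode="same")
print(float(fluence.max()))
```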
Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen
2012-01-01
In this study the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two methods available in the treatment planning system, i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed by using EDR2 radiographic films within the phantom. The dose difference (DD) between experimental results and the two calculation methods was obtained. Results indicate a maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between the two methods, and the CCC algorithm shows more accurate depth dose curves in tissue heterogeneities. Simulation results show accurate dose estimation by MCNP4C in the soft tissue region of the phantom, and also better results than the ETAR method in bone and lung tissues. PMID:22973081
Ellison, David H.
2014-01-01
The distal convoluted tubule is the nephron segment that lies immediately downstream of the macula densa. Although short in length, the distal convoluted tubule plays a critical role in sodium, potassium, and divalent cation homeostasis. Recent genetic and physiologic studies have greatly expanded our understanding of how the distal convoluted tubule regulates these processes at the molecular level. This article provides an update on the distal convoluted tubule, highlighting concepts and pathophysiology relevant to clinical practice. PMID:24855283
NASA Astrophysics Data System (ADS)
Emül, Y.; Erbahar, D.; Açıkgöz, M.
2015-08-01
Analyses of the local crystal and electronic structure in the vicinity of Fe3+ centers in perovskite KMgF3 crystal have been carried out in a comprehensive manner. A combination of density functional theory (DFT) and a semi-empirical superposition model (SPM) is used for a complete analysis of all Fe3+ centers in this study for the first time. Some quantitative information has been derived from the DFT calculations on both the electronic structure and the local geometry around Fe3+ centers. All of the trigonal (K-vacancy case, K-Li substitution case, and normal trigonal Fe3+ center case), FeF5O cluster, and tetragonal (Mg-vacancy and Mg-Li substitution cases) centers have been taken into account based on previously suggested experimental and theoretical inferences. The combination of the experimental data with the results of both DFT and SPM calculations enables us to identify the most probable structural model for Fe3+ centers in KMgF3.
Some easily analyzable convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R.; Dolinar, S.; Pollara, F.; Vantilborg, H.
1989-01-01
Convolutional codes have played and will play a key role in the downlink telemetry systems on many NASA deep-space probes, including Voyager, Magellan, and Galileo. One of the chief difficulties associated with the use of convolutional codes, however, is the notorious difficulty of analyzing them. Given a convolutional code as specified, say, by its generator polynomials, it is no easy matter to say how well that code will perform on a given noisy channel. The usual first step in such an analysis is to compute the code's free distance; this can be done with an algorithm whose complexity is exponential in the code's constraint length. The second step is often to calculate the transfer function in one, two, or three variables, or at least a few terms in its power series expansion. This step is quite hard, and for many codes of relatively short constraint lengths, it can be intractable. However, a large class of convolutional codes was discovered for which the free distance can be computed by inspection, and for which there is a closed-form expression for the three-variable transfer function. Although for large constraint lengths these codes have relatively low rates, they are nevertheless interesting and potentially useful. Furthermore, the ideas developed here to analyze these specialized codes may well extend to a much larger class.
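The exponential-complexity first step mentioned above can be made concrete for the standard rate-1/2, constraint-length-3 code with generators (7,5) octal (my choice of example code, not one from the paper): the free distance is the minimum Hamming weight over all nonzero codewords, found here by brute force.

    from itertools import product

    G = [(1, 1, 1), (1, 0, 1)]    # generators 7 and 5 (octal)

    def encode(bits):
        state = [0, 0]
        out = []
        for b in bits:
            window = [b] + state
            for g in G:
                out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)
            state = [b] + state[:-1]
        return out

    # Brute-force free distance: exponential in the search length,
    # matching the complexity the abstract describes.
    dfree = min(
        sum(encode(list(bits) + [0, 0]))     # two zeros flush the encoder
        for L in range(1, 8)
        for bits in product([0, 1], repeat=L)
        if any(bits)
    )
    print(dfree)    # 5 for this code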
Multipartite entanglement of superpositions
NASA Astrophysics Data System (ADS)
Cavalcanti, D.; Terra Cunha, M. O.; Acín, A.
2007-10-01
The entanglement of superpositions [Linden et al., Phys. Rev. Lett. 97, 100502 (2006)] is generalized to the multipartite scenario: an upper bound to the multipartite entanglement of a superposition is given in terms of the entanglement of the superposed states and the superposition coefficients. This bound is proven to be tight for a class of states composed of an arbitrary number of qubits. We also extend the result to a large family of quantifiers, which includes the negativity, the robustness of entanglement, and the best separable approximation measure.
Kruse, Holger; Grimme, Stefan
2012-04-21
A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This scheme, denoted geometrical counterpoise (gCP), depends only on the molecular geometry, i.e., no input from the electronic wave function is required, and it is hence applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method targets small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30%, which proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability for biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme, estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which a relaxed rotational energy profile is also presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. The gCP-corrected B3LYP-D3/6-31G* model
Network Class Superposition Analyses
Pearson, Carl A. B.; Zeng, Chen; Simha, Rahul
2013-01-01
Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., for the yeast cell cycle process [1]), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for this matrix derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying it to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with the matrix. We show how to generate Derrida plots based on it. We show that Shannon entropy based on the matrix outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on the matrix. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141
Asymmetric quantum convolutional codes
NASA Astrophysics Data System (ADS)
La Guardia, Giuliano G.
2016-01-01
In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.
Engineering mesoscopic superpositions of superfluid flow
Hallwood, D. W.; Brand, J.
2011-10-15
Modeling strongly correlated atoms demonstrates the possibility of preparing quantum superpositions that are robust against experimental imperfections and temperature. Such superpositions of vortex states are formed by adiabatic manipulation of interacting ultracold atoms confined to a one-dimensional ring trapping potential when stirred by a barrier. Here, we discuss the influence of nonideal experimental procedures and finite temperature. Adiabaticity conditions for changing the stirring rate reveal that superpositions of many atoms are most easily accessed in the strongly interacting, Tonks-Girardeau, regime, which is also the most robust at finite temperature. NOON-type superpositions of weakly interacting atoms are most easily created by adiabatically decreasing the interaction strength by means of a Feshbach resonance. The quantum dynamics of small numbers of particles is simulated and the size of the superpositions is calculated based on their ability to make precision measurements. The experimental creation of strongly correlated and NOON-type superpositions with about 100 atoms seems feasible in the near future.
Superposition State Molecular Dynamics.
Venkatnathan, Arun; Voth, Gregory A
2005-01-01
The ergodic sampling of rough energy landscapes is crucial for understanding phenomena like protein folding, peptide aggregation, polymer dynamics, and the glass transition. These rough energy landscapes are characterized by the presence of many local minima separated by high energy barriers, where Molecular Dynamics (MD) fails to satisfy ergodicity. To enhance ergodic behavior, we have developed the Superposition State Molecular Dynamics (SSMD) method, which uses a superposition of energy states to obtain an effective potential for the MD simulation. In turn, the dynamics on this effective potential can be used to sample the configurational free energy of the real potential. The effectiveness of the SSMD method for a one-dimensional rough potential energy landscape is presented as a test case. PMID:26641113
Artificial neural superposition eye.
Brückner, Andreas; Duparré, Jacques; Dannberg, Peter; Bräuer, Andreas; Tünnermann, Andreas
2007-09-17
We propose an ultra-thin imaging system which is based on the neural superposition compound eye of insects. Multiple light sensitive pixels in the footprint of each lenslet of this multi-channel configuration enable the parallel imaging of the individual object points. Together with the digital superposition of related signals this multiple sampling enables advanced functionalities for artificial compound eyes. Using this technique, color imaging and a circumvention for the trade-off between resolution and sensitivity of ultra-compact camera devices have been demonstrated in this article. The optical design and layout of such a system is discussed in detail. Experimental results are shown which indicate the attractiveness of microoptical artificial compound eyes for applications in the field of machine vision, surveillance or automotive imaging. PMID:19547555
Yu, Chang-shui; Yi, X. X.; Song, He-shan
2007-02-15
Bounds on the concurrence of the superposition state in terms of the concurrences of the states being superposed are found in this paper. The bounds on concurrence are quite different from those on the entanglement measured by von Neumann entropy [Linden et al., Phys. Rev. Lett. 97, 100502 (2006)]. In particular, a nonzero lower bound can be provided if the states being superposed are properly constrained.
Understanding deep convolutional networks.
Mallat, Stéphane
2016-04-13
Deep convolutional networks provide state-of-the-art classification and regression results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and nonlinearities. A mathematical framework is introduced to analyse their properties. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries and sparse separations. Applications are discussed. PMID:26953183
Takeda, Atsuya; Sanuki, Naoko; Kunieda, Etsuo Ohashi, Toshio; Oku, Yohei; Takeda, Toshiaki; Shigematsu, Naoyuki; Kubo, Atsushi
2009-02-01
Purpose: To retrospectively analyze the clinical outcomes of stereotactic body radiotherapy (SBRT) for patients with Stages 1A and 1B non-small-cell lung cancer. Methods and Materials: We reviewed the records of patients with non-small-cell lung cancer treated with curative intent between Dec 2001 and May 2007. All patients had histopathologically or cytologically confirmed disease, increased levels of tumor markers, and/or positive findings on fluorodeoxyglucose positron emission tomography. Staging studies identified their disease as Stage 1A or 1B. Performance status was 2 or less according to World Health Organization guidelines in all cases. The prescribed dose of 50 Gy total in five fractions, calculated by using a superposition algorithm, was defined for the periphery of the planning target volume. Results: One hundred twenty-one patients underwent SBRT during the study period, and 63 were eligible for this analysis. Thirty-eight patients had Stage 1A (T1N0M0) and 25 had Stage 1B (T2N0M0). Forty-nine patients were not appropriate candidates for surgery because of chronic pulmonary disease. Median follow-up of these 49 patients was 31 months (range, 10-72 months). The 3-year local control, disease-free, and overall survival rates in patients with Stages 1A and 1B were 93% and 96% (p = 0.86), 76% and 77% (p = 0.83), and 90% and 63% (p = 0.09), respectively. No acute toxicity was observed. Grade 2 or higher radiation pneumonitis was experienced by 3 patients, and 1 of them had fatal bacterial pneumonia. Conclusions: The SBRT at 50 Gy total in five fractions to the periphery of the planning target volume calculated by using a superposition algorithm is feasible. High local control rates were achieved for both T2 and T1 tumors.
Butts, J R; Foster, A E
2001-01-01
This study uses an anthropomorphic phantom and its computed tomography (CT) data set to evaluate monitor unit (MU) calculations using the CMS Focus Clarkson, the CMS Focus Multigrid Superposition Model, the CMS Focus FFT Convolution Model, and the ADAC Pinnacle Collapsed Cone Convolution Superposition Algorithms. Using heterogeneity corrections, a treatment plan and corresponding MU calculations were generated for several typical clinical situations. A diode detector, placed in an anthropomorphic phantom, was used to compare the treatment planning algorithms' predicted doses with measured data. Differences between diode measurements and the algorithms' calculations were within reasonable levels of acceptability as recommended by Van Dyk et al. [Int. J. Rad. Onc. Biol. Phys. 26, 261-273 (1993)], except for the CMS Clarkson algorithm, which predicted too few MU for delivery of the intended dose to chest wall fields. PMID:11674836
Tupitsyn, I.I.
1988-03-01
The ionization potentials of the halogen group have been calculated. The calculations were carried out using the relativistic Hartree-Fock method taking into account correlation effects. Comparison of theoretical results with experimental data for the elements F, Cl, Br, and I allows an estimation of the accuracy and reliability of the method. The theoretical values of the ionization potential of astatine obtained here may be of definite interest for the chemistry of astatine.
Reexamination of entanglement of superpositions
NASA Astrophysics Data System (ADS)
Gour, Gilad
2007-11-01
We find tight lower and upper bounds on the entanglement of a superposition of two bipartite states in terms of the entanglement of the two states constituting the superposition. Our upper bound is dramatically tighter than the one presented by Linden et al. [Phys. Rev. Lett. 97, 100502 (2006)], and our lower bound can be used to provide lower bounds on different measures of entanglement such as the entanglement of formation and the entanglement of subspaces. We also find that in the case in which the two states are one-sided orthogonal, the entanglement of the superposition state can be expressed explicitly in terms of the entanglement of the two states in the superposition.
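A quick numerical illustration of why such bounds are needed (an illustration only, not the bound derived in the paper): two product states each carry zero entanglement, yet their equal superposition is maximally entangled.

    import numpy as np

    def ent_entropy(psi):
        # Entanglement entropy (base 2) of a two-qubit pure state.
        m = psi.reshape(2, 2)
        rho_a = np.einsum("ij,kj->ik", m, m.conj())
        w = np.linalg.eigvalsh(rho_a)
        w = w[w > 1e-12]
        return float(-(w * np.log2(w)).sum())

    psi1 = np.kron([1.0, 0.0], [1.0, 0.0])        # |00>, product state
    psi2 = np.kron([0.0, 1.0], [0.0, 1.0])        # |11>, product state
    print(ent_entropy(psi1), ent_entropy(psi2))   # 0.0 0.0

    sup = (psi1 + psi2) / np.sqrt(2)              # their superposition
    print(ent_entropy(sup))                       # 1.0: maximally entangled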
Superposition Enhanced Nested Sampling
NASA Astrophysics Data System (ADS)
Martiniani, Stefano; Stevenson, Jacob D.; Wales, David J.; Frenkel, Daan
2014-07-01
The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
Determinate-state convolutional codes
NASA Technical Reports Server (NTRS)
Collins, O.; Hizlan, M.
1991-01-01
A determinate state convolutional code is formed from a conventional convolutional code by pruning away some of the possible state transitions in the decoding trellis. The type of staged power transfer used in determinate state convolutional codes proves to be an extremely efficient way of enhancing the performance of a concatenated coding system. The decoder complexity and the free distances of these new codes are analyzed, and extensive simulation results are provided on their performance at the low signal-to-noise ratios where a real communication system would operate. Concise, practical examples are provided.
Quantum superpositions of crystalline structures
Baltrusch, Jens D.; Morigi, Giovanna; Cormick, Cecilia; De Chiara, Gabriele; Calarco, Tommaso
2011-12-15
A procedure is discussed for creating coherent superpositions of motional states of ion strings. The motional states lie across the linear-to-zigzag structural transition, and their coherent superposition is achieved by means of spin-dependent forces, such that a coherent superposition of the electronic states of one ion evolves into an entangled state between the chain's internal and external degrees of freedom. It is shown that the creation of such an entangled state can be revealed by performing Ramsey interferometry with one ion of the chain.
Martens, C; Reynaert, N; De Wagter, C; Nilsson, P; Coghe, M; Palmans, H; Thierens, H; De Neve, W
2002-07-01
Head-and-neck tumors are often situated at an air-tissue interface, which may result in an underdosage of part of the tumor in radiotherapy treatments using megavoltage photons, especially for small fields. In addition to effects of transient electronic disequilibrium, for these small fields, an increased lateral electron range in air will result in an important extra reduction of the central axis dose beyond the cavity. Therefore dose calculation algorithms need to model electron transport accurately. We simulated the trachea by a 2 cm diameter cylindrical air cavity with the rim situated 2 cm beneath the phantom surface. A 6 MV photon beam from an Elekta SLiplus linear accelerator, equipped with the standard multileaf collimator (MLC), was assessed. A 10 x 2 cm2 and a 10 x 1 cm2 field, both widthwise collimated by the MLC, were applied with their long side parallel to the cylinder axis. Central axis dose rebuild-up was studied. Radiochromic film measurements were performed in an in-house manufactured polystyrene phantom with the films oriented either along or perpendicular to the beam axis. Monte Carlo simulations were performed with BEAM and EGSnrc. Calculations were also performed using the pencil beam (PB) algorithm and the collapsed cone convolution (CCC) algorithm of Helax-TMS (MDS Nordion, Kanata, Canada) version 6.0.2 and using the CCC algorithm of Pinnacle (ADAC Laboratories, Milpitas, CA, USA) version 4.2. A very good agreement between the film measurements and the Monte Carlo simulations was found. The CCC algorithms were not able to predict the interface dose accurately when lateral electronic disequilibrium occurs, but were shown to be a considerable improvement over the PB algorithm. The CCC algorithms overestimate the dose in the rebuild-up region. The interface dose was overestimated by a maximum of 31% or 54%, depending on the implementation of the CCC algorithm. At a depth of 1 mm, the maximum dose overestimation was 14% or 24%. PMID
Entanglement-assisted quantum convolutional coding
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
NASA Technical Reports Server (NTRS)
Truong, T. K.; Reed, I. S.
1985-01-01
Simple recursive algorithm efficiently calculates minimum-weight error vectors using Diophantine equations. Recursive algorithm uses general solution of polynomial linear Diophantine equation to determine minimum-weight error polynomial vector in equation in polynomial space.
A quantum algorithm for Viterbi decoding of classical convolutional codes
NASA Astrophysics Data System (ADS)
Grice, Jon R.; Meyer, David A.
2015-07-01
We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance, codes with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm where the state space is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (the number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.
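For contrast with the quantum algorithm, a minimal hard-decision classical Viterbi decoder for the rate-1/2, constraint-length-3 code with generators (7,5) octal (an illustrative code choice, not taken from the paper):

    def viterbi_decode(received, G=((1, 1, 1), (1, 0, 1))):
        # Hard-decision Viterbi decoding; `received` holds two code bits
        # per information bit. State = last two input bits.
        INF = float("inf")
        metric = [0, INF, INF, INF]            # start in the all-zero state
        paths = [[] for _ in range(4)]
        for i in range(0, len(received), 2):
            r = received[i:i + 2]
            new_metric = [INF] * 4
            new_paths = [None] * 4
            for s in range(4):
                if metric[s] == INF:
                    continue
                for b in (0, 1):               # hypothesized input bit
                    window = (b, s & 1, (s >> 1) & 1)
                    out = [sum(g * w for g, w in zip(gen, window)) % 2
                           for gen in G]
                    ns = ((s << 1) | b) & 3    # next state
                    m = metric[s] + sum(o != x for o, x in zip(out, r))
                    if m < new_metric[ns]:
                        new_metric[ns] = m
                        new_paths[ns] = paths[s] + [b]
            metric, paths = new_metric, new_paths
        return paths[metric.index(min(metric))]

    # Noiseless encoding of 1,0,1,1 plus two flush zeros decodes exactly:
    print(viterbi_decode([1,1, 1,0, 0,0, 0,1, 0,1, 1,1]))   # [1, 0, 1, 1, 0, 0]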
QCDNUM: Fast QCD evolution and convolution
NASA Astrophysics Data System (ADS)
Botje, M.
2011-02-01
The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme.
Program summary:
Program title: QCDNUM, version 17.00
Catalogue identifier: AEHV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public Licence
No. of lines in distributed program, including test data, etc.: 45 736
No. of bytes in distributed program, including test data, etc.: 911 569
Distribution format: tar.gz
Programming language: Fortran-77
Computer: All
Operating system: All
RAM: Typically 3 Mbytes
Classification: 11.5
Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
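The Mellin convolutions computed by the engine have the generic form (f ⊗ g)(x) = ∫_x^1 (dz/z) f(z) g(x/z); a toy midpoint-rule version with placeholder density and coefficient shapes (assumptions for illustration, not QCDNUM code or data):

    import numpy as np

    def mellin_conv(f, g, x, n=4000):
        # (f ⊗ g)(x) = integral over z in [x, 1] of f(z) g(x/z) dz/z
        edges = np.linspace(x, 1.0, n + 1)
        z = 0.5 * (edges[:-1] + edges[1:])    # midpoint rule
        dz = edges[1] - edges[0]
        return float(np.sum(f(z) * g(x / z) / z) * dz)

    q = lambda z: z**0.5 * (1 - z)**3         # placeholder parton density
    C = lambda z: 1.0 + (1 - z)               # placeholder coefficient function
    print(mellin_conv(q, C, 0.1))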
Two dimensional convolute integers for machine vision and image recognition
NASA Technical Reports Server (NTRS)
Edwards, Thomas R.
1988-01-01
Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two dimensional data sets. The operators are regression-generated, integer-valued, zero-phase-shifting, convoluting, frequency-sensitive, two dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). These operators exhibit frequency-sensitive feature selection and scale-invariant properties. Such tasks as boundary/edge enhancement and noise or small size pixel disturbance removal can readily be accomplished. For feature selection, tight band pass operators are essential. Results from test cases are given.
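Integer-valued, zero-phase convolution operators of this kind can be applied with an ordinary 2D convolution; the 3x3 integer low-pass kernel below is an assumed example in the same spirit (divide by the weight sum to keep unit gain), not one of the operators from the paper:

    import numpy as np
    from scipy.signal import convolve2d

    K = np.array([[1, 2, 1],
                  [2, 4, 2],
                  [1, 2, 1]])                 # assumed integer low-pass operator

    rng = np.random.default_rng(1)
    image = rng.random((64, 64))

    # Non-recursive classical convolution (replacement point values).
    smoothed = convolve2d(image, K, mode="same", boundary="symm") / K.sum()
    print(smoothed.shape)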
Linear superposition in nonlinear equations.
Khare, Avinash; Sukhatme, Uday
2002-06-17
Several nonlinear systems such as the Korteweg-de Vries (KdV) and modified KdV equations and λφ⁴ theory possess periodic traveling wave solutions involving Jacobi elliptic functions. We show that suitable linear combinations of these known periodic solutions yield many additional solutions with different periods and velocities. This linear superposition procedure works by virtue of some remarkable new identities involving elliptic functions. PMID:12059300
Student ability to distinguish between superposition states and mixed states in quantum mechanics
NASA Astrophysics Data System (ADS)
Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.
2015-12-01
Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the experimental implications of a superposition state. In particular, they fail to recognize how a superposition state and a mixed state (sometimes called a "lack of knowledge" state) can produce different experimental results. We present data that suggest that superposition in quantum mechanics is a difficult concept for students enrolled in sophomore-, junior-, and graduate-level quantum mechanics courses. We illustrate how an interactive lecture tutorial can improve student understanding of quantum mechanical superposition. A longitudinal study suggests that the impact persists after an additional quarter of quantum mechanics instruction that does not specifically address these ideas.
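The distinction studied above can be made concrete with density matrices (a small illustration, not the authors' tutorial): a superposition and a mixture give identical statistics in the computational basis but differ in a rotated basis.

    import numpy as np

    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])
    plus = (ket0 + ket1) / np.sqrt(2)                 # superposition state

    rho_sup = np.outer(plus, plus)                    # |+><+|, has coherences
    rho_mix = 0.5 * (np.outer(ket0, ket0) + np.outer(ket1, ket1))

    # Same z-basis outcome probabilities for both states...
    print(np.diag(rho_sup), np.diag(rho_mix))         # [0.5 0.5] for each

    # ...but different probabilities for an x-basis measurement:
    proj_plus = np.outer(plus, plus)
    print(np.trace(rho_sup @ proj_plus),              # 1.0
          np.trace(rho_mix @ proj_plus))              # 0.5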
The Paraconsistent Logic of Quantum Superpositions
NASA Astrophysics Data System (ADS)
da Costa, N.; de Ronde, C.
2013-07-01
Physical superpositions exist both in classical and in quantum physics. However, what is exactly meant by `superposition' in each case is extremely different. In this paper we discuss some of the multiple interpretations which exist in the literature regarding superpositions in quantum mechanics. We argue that all these interpretations have something in common: they all attempt to avoid `contradiction'. We argue in this paper, in favor of the importance of developing a new interpretation of superpositions which takes into account contradiction, as a key element of the formal structure of the theory, "right from the start". In order to show the feasibility of our interpretational project we present an outline of a paraconsistent approach to quantum superpositions which attempts to account for the contradictory properties present in general within quantum superpositions. This approach must not be understood as a closed formal and conceptual scheme but rather as a first step towards a different type of understanding regarding quantum superpositions.
Approximating large convolutions in digital images.
Mount, D M; Kanungo, T; Netanyahu, N S; Piatko, C; Silverman, R; Wu, A Y
2001-01-01
Computing discrete two-dimensional (2-D) convolutions is an important problem in image processing. In mathematical morphology, an important variant is that of computing binary convolutions, where the kernel of the convolution is a 0-1 valued function. This operation can be quite costly, especially when large kernels are involved. We present an algorithm for computing convolutions of this form, where the kernel of the binary convolution is derived from a convex polygon. Because the kernel is a geometric object, we allow the algorithm some flexibility in how it elects to digitize the convex kernel at each placement, as long as the digitization satisfies certain reasonable requirements. We say that such a convolution is valid. Given this flexibility we show that it is possible to compute binary convolutions more efficiently than would normally be possible for large kernels. Our main result is an algorithm which, given an m x n image and a k-sided convex polygonal kernel K, computes a valid convolution in O(kmn) time. Unlike standard algorithms for computing correlations and convolutions, the running time is independent of the area or perimeter of K, and our techniques do not rely on computing fast Fourier transforms. Our algorithm is based on a novel use of Bresenham's (1965) line-drawing algorithm and prefix-sums to update the convolution incrementally as the kernel is moved from one position to another across the image. PMID:18255522
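The prefix-sum idea behind the algorithm is easiest to see in the special case of a rectangular kernel, where a summed-area table gives every placement in constant time; this sketch covers only that simplified case, not the paper's Bresenham-based polygonal kernels:

    import numpy as np

    def binary_convolution_rect(image, kh, kw):
        # Sum of pixels under a kh x kw all-ones kernel at every valid
        # placement, via 2D prefix sums; runtime is independent of the
        # kernel area.
        S = np.zeros((image.shape[0] + 1, image.shape[1] + 1), dtype=np.int64)
        S[1:, 1:] = np.cumsum(np.cumsum(image, axis=0), axis=1)
        return S[kh:, kw:] - S[:-kh, kw:] - S[kh:, :-kw] + S[:-kh, :-kw]

    img = (np.random.default_rng(2).random((8, 8)) > 0.5).astype(int)
    print(binary_convolution_rect(img, 3, 3))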
Ultrasonic field modeling for immersed components using Gaussian beam superposition.
Spies, Martin
2007-05-01
The Gaussian beam (GB) superposition approach can be applied to model ultrasound propagation in complex-structured materials and components. In this article, progress made in extending and applying the Gaussian beam superposition technique to model the beam fields generated by transducers with flat and focused rectangular apertures as well as with circular focused apertures is addressed. The refraction of transducer beam fields through curved surfaces is illustrated by calculation results for beam fields generated in curved components during immersion testing. In particular, the following developments are put forward: (i) the use of individually determined sets of GBs to model transducer beam fields with a number of less than ten beams; (ii) the application of the GB representation of rectangular transducers to focusing probes, as well as to the problem of transmission through interfaces; and (iii) computationally efficient transient modeling by superposition of 'temporally limited' GBs. PMID:17335863
Creating a Superposition of Unknown Quantum States.
Oszmaniec, Michał; Grudka, Andrzej; Horodecki, Michał; Wójcik, Antoni
2016-03-18
The superposition principle is one of the landmarks of quantum mechanics. The importance of quantum superpositions provokes questions about the limitations that quantum mechanics itself imposes on the possibility of their generation. In this work, we systematically study the problem of the creation of superpositions of unknown quantum states. First, we prove a no-go theorem that forbids the existence of a universal probabilistic quantum protocol producing a superposition of two unknown quantum states. Second, we provide an explicit probabilistic protocol generating a superposition of two unknown states, each having a fixed overlap with the known referential pure state. The protocol can be applied to generate coherent superposition of results of independent runs of subroutines in a quantum computer. Moreover, in the context of quantum optics it can be used to efficiently generate highly nonclassical states or non-Gaussian states. PMID:27035290
Mesoscopic Superposition States in Relativistic Landau Levels
Bermudez, A.; Martin-Delgado, M. A.; Solano, E.
2007-09-21
We show that a linear superposition of mesoscopic states in relativistic Landau levels can be built when an external magnetic field couples to a relativistic spin 1/2 charged particle. Under suitable initial conditions, the associated Dirac equation unitarily produces superpositions of coherent states involving the particle orbital quanta in a well-defined mesoscopic regime. We demonstrate that these mesoscopic superpositions have a purely relativistic origin and disappear in the nonrelativistic limit.
Rotational superposition: a review of methods.
Flower, D R
1999-01-01
Rotational superposition is one of the most commonly used algorithms in molecular modelling. Many different methods of solving superposition have been suggested. Of these, methods based on the quaternion parameterization of rotation are fast, accurate, and robust. Quaternion parameterization-based methods cannot result in rotation inversion and do not have special cases such as co-linearity or co-planarity of points. Thus, quaternion parameterization-based methods are the best choice for rotational superposition applications. PMID:10736782
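A sketch of the quaternion eigenvector approach the review recommends (following the well-known Horn-style construction; the synthetic data and tolerances are assumptions): build the 4x4 key matrix from the correlation of centered coordinates, then take the eigenvector of its largest eigenvalue as the rotation quaternion.

    import numpy as np

    def quaternion_superpose(P, Q):
        # Optimal rotation R with Q ~ P @ R.T for N x 3 point sets.
        P = P - P.mean(axis=0)
        Q = Q - Q.mean(axis=0)
        S = P.T @ Q
        N = np.array([
            [S[0,0]+S[1,1]+S[2,2], S[1,2]-S[2,1], S[2,0]-S[0,2], S[0,1]-S[1,0]],
            [S[1,2]-S[2,1], S[0,0]-S[1,1]-S[2,2], S[0,1]+S[1,0], S[2,0]+S[0,2]],
            [S[2,0]-S[0,2], S[0,1]+S[1,0], -S[0,0]+S[1,1]-S[2,2], S[1,2]+S[2,1]],
            [S[0,1]-S[1,0], S[2,0]+S[0,2], S[1,2]+S[2,1], -S[0,0]-S[1,1]+S[2,2]]])
        w, v = np.linalg.eigh(N)
        qw, qx, qy, qz = v[:, -1]          # eigenvector of largest eigenvalue
        return np.array([
            [1-2*(qy*qy+qz*qz), 2*(qx*qy-qw*qz), 2*(qx*qz+qw*qy)],
            [2*(qx*qy+qw*qz), 1-2*(qx*qx+qz*qz), 2*(qy*qz-qw*qx)],
            [2*(qx*qz-qw*qy), 2*(qy*qz+qw*qx), 1-2*(qx*qx+qy*qy)]])

    # Self-check: recover a known rotation of random points.
    rng = np.random.default_rng(3)
    P = rng.random((10, 3))
    a = np.deg2rad(40.0)
    R0 = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
    Q = (P - P.mean(axis=0)) @ R0.T
    print(np.abs(quaternion_superpose(P, Q) - R0).max())   # ~1e-15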
Superposition flows of entangled polymeric solutions
NASA Astrophysics Data System (ADS)
Ianniruberto, Giovanni; Unidad, Herwin Jerome
2015-12-01
Parallel and orthogonal superposition experiments by Vermant et al. (1998) on a polydisperse, entangled polymeric solution are here analyzed by using a simple, multi-mode differential constitutive equation based on the tube model, and also accounting for convective constraint release effects. Model predictions are in very good qualitative and quantitative agreement with parallel superposition data, while some discrepancies are found with orthogonal data, thus suggesting that orthogonal superposition experiments represent a more severe test for molecularly-based constitutive equations.
Convolution-deconvolution in DIGES
Philippacopoulos, A.J.; Simos, N.
1995-05-01
Convolution and deconvolution operations are an important aspect of SSI (soil-structure interaction) analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention, since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities.
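The two operations can be demonstrated with a toy frequency-domain pair: convolve a motion with an assumed impulse response, then recover it by regularized spectral division (a water-level stabilizer). All signals below are assumptions for illustration, not DIGES's layered-halfspace solutions:

    import numpy as np

    fs, n = 100.0, 1024                        # Hz, samples
    t = np.arange(n) / fs
    motion = np.exp(-t) * np.sin(2 * np.pi * 2.0 * t)      # toy input motion
    h = np.exp(-5 * t) * np.sin(2 * np.pi * 8.0 * t)       # assumed response

    # Convolution in the frequency domain:
    M, H = np.fft.rfft(motion), np.fft.rfft(h)
    surface = np.fft.irfft(M * H, n)

    # Deconvolution by regularized spectral division (water level eps):
    eps = 1e-3 * np.abs(H).max()
    M_rec = np.fft.rfft(surface) * H.conj() / (np.abs(H)**2 + eps**2)
    motion_rec = np.fft.irfft(M_rec, n)
    print(np.abs(motion_rec - motion).max())   # small reconstruction error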
Macroscopic optomechanical superposition via periodic qubit flipping
NASA Astrophysics Data System (ADS)
Ge, Wenchao; Zubairy, M. Suhail
2015-01-01
We propose a scheme to generate macroscopic superpositions of well-distinguishable coherent states in an optomechanical system via periodic qubit flipping. Our scheme does not require the single-photon strong-coupling rate of an optomechanical system. The generated mechanical superposition state can be reconstructed using mechanical quantum-state reconstruction. The proposed scheme relies on recycling of an atom, fast atomic qubit flipping, and coherent state mapping between a single-photon superposition state and an atomic superposition state. We discuss the experimental feasibility of our proposal under current technology.
The trellis complexity of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Lin, W.
1995-01-01
It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
Runge-Kutta based generalized convolution quadrature
NASA Astrophysics Data System (ADS)
Lopez-Fernandez, Maria; Sauter, Stefan
2016-06-01
We present the Runge-Kutta generalized convolution quadrature (gCQ) with variable time steps for the numerical solution of convolution equations for time and space-time problems. We present the main properties of the method and a convergence result.
Symbol synchronization in convolutionally coded systems
NASA Technical Reports Server (NTRS)
Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.
1979-01-01
Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transition for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition free runs occur.
Rolling-Convolute Joint For Pressurized Glove
NASA Technical Reports Server (NTRS)
Kosmo, Joseph J.; Bassick, John W.
1994-01-01
Rolling-convolute metacarpal/finger joint enhances mobility and flexibility of pressurized glove. Intended for use in space suit to increase dexterity and decrease wearer's fatigue. Also useful in diving suits and other pressurized protective garments. Two ring elements plus bladder constitute rolling-convolute joint balancing torques caused by internal pressurization of glove. Provides comfortable grasp of various pieces of equipment.
The general theory of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Stanley, R. P.
1993-01-01
This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.
Search for optimal distance spectrum convolutional codes
NASA Technical Reports Server (NTRS)
Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.
1993-01-01
In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of their deep space satellites and probes (e.g. Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes, so finding good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
Adaptive decoding of convolutional codes
NASA Astrophysics Data System (ADS)
Hueske, K.; Geldmacher, J.; Götze, J.
2007-06-01
Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
The M&M Superposition Principle.
ERIC Educational Resources Information Center
Miller, John B.
2000-01-01
Describes a physical system for demonstrating operators, eigenvalues, and superposition of states for a set of unusual wave functions. Uses candy to provide students with a visual and concrete picture of a superposition of states rather than an abstract plot of several overlaid mathematical states. (WRM)
Many-Body Basis Set Superposition Effect.
Ouyang, John F; Bettens, Ryan P A
2015-11-10
The basis set superposition effect (BSSE) arises in electronic structure calculations of molecular clusters when questions relating to interactions between monomers within the larger cluster are asked. The binding energy, or total energy, of the cluster may be broken down into many smaller subcluster calculations and the energies of these subsystems linearly combined to, hopefully, produce the desired quantity of interest. Unfortunately, BSSE can plague these smaller fragment calculations. In this work, we carefully examine the major sources of error associated with reproducing the binding energy and total energy of a molecular cluster. In order to do so, we decompose these energies in terms of a many-body expansion (MBE), where a "body" here refers to the monomers that make up the cluster. In our analysis, we found it necessary to introduce something we designate here as a many-ghost many-body expansion (MGMBE). The work presented here produces some surprising results, but perhaps the most significant of all is that BSSE effects up to the order of truncation in a MBE of the total energy cancel exactly. In the case of the binding energy, the only BSSE correction terms remaining arise from the removal of the one-body monomer total energies. Nevertheless, our earlier work indicated that BSSE effects continued to remain in the total energy of the cluster up to very high truncation order in the MBE. We show in this work that the vast majority of these high-order many-body effects arise from BSSE associated with the one-body monomer total energies. Also, we found that, remarkably, the complete basis set limit values for the three-body and four-body interactions differed very little from that at the MP2/aug-cc-pVDZ level for the respective subclusters embedded within a larger cluster. PMID:26574311
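For reference, the many-body expansion discussed above takes the standard form (generic notation; the paper's MGMBE variant is not reproduced here):

    E_{tot} = \sum_i E_i + \sum_{i<j} \Delta E_{ij} + \sum_{i<j<k} \Delta E_{ijk} + \cdots

    \Delta E_{ij} = E_{ij} - E_i - E_j
    \Delta E_{ijk} = E_{ijk} - \Delta E_{ij} - \Delta E_{ik} - \Delta E_{jk} - E_i - E_j - E_k

where E_{ij...} denotes the total energy of the subcluster containing monomers i, j, and so on.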
Bernoulli convolutions and 1D dynamics
NASA Astrophysics Data System (ADS)
Kempton, Tom; Persson, Tomas
2015-10-01
We describe a family {φ_λ} of dynamical systems on the unit interval which preserve Bernoulli convolutions. We show that if there are parameter ranges for which these systems are piecewise convex, then the corresponding Bernoulli convolution will be absolutely continuous with bounded density. We study the systems {φ_λ} and give some numerical evidence to suggest values of λ for which {φ_λ} may be piecewise convex.
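For reference, the Bernoulli convolution \nu_\lambda, \lambda \in (0,1), is the distribution of a random power series with independent fair signs (a standard definition, included here for context):

    \nu_\lambda = \text{law of } \sum_{n=0}^{\infty} \epsilon_n \lambda^n, \qquad \epsilon_n \in \{-1, +1\} \text{ i.i.d., each with probability } 1/2

equivalently, \nu_\lambda is the unique self-similar measure satisfying \nu_\lambda = \tfrac{1}{2}\,\nu_\lambda \circ S_+^{-1} + \tfrac{1}{2}\,\nu_\lambda \circ S_-^{-1} with S_{\pm}(x) = \lambda x \pm 1.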
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity
Sales, J. S.; Silva, L. F. da; Almeida, N. G. de
2011-03-15
We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.
On the Use of Material-Dependent Damping in ANSYS for Mode Superposition Transient Analysis
Nie, J.; Wei, X.
2011-07-17
The mode superposition method is often used for dynamic analysis of complex structures, such as the seismic Category I structures in nuclear power plants, in place of the less efficient full method, which uses the full system matrices for calculation of the transient responses. In such applications, specification of material-dependent damping is usually desirable because complex structures can consist of multiple types of materials that may have different energy dissipation capabilities. A recent review of the ANSYS manual for several releases found that the use of material-dependent damping is not clearly explained for performing a mode superposition transient dynamic analysis. This paper includes several mode superposition transient dynamic analyses using different ways to specify damping in ANSYS, in order to determine how material-dependent damping can be specified conveniently in a mode superposition transient dynamic analysis.
Experimental superposition of orders of quantum gates.
Procopio, Lorenzo M; Moqanaki, Amir; Araújo, Mateus; Costa, Fabio; Alonso Calafell, Irati; Dowd, Emma G; Hamel, Deny R; Rozema, Lee A; Brukner, Časlav; Walther, Philip
2015-01-01
Quantum computers achieve a speed-up by placing quantum bits (qubits) in superpositions of different states. However, it has recently been appreciated that quantum mechanics also allows one to 'superimpose different operations'. Furthermore, it has been shown that using a qubit to coherently control the gate order allows one to accomplish a task (determining if two gates commute or anti-commute) with fewer gate uses than any known quantum algorithm. Here we experimentally demonstrate this advantage, in a photonic context, using a second qubit to control the order in which two gates are applied to a first qubit. We create the required superposition of gate orders by using additional degrees of freedom of the photons encoding our qubits. The new resource we exploit can be interpreted as a superposition of causal orders, and could allow quantum algorithms to be implemented with an efficiency unlikely to be achieved on a fixed-gate-order quantum computer. PMID:26250107
An approximate CPHD filter for superpositional sensors
NASA Astrophysics Data System (ADS)
Mahler, Ronald; El-Fallah, Adel
2012-06-01
Most multitarget tracking algorithms, such as JPDA, MHT, and the PHD and CPHD filters, presume the following measurement model: (a) targets are point targets, (b) every target generates at most a single measurement, and (c) any measurement is generated by at most a single target. However, the most familiar sensors, such as surveillance and imaging radars, violate assumption (c). This is because they are actually superpositional: that is, any measurement is a sum of signals generated by all of the targets in the scene. At this conference in 2009, the first author derived exact formulas for PHD and CPHD filters that presume general superpositional measurement models. Unfortunately, these formulas are computationally intractable. In this paper, we modify and generalize a Gaussian approximation technique due to Thouin, Nannuru, and Coates to derive a computationally tractable superpositional-CPHD filter. Implementation requires sequential Monte Carlo (particle filter) techniques.
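To make the violation of assumption (c) concrete, the sketch below builds a toy superpositional measurement: a single vector that is the sum of every target's signature plus noise. The Gaussian signature shape, grid size and noise level are placeholder choices of ours, not the sensor model of the paper.

    import numpy as np

    def superpositional_measurement(targets, rng, m=64, noise_sigma=0.1):
        # Each target contributes a Gaussian-shaped signature centered at
        # its position; the sensor reports only the *sum* over all targets.
        bins = np.arange(m)
        z = np.zeros(m)
        for pos, amp in targets:
            z += amp * np.exp(-0.5 * ((bins - pos) / 1.5) ** 2)
        return z + rng.normal(0.0, noise_sigma, m)

    rng = np.random.default_rng(0)
    z = superpositional_measurement([(12.0, 1.0), (14.5, 0.7), (40.0, 1.2)], rng)

Any filter built on assumptions (b) and (c), which expects one measurement per target, has no way to partition z among targets; this is what the superpositional-CPHD formulation addresses.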
Macroscopic Quantum Superposition in Cavity Optomechanics.
Liao, Jie-Qiao; Tian, Lin
2016-04-22
Quantum superposition in mechanical systems is not only key evidence for macroscopic quantum coherence, but can also be utilized in modern quantum technology. Here we propose an efficient approach for creating macroscopically distinct mechanical superposition states in a two-mode optomechanical system. Photon hopping between the two cavity modes is modulated sinusoidally. The modulated photon tunneling enables an ultrastrong radiation-pressure force acting on the mechanical resonator, and hence significantly increases the mechanical displacement induced by a single photon. We study systematically the generation of the Yurke-Stoler-like states in the presence of system dissipations. We also discuss the experimental implementation of this scheme. PMID:27152802
Large energy superpositions via Rydberg dressing
NASA Astrophysics Data System (ADS)
Khazali, Mohammadsadegh; Lau, Hon Wai; Humeniuk, Adam; Simon, Christoph
2016-08-01
We propose to create superposition states of over 100 strontium atoms in a ground state or metastable optical clock state using the Kerr-type interaction due to Rydberg state dressing in an optical lattice. The two components of the superposition can differ in energy by on the order of 300 eV, allowing tests of energy decoherence models with greatly improved sensitivity. We take into account the effects of higher-order nonlinearities, spatial inhomogeneity of the interaction, decay from the Rydberg state, collective many-body decoherence, atomic motion, molecular formation, and diminishing Rydberg level separation for increasing principal number.
The principle of superposition in human prehension
Zatsiorsky, Vladimir M.; Latash, Mark L.; Gao, Fan; Shim, Jae Kun
2010-01-01
The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: “Grasp the object stronger/weaker to prevent slipping” and “Maintain the rotational equilibrium of the object”. The effects of the two commands are summed up. PMID:20186284
The Evolution and Development of Neural Superposition
Agi, Egemen; Langen, Marion; Altschuler, Steven J.; Wu, Lani F.; Zimmermann, Timo
2014-01-01
Visual systems have a rich history as model systems for the discovery and understanding of basic principles underlying neuronal connectivity. The compound eyes of insects consist of up to thousands of small unit eyes that are connected by photoreceptor axons to set up a visual map in the brain. The photoreceptor axon terminals thereby represent neighboring points seen in the environment in neighboring synaptic units in the brain. Neural superposition is a special case of such a wiring principle, where photoreceptors from different unit eyes that receive the same input converge upon the same synaptic units in the brain. This wiring principle is remarkable, because each photoreceptor in a single unit eye receives different input and each individual axon, among thousands of others in the brain, must be sorted together with those few axons that have the same input. Key aspects of neural superposition have been described as early as 1907. Since then, neuroscientists and evolutionary and developmental biologists have been fascinated by how such a complicated wiring principle could evolve, how it is genetically encoded, and how it is developmentally realized. In this review article, we will discuss current ideas about the evolutionary origin and developmental program of neural superposition. Our goal is to identify in what way the special case of neural superposition can help us answer more general questions about the evolution and development of genetically “hard-wired” synaptic connectivity in the brain. PMID:24912630
Superposition of Polytropes in the Inner Heliosheath
NASA Astrophysics Data System (ADS)
Livadiotis, G.
2016-03-01
This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ≈ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.
Sequential Syndrome Decoding of Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to these codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum weight error sequence.
Number-Theoretic Functions via Convolution Rings.
ERIC Educational Resources Information Center
Berberian, S. K.
1992-01-01
Demonstrates the number theory property that the number of divisors of an integer n, convolved (in the Dirichlet sense) with the number of positive integers k less than or equal to and relatively prime to n, equals the sum of the divisors of n, using theory developed about multiplicative functions, the units of a convolution ring, and the Mobius Function. (MDH)
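Reading the stated property as a Dirichlet convolution rather than a pointwise product, it says (d * φ)(n) = σ(n), where d counts divisors, φ is Euler's totient and σ sums divisors. A minimal Python check (all function names are ours):

    from math import gcd

    def divisors(n):
        return [k for k in range(1, n + 1) if n % k == 0]

    def d(n):          # number of divisors of n
        return len(divisors(n))

    def phi(n):        # count of 1 <= k <= n with gcd(k, n) = 1
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    def sigma(n):      # sum of the divisors of n
        return sum(divisors(n))

    def dirichlet(f, g, n):
        # (f * g)(n) = sum over divisors k of n of f(k) g(n/k),
        # the multiplication of the convolution ring.
        return sum(f(k) * g(n // k) for k in divisors(n))

    assert all(dirichlet(d, phi, n) == sigma(n) for n in range(1, 200))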
About closedness by convolution of the Tsallis maximizers
NASA Astrophysics Data System (ADS)
Vignat, C.; Hero, A. O., III; Costa, J. A.
2004-09-01
In this paper, we study the stability under convolution of the maximizing distributions of the Tsallis entropy under energy constraint (called hereafter Tsallis distributions). These distributions are shown to obey three important properties: a stochastic representation property, an orthogonal invariance property and a duality property. As a consequence of these properties, the behavior of Tsallis distributions under convolution is characterized. Finally, a special random convolution, called the Kingman convolution, is shown to ensure the stability of Tsallis distributions.
Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution
Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K.
1999-01-01
The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
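The 3D-DFT convolution being validated, a voxelized cumulated-activity map convolved with a dose-point kernel, can be sketched with the FFT; the grid size and the 1/r²-style kernel below are generic placeholders, not the I-131 kernel of the study:

    import numpy as np

    def dose_fft_convolution(activity, kernel):
        # Zero-pad to the combined extent so the circular convolution
        # implied by the DFT does not wrap around the volume.
        shape = [a + k - 1 for a, k in zip(activity.shape, kernel.shape)]
        A = np.fft.rfftn(activity, shape)
        K = np.fft.rfftn(kernel, shape)
        return np.fft.irfftn(A * K, shape)

    act = np.zeros((32, 32, 32)); act[16, 16, 16] = 1.0   # point source
    z, y, x = np.indices((9, 9, 9)) - 4
    kern = 1.0 / np.maximum(x**2 + y**2 + z**2, 1.0)      # placeholder kernel
    dose = dose_fft_convolution(act, kern)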
Concurrence of superpositions of many states
Akhtarshenas, Seyed Javad
2011-04-15
In this paper, we use the concurrence vector as a measure of entanglement, and investigate lower and upper bounds on the concurrence of a superposition of bipartite states as a function of the concurrence of the superposed states. We show that the amount of entanglement quantified by the concurrence vector is exactly the same as that quantified by I concurrence, so that our results can be compared to those given in Phys. Rev. A 76, 042328 (2007). We obtain a tighter lower bound in the case in which the two superposed states are orthogonal. We also show that when the two superposed states are not necessarily orthogonal, both lower and upper bounds are, in general, tighter than the bounds given in terms of the I concurrence. An extension of the results to the case with more than two states in the superpositions is also given.
Deep Learning with Hierarchical Convolutional Factor Analysis
Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence
2013-01-01
Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342
A convolutional neural network neutrino event classifier
Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.
2016-09-01
Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.
A Mathematical Motivation for Complex-Valued Convolutional Networks.
Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur
2016-05-01
A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets. PMID:26890348
Quantum convolutional codes derived from constacyclic codes
NASA Astrophysics Data System (ADS)
Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng
2014-12-01
In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.
Satellite image classification using convolutional learning
NASA Astrophysics Data System (ADS)
Nguyen, Thao; Han, Jiho; Park, Dong-Chul
2013-10-01
A satellite image classification method using Convolutional Neural Network (CNN) architecture is proposed in this paper. As a special case of deep learning, CNN classifies images without any feature extraction step, while other existing classification methods utilize rather complex feature extraction processes. Experiments were performed on a set of satellite image data, and the preliminary results show that the proposed classification method can be a promising alternative to existing feature extraction-based schemes in terms of classification accuracy and classification speed.
Spatio-spectral concentration of convolutions
NASA Astrophysics Data System (ADS)
Hanasoge, Shravan M.
2016-05-01
Differential equations may possess coefficients that vary on a spectrum of scales. Because coefficients are typically multiplicative in real space, they turn into convolution operators in spectral space, mixing all wavenumbers. However, in many applications, only the largest scales of the solution are of interest and so the question turns to whether it is possible to build effective coarse-scale models of the coefficients in such a manner that the large scales of the solution are left intact. Here we apply the method of numerical homogenisation to deterministic linear equations to generate sub-grid-scale models of coefficients at desired frequency cutoffs. We use the Fourier basis to project, filter and compute correctors for the coefficients. The method is tested in 1D and 2D scenarios and found to reproduce the coarse scales of the solution to varying degrees of accuracy depending on the cutoff. We relate this method to mode-elimination Renormalisation Group (RG) and discuss the connection between accuracy and the cutoff wavenumber. The tradeoff is governed by a form of the uncertainty principle for convolutions, which states that as the convolution operator is squeezed in the spectral domain, it broadens in real space. As a consequence, basis sparsity is a high virtue and the choice of the basis can be critical.
Convolutional Neural Network Based dem Super Resolution
NASA Astrophysics Data System (ADS)
Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang
2016-06-01
DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of some learning examples. There, a nonlocal algorithm was introduced to deal with it, and many experiments showed that the strategy is feasible. In that publication, the learning examples were defined as parts of the original DEM and their related high-resolution measurements, because this choice avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this new strategy, the learning examples should be diverse and easy to obtain. Yet this may cause problems of incompatibility and a lack of robustness. To overcome them, we investigate a convolutional neural network based method, as sketched below. The input of the convolutional neural network is a low-resolution DEM and the output is expected to be its high-resolution counterpart. A three-layer model is adopted. The first layer detects features in the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. According to this designed structure, some learning DEMs are used to train the network; specifically, it is optimized by minimizing the error between its output and the expected high-resolution DEM. In practical applications, a test DEM is input to the convolutional neural network and a super-resolution result is obtained. Many experiments show that the CNN-based method can obtain better reconstructions than many classic interpolation methods.
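A minimal sketch of the three-layer structure described above (feature detection, feature integration, reconstruction), written here in PyTorch; the channel counts, kernel sizes and stand-in training data are assumptions of ours, not the authors' configuration:

    import torch
    import torch.nn as nn

    class DemSRNet(nn.Module):
        # Layer 1 detects features, layer 2 compresses/integrates them,
        # layer 3 transforms the compressed features into a new DEM.
        def __init__(self):
            super().__init__()
            self.detect = nn.Conv2d(1, 64, kernel_size=9, padding=4)
            self.integrate = nn.Conv2d(64, 32, kernel_size=1)
            self.reconstruct = nn.Conv2d(32, 1, kernel_size=5, padding=2)

        def forward(self, dem):
            h = torch.relu(self.detect(dem))
            h = torch.relu(self.integrate(h))
            return self.reconstruct(h)

    # Training step: minimize the error between the network output and the
    # expected high-resolution DEM, as the abstract describes.
    net = DemSRNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    lo = torch.randn(8, 1, 64, 64)   # stand-in low-resolution DEM patches
    hi = torch.randn(8, 1, 64, 64)   # stand-in high-resolution targets
    loss = nn.functional.mse_loss(net(lo), hi)
    opt.zero_grad(); loss.backward(); opt.step()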
Blind Identification of Convolutional Encoder Parameters
Su, Shaojing; Zhou, Jing; Huang, Zhiping; Liu, Chunwu; Zhang, Yimeng
2014-01-01
This paper gives a solution to the blind parameter identification of a convolutional encoder. The problem can be addressed in the context of noncooperative communications or adaptive coding and modulation (ACM) for cognitive radio networks. We consider an intelligent communication receiver which can blindly recognize the coding parameters of the received data stream. The only knowledge is that the stream is encoded using binary convolutional codes, while the coding parameters are unknown. Previous studies have made significant contributions to the recognition of convolutional encoder parameters in hard-decision situations. However, soft-decision systems are applied more and more as signal processing techniques improve. In this paper we propose a method to utilize the soft information to improve the recognition performance in soft-decision communication systems. In addition, we propose a new recognition method based on a correlation attack to handle low signal-to-noise ratio situations. Finally, we give simulation results to show the efficiency of the proposed methods. PMID:24982997
Toward quantum superposition of living organisms
NASA Astrophysics Data System (ADS)
Romero-Isart, Oriol; Juan, Mathieu L.; Quidant, Romain; Cirac, J. Ignacio
2010-03-01
The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deléglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6; Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schrödinger's cat 'gedanken' paradigm (Schrödinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.
Profile of CT scan output dose in axial and helical modes using convolution
NASA Astrophysics Data System (ADS)
Anam, C.; Haryanto, F.; Widita, R.; Arif, I.; Dougherty, G.
2016-03-01
The profile of the CT scan output dose is crucial for establishing the patient dose profile. The purpose of this study is to investigate the profile of the CT scan output dose in both axial and helical modes using convolution. A single scan output dose profile (SSDP) in the center of a head phantom was measured using a solid-state detector. The multiple scan output dose profile (MSDP) in the axial mode was calculated using convolution between the SSDP and a delta function, whereas for the helical mode the MSDP was calculated using convolution between the SSDP and a rectangular function. MSDPs were calculated for a number of scans (5, 10, 15, 20 and 25). The multiple scan average dose (MSAD) for differing numbers of scans was compared to the value of the CT dose index (CTDI). Finally, the edge values of the MSDP for every scan number were compared to the corresponding MSAD values. MSDPs were successfully generated by using convolution between an SSDP and the appropriate function. We found that CTDI only accurately estimates MSAD when the number of scans is more than 10. We also found that the edge values of the profiles were 42% to 93% lower than the corresponding MSADs.
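The two convolutions described here are easy to reproduce on a toy profile; a sketch, with an invented single-scan profile and grid (the measured SSDP of the study is not available):

    import numpy as np

    z = np.linspace(-50, 50, 1001)           # mm; 0.1 mm grid (assumed)
    ssdp = np.exp(-z**2 / 50) + 0.02 * np.exp(-np.abs(z) / 30)  # toy SSDP

    def msdp_axial(ssdp, n_scans, spacing_pts):
        # Axial mode: convolve the SSDP with a train of delta functions,
        # one impulse per scan position.
        comb = np.zeros((n_scans - 1) * spacing_pts + 1)
        comb[::spacing_pts] = 1.0
        return np.convolve(ssdp, comb)

    def msdp_helical(ssdp, scan_length_pts):
        # Helical mode: convolve the SSDP with a rectangular function
        # covering the continuously scanned length.
        return np.convolve(ssdp, np.ones(scan_length_pts))

    msdp = msdp_axial(ssdp, n_scans=15, spacing_pts=100)  # 10 mm spacing
    central_dose = msdp[len(msdp) // 2]  # cf. the MSAD comparison above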
X-ray optics simulation using Gaussian superposition technique.
Idir, Mourad; Cywiak, Moisés; Morales, Arquímedes; Modi, Mohammed H
2011-09-26
We present an efficient method to perform x-ray optics simulation with high or partially coherent x-ray sources using a Gaussian superposition technique. In a previous paper, we demonstrated that full characterization of optical systems, diffractive and geometric, is possible by using the Fresnel Gaussian Shape Invariant (FGSI) previously reported in the literature. The complex amplitude distribution in the object plane is represented by a linear superposition of complex Gaussian wavelets and then propagated through the optical system by means of the referred Gaussian invariant. This allows ray tracing through the optical system and at the same time allows calculating with high precision the complex wave-amplitude distribution at any plane of observation. This technique can be applied in a wide spectral range where the Fresnel diffraction integral applies, including visible light, x-rays, acoustic waves, etc. We describe the technique and include some computer simulations as illustrative examples for x-ray optical components. We show also that this method can be used to study partial or total coherence illumination problems. PMID:21996845
NASA Astrophysics Data System (ADS)
Dubrovsky, V. G.; Topovsky, A. V.
2013-03-01
New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u(n), n = 1, …, N, are constructed via the Zakharov-Manakov ∂-bar dressing method. Simple nonlinear superpositions are represented, up to a constant, by the sums of solutions u(n) and calculated by ∂-bar dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. It is remarkable that in the zero energy limit simple nonlinear superpositions convert to linear ones in the form of the sums of special solutions u(n). It is shown that the sums u = u(k1) + … + u(km), 1 ⩽ k1 < k2 < … < km ⩽ N, of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions also represent new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.
Transient Response of Shells of Revolution by Direct Integration and Modal Superposition Methods
NASA Technical Reports Server (NTRS)
Stephens, W. B.; Adelman, H. M.
1974-01-01
The results of an analytical effort to obtain and evaluate transient response data for a cylindrical and a conical shell using two different approaches, direct integration and modal superposition, are described. The inclusion of nonlinear terms is more important than the inclusion of secondary linear effects (transverse shear deformation and rotary inertia), although there are thin-shell structures where these secondary effects are important. The advantages of the direct integration approach are that geometric nonlinear and secondary effects are easy to include and high-frequency response may be calculated. In comparison to the modal superposition technique, the computer storage requirements are smaller. The advantages of the modal superposition approach are that the solution is independent of the previous time history and that once the modal data are obtained, the response for repeated cases may be efficiently computed. Also, any admissible set of initial conditions can be applied.
Maximum predictive power and the superposition principle
NASA Technical Reports Server (NTRS)
Summhammer, Johann
1994-01-01
In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept the accuracy of a prediction must only depend on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that for a probability which is a function of two other probabilities maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.
Atom Microscopy via Dual Resonant Superposition
NASA Astrophysics Data System (ADS)
Abdul Jabar, M. S.; Bakht, Amin Bacha; Jalaluddin, M.; Iftikhar, Ahmad
2015-12-01
An M-type Rb87 atomic system is proposed for one-dimensional atom microscopy under the condition of electromagnetically induced transparency. Super-localization of the atom is observed in the absorption spectrum, while its delocalization is observed in the dispersion spectrum, due to the dual superposition effect of the resonant fields. The observed minimum-uncertainty peaks will find important applications in laser cooling, the creation of focused atom beams, atom nanolithography, and measurement of the center-of-mass wave function of moving atoms.
Design of artificial spherical superposition compound eye
NASA Astrophysics Data System (ADS)
Cao, Zhaolou; Zhai, Chunjie; Wang, Keyi
2015-12-01
In this research, the design of an artificial spherical superposition compound eye is presented. The imaging system consists of three layers of lens arrays. In each channel, two lenses are designed to control the angular magnification and a field lens is added to improve the image quality and extend the field of view. Aspherical surfaces are introduced to improve the image quality. Ray tracing results demonstrate that light from the same object point is focused at the same imaging point through different channels. Therefore the system has much higher energy efficiency than a conventional spherical apposition compound eye.
On Kolmogorov's superpositions and Boolean functions
Beiu, V.
1998-12-31
The paper overviews results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on an explicit numerical (i.e., constructive) algorithm for Kolmogorov's superpositions, it is shown that for obtaining minimum-size neural networks implementing any Boolean function, the activation function of the neurons is the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size in the worst case, it follows that size-optimal solutions for implementing arbitrary Boolean functions require analog circuitry. Conclusions and several comments on the required precision end the paper.
A convolution model of rock bed thermal storage units
NASA Astrophysics Data System (ADS)
Sowell, E. F.; Curry, R. L.
1980-01-01
A method is presented whereby a packed-bed thermal storage unit is dynamically modeled for bi-directional flow and arbitrary input flow stream temperature variations. The method is based on the principle of calculating the output temperature as the sum of earlier input temperatures, each multiplied by a predetermined 'response factor', i.e., discrete convolution. A computer implementation of the scheme, in the form of a subroutine for a widely used solar simulation program (TRNSYS), is described and numerical results are compared with other models. Also, a method for efficient computation of the required response factors is described; this solution is for a triangular input pulse, previously unreported, although the solution method is also applicable to other input functions. This solution requires a single integration of a known function which is easily carried out numerically to the required precision.
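A sketch of the response-factor convolution; the factor values and temperatures below are invented, whereas real response factors would come from the integration the abstract describes:

    import numpy as np

    def outlet_temperature(inlet_history, response_factors):
        # Discrete convolution: the current outlet temperature is the sum
        # of earlier inlet temperatures, each weighted by its response
        # factor. inlet_history[k] is the inlet temperature k steps ago.
        return sum(r * t for r, t in zip(response_factors, inlet_history))

    # Hypothetical decaying response factors, normalized to conserve energy.
    rf = np.exp(-np.arange(24) / 6.0)
    rf /= rf.sum()
    inlet = [60.0] * 4 + [20.0] * 20       # deg C: a 4-step hot charge pulse
    t_out = outlet_temperature(inlet, rf)  # smoothed, delayed bed response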
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
Resampling of data between arbitrary grids using convolution interpolation.
Rasche, V; Proksa, R; Sinkus, R; Börnert, P; Eggers, H
1999-05-01
For certain medical applications, resampling of data is required. In magnetic resonance tomography (MRT) or computed tomography (CT), for example, data may be sampled on nonrectilinear grids in the Fourier domain. For image reconstruction, a convolution-interpolation algorithm, often called gridding, can be applied to resample the data onto a rectilinear grid. Resampling of data from a rectilinear onto a nonrectilinear grid is needed, e.g., if projections of a given rectilinear data set are to be obtained. In this paper we introduce the application of convolution interpolation for resampling of data from one arbitrary grid onto another. The basic algorithm can be split into two steps. First, the data are resampled from the arbitrary input grid onto a rectilinear grid and second, the rectilinear data are resampled onto the arbitrary output grid. Furthermore, we introduce a new technique to derive the sampling density function needed for the first step of our algorithm. For fast, sampling-pattern-independent determination of the sampling density function, the Voronoi diagram of the sample distribution is calculated. The volume of the Voronoi cell around each sample is used as a measure of the sampling density. It is shown that the introduced resampling technique allows fast resampling of data between arbitrary grids. Furthermore, it is shown that the suggested approach to deriving the sampling density function is suitable even for arbitrary sampling patterns. Examples are given in which the proposed technique has been applied for the reconstruction of data acquired along spiral, radial, and arbitrary trajectories and for the fast calculation of projections of a given rectilinearly sampled image. PMID:10416800
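A one-dimensional sketch of the two ideas: a Voronoi-style density estimate (in 1D, each sample's cell is half the gap to its neighbours) followed by convolution interpolation onto a rectilinear grid. The Gaussian kernel and its width are simplifications of ours, and practical gridding also applies deapodization, omitted here:

    import numpy as np

    def grid_1d(samples, values, grid, kernel_width=1.5):
        order = np.argsort(samples)
        s = np.asarray(samples, float)[order]
        v = np.asarray(values, float)[order]
        # 1D Voronoi cell lengths serve as density-compensation weights.
        edges = np.concatenate(([s[0]], (s[:-1] + s[1:]) / 2, [s[-1]]))
        w = np.diff(edges)
        out = np.zeros_like(grid, dtype=float)
        for xi, vi, wi in zip(s, v, w):
            out += wi * vi * np.exp(-((grid - xi) / kernel_width) ** 2)
        return out

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 80)             # nonuniform sample positions
    resampled = grid_1d(x, np.sin(x), np.linspace(0, 10, 101))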
Convolutional coding combined with continuous phase modulation
NASA Technical Reports Server (NTRS)
Pizzi, S. V.; Wilson, S. G.
1985-01-01
Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.
Convolution neural networks for ship type recognition
NASA Astrophysics Data System (ADS)
Rainey, Katie; Reeder, John D.; Corelli, Alexander G.
2016-05-01
Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.
Geometric multi-resolution analysis and data-driven convolutions
NASA Astrophysics Data System (ADS)
Strawn, Nate
2015-09-01
We introduce a procedure for learning discrete convolutional operators for generic datasets which recovers the standard block convolutional operators when applied to sets of natural images. The key observation is that the standard block convolutional operators on images are intuitive because humans naturally understand the grid structure of the self-evident functions over image spaces (pixels). This procedure first constructs a Geometric Multi-Resolution Analysis (GMRA) on the set of variables giving rise to a dataset, and then leverages the details of this data structure to identify subsets of variables upon which convolutional operators are supported, as well as a space of functions that can be shared coherently amongst these supports.
Convolutional fountain distribution over fading wireless channels
NASA Astrophysics Data System (ADS)
Usman, Mohammed
2012-08-01
Mobile broadband has opened the possibility of a rich variety of services to end users. Broadcast/multicast of multimedia data is one such service, which can be used to deliver multimedia to multiple users economically. However, the radio channel poses serious challenges due to its time-varying properties, resulting in each user experiencing different channel characteristics, independent of other users. Conventional methods of achieving reliability in communication, such as automatic repeat request and forward error correction, do not scale well in a broadcast/multicast scenario over radio channels. Fountain codes, being rateless and information additive, overcome these problems. Although the design of fountain codes makes it possible to generate an infinite sequence of encoded symbols, the erroneous nature of radio channels mandates the need for protecting the fountain-encoded symbols so that the transmission is feasible. In this article, the performance of fountain codes in combination with convolutional codes, when used over radio channels, is presented. An investigation of various parameters, such as goodput, delay and buffer size requirements, pertaining to the performance of fountain codes in a multimedia broadcast/multicast environment is presented. Finally, a strategy for the use of a 'convolutional fountain' over radio channels is also presented.
Convolution Inequalities for the Boltzmann Collision Operator
NASA Astrophysics Data System (ADS)
Alonso, Ricardo J.; Carneiro, Emanuel; Gamba, Irene M.
2010-09-01
We study integrability properties of a general version of the Boltzmann collision operator for hard and soft potentials in n dimensions. A reformulation of the collisional integrals allows us to write the weak form of the collision operator as a weighted convolution, where the weight is given by an operator invariant under rotations. Using a symmetrization technique in L^p, we prove a Young's inequality for hard potentials, which is sharp for Maxwell molecules in the L^2 case. Further, we find a new Hardy-Littlewood-Sobolev type of inequality for Boltzmann collision integrals with soft potentials. The same method extends to radially symmetric, non-increasing potentials that lie in some weak L^s or L^s space. The method we use resembles a Brascamp, Lieb and Luttinger approach for multilinear weighted convolution inequalities and follows a weak formulation setting. Consequently, it is closely connected to the classical analysis of Young and Hardy-Littlewood-Sobolev inequalities. In all cases, the inequality constants are explicitly given by formulas depending on integrability conditions of the angular cross section (in the spirit of Grad cut-off). As an additional application of the technique we also obtain estimates with exponential weights for hard potentials in both conservative and dissipative interactions.
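The Young's inequality referred to here, ||f*g||_r ≤ C ||f||_p ||g||_q with 1/p + 1/q = 1 + 1/r, can be sanity-checked numerically with C = 1 on smooth decaying functions; f and g below are toy choices of ours, not collision kernels:

    import numpy as np

    def lp_norm(f, dx, p):
        return (np.sum(np.abs(f) ** p) * dx) ** (1.0 / p)

    dx = 0.01
    x = np.arange(-10.0, 10.0, dx)
    f = np.exp(-x**2)
    g = 1.0 / (1.0 + x**2)
    conv = np.convolve(f, g) * dx          # discretized (f * g)

    p = q = 1.5                            # then 1/p + 1/q = 4/3 = 1 + 1/3
    r = 3.0
    assert lp_norm(conv, dx, r) <= lp_norm(f, dx, p) * lp_norm(g, dx, q)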
Convolution formulations for non-negative intensity.
Williams, Earl G
2013-08-01
Previously unknown spatial convolution formulas for a variant of the active normal intensity in planar coordinates have been derived that use measured pressure or normal velocity near-field holograms to construct a positive-only (outward) intensity distribution in the plane, quantifying the areas of the vibrating structure that produce radiation to the far-field. This is an extension of the outgoing-only (unipolar) intensity technique recently developed for arbitrary geometries by Steffen Marburg. The method is applied independently to pressure and velocity data measured in a plane close to the surface of a point-driven, unbaffled rectangular plate in the laboratory. It is demonstrated that the sound producing regions of the structure are clearly revealed using the derived formulas and that the spatial resolution is limited to a half-wavelength. A second set of formulas called the hybrid-intensity formulas are also derived which yield a bipolar intensity using a different spatial convolution operator, again using either the measured pressure or velocity. It is demonstrated from the experiment results that the velocity formula yields the classical active intensity and the pressure formula an interesting hybrid intensity that may be useful for source localization. Computations are fast and carried out in real space without Fourier transforms into wavenumber space. PMID:23927105
Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method
Li, Haisen S.; Chetty, Indrin J.; Solberg, Timothy D.
2008-05-15
The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method ("average-based convolution"), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of the population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) was used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared against the planned dose evaluated by using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (>30 fractions) regimen, the discrepancy in total dose due to interplay effect was negligible.
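The distinction between the two convolutions can be sketched in one dimension; the dose profiles, grid and PDFs below are invented placeholders, not the Calypso-derived data of the study:

    import numpy as np

    def blur(static_dose, pdf):
        # Convolve a static dose profile with a motion PDF ('same' mode
        # keeps the result on the static calculation grid).
        return np.convolve(static_dose, pdf / pdf.sum(), mode="same")

    def segment_based(segment_doses, segment_pdfs):
        # Each segment's dose is blurred with the PDF of motion observed
        # during *that* segment, capturing the interplay effect.
        return sum(blur(d, p) for d, p in zip(segment_doses, segment_pdfs))

    def average_based(segment_doses, fraction_pdf):
        # Conventional approach: one PDF for the whole fraction.
        return blur(sum(segment_doses), fraction_pdf)

    x = np.linspace(-30, 30, 601)                      # mm
    seg_doses = [np.exp(-((x - c) / 5.0) ** 2) for c in (-5, 0, 5)]
    seg_pdfs = [np.exp(-((x - m) / 2.0) ** 2) for m in (-2, 0, 2)]
    interplay = segment_based(seg_doses, seg_pdfs)
    no_interplay = average_based(seg_doses, sum(seg_pdfs))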
Phase properties of multicomponent superposition states in various amplifiers
NASA Technical Reports Server (NTRS)
Lee, Kang-Soo; Kim, M. S.
1994-01-01
There have been theoretical studies on the generation of optical coherent superposition states. Once the superposition state is generated, it is natural to ask if it is possible to amplify it without losing the nonclassical properties of the field state. We consider amplification of the superposition state in various amplifiers such as a sub-Poissonian amplifier, a phase-sensitive amplifier and a classical amplifier. We show the evolution of phase probability distribution functions in the amplifier.
Authentication Protocol using Quantum Superposition States
Kanamori, Yoshito; Yoo, Seong-Moo; Gregory, Don A.; Sheldon, Frederick T
2009-01-01
When it became known that quantum computers could break the RSA (named for its creators, Rivest, Shamir, and Adleman) encryption algorithm in polynomial time, quantum cryptography began to be actively studied. Other classical cryptographic algorithms are only secure when malicious users do not have sufficient computational power to break security within a practical amount of time. Recently, many quantum authentication protocols sharing quantum entangled particles between communicators have been proposed, providing unconditional security. An issue caused by sharing quantum entangled particles is that it may not be simple to apply these protocols to authenticate a specific user in a group of many users. An authentication protocol using quantum superposition states instead of quantum entangled particles is proposed. The random number shared between a sender and a receiver can be used for classical encryption after the authentication has succeeded. The proposed protocol can be implemented with the current technologies we introduce in this paper.
On the superposition principle in interference experiments
Sinha, Aninda; H. Vijay, Aravind; Sinha, Urbasi
2015-01-01
The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations as well as the non-relativistic Schrödinger equation. PMID:25973948
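The Sorkin parameter referred to above measures the failure of a purely pairwise interference decomposition in a three-slit experiment: ε = P_ABC − P_AB − P_AC − P_BC + P_A + P_B + P_C. Under naive superposition (the amplitude for a set of open slits is the sum of the single-slit amplitudes) ε vanishes identically, which the following sketch checks with arbitrary complex amplitudes:

    import numpy as np

    def sorkin_epsilon(a, b, c):
        # a, b, c: single-slit amplitudes at one detector position.
        P = lambda *amps: abs(sum(amps)) ** 2   # Born rule on summed amplitudes
        return (P(a, b, c)
                - P(a, b) - P(a, c) - P(b, c)
                + P(a) + P(b) + P(c))

    rng = np.random.default_rng(1)
    a, b, c = (complex(*rng.normal(size=2)) for _ in range(3))
    assert abs(sorkin_epsilon(a, b, c)) < 1e-12

The deviations studied in the paper arise precisely because exact treatments (FDTD, path integrals) do not obey this naive sum-of-single-slit-amplitudes assumption.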
Image statistics decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Pitt, G. H., III; Swanson, L.; Yuen, J. H.
1987-01-01
It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so its feasibility and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.
Some partial-unit-memory convolutional codes
NASA Technical Reports Server (NTRS)
Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.
1991-01-01
The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes are compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.
Bacterial colony counting by Convolutional Neural Networks.
Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto
2015-08-01
Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, yet fundamental task in microbiology. Computer vision based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging, due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNN) for counting the number of colonies contained in confluent agglomerates, which scored an overall accuracy of 92.8% on a large challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches, becoming a promising approach to many related applications. PMID:26738016
Nonclassical Properties of Q-Deformed Superposition Light Field State
NASA Technical Reports Server (NTRS)
Ren, Min; Shenggui, Wang; Ma, Aiqun; Jiang, Zhuohong
1996-01-01
In this paper, the squeezing effect, the bunching effect and the anti-bunching effect of a superposition light field state involving the q-deformed vacuum state and the q-Glauber coherent state are studied, and the dependence of these effects of the q-deformed superposition light field state on the controllable q-parameter is obtained.
Dose discrepancies in the buildup region and their impact on dose calculations for IMRT fields
Hsu, Shu-Hui; Moran, Jean M.; Chen Yu; Kulasekere, Ravi; Roberson, Peter L.
2010-05-15
Purpose: Dose accuracy in the buildup region for radiotherapy treatment planning suffers from challenges in both measurement and calculation. This study investigates the dosimetry in the buildup region at normal and oblique incidences for open and IMRT fields and assesses the quality of the treatment planning calculations. Methods: This study was divided into three parts. First, percent depth doses and profiles (for 5×5, 10×10, 20×20, and 30×30 cm² field sizes at 0°, 45°, and 70° incidence) were measured in the buildup region in Solid Water using an Attix parallel plate chamber and Kodak XV film, respectively. Second, the parameters in the empirical contamination (EC) term of the convolution/superposition (CVSP) calculation algorithm were fitted based on open field measurements. Finally, seven segmental head-and-neck IMRT fields were measured on a flat phantom geometry and compared to calculations using γ and dose-gradient compensation (C) indices to evaluate the impact of residual discrepancies and to assess the adequacy of the contamination term for IMRT fields. Results: Local deviations between measurements and calculations for open fields were within 1% and 4% in the buildup region for normal and oblique incidences, respectively. The C index with 5%/1 mm criteria for IMRT fields ranged from 89% to 99% and from 96% to 98% at 2 mm and 10 cm depths, respectively. The quality of agreement in the buildup region for open and IMRT fields is comparable to that in nonbuildup regions. Conclusions: The added EC term in CVSP was determined to be adequate for both open and IMRT fields. Due to the dependence of calculation accuracy on (1) EC modeling, (2) internal convolution and density grid sizes, (3) implementation details in the algorithm, and (4) the accuracy of measurements used for treatment planning system commissioning, the authors recommend an evaluation of the accuracy of near-surface dose calculations as a part of treatment planning
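For illustration (not from the study above), a minimal 1D global gamma evaluation of the kind used to compare measured and calculated dose profiles; array names and the default 5%/1 mm criteria are assumptions for the sketch:

```python
import numpy as np

def gamma_1d(x_eval, d_eval, x_ref, d_ref, dd=0.05, dta=1.0):
    """Global 1D gamma index. dd: dose criterion as a fraction of the
    maximum reference dose; dta: distance-to-agreement in mm."""
    x_eval, d_eval = np.asarray(x_eval, float), np.asarray(d_eval, float)
    x_ref, d_ref = np.asarray(x_ref, float), np.asarray(d_ref, float)
    d_norm = dd * d_ref.max()
    gam = np.empty_like(d_eval)
    for i in range(len(d_eval)):
        big_gamma = np.sqrt(((x_ref - x_eval[i]) / dta) ** 2
                            + ((d_ref - d_eval[i]) / d_norm) ** 2)
        gam[i] = big_gamma.min()
    return gam  # a point passes where gam <= 1
```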
Sperling, Edit; Bunner, Anne E.; Sykes, Michael T.; Williamson, James R.
2008-01-01
Quantitative proteomic mass spectrometry involves comparison of the amplitudes of peaks resulting from different isotope labeling patterns, including fractional atomic labeling and fractional residue labeling. We have developed a general and flexible analytical treatment of the complex isotope distributions that arise in these experiments, using Fourier transform convolution to calculate labeled isotope distributions and least-squares for quantitative comparison with experimental peaks. The degree of fractional atomic and fractional residue labeling can be determined from experimental peaks at the same time as the integrated intensity of all of the isotopomers in the isotope distribution. The approach is illustrated using data with fractional 15N-labeling and fractional 13C-isoleucine labeling. The least-squares Fourier transform convolution approach can be applied to many types of quantitative proteomic data, including data from stable isotope labeling by amino acids in cell culture and pulse labeling experiments. PMID:18522437
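As a rough illustration of the Fourier-transform convolution idea (not the authors' code), the aggregated isotope distribution of a molecule can be obtained by raising each element's isotope-pattern FFT to the power of its atom count; the abundance table is approximate, and fractional labeling would mix patterns before convolving:

```python
import numpy as np

# approximate natural isotope patterns, indexed by nominal mass offset
PATTERNS = {
    'C': [0.9893, 0.0107],
    'H': [0.999885, 0.000115],
    'N': [0.99636, 0.00364],
    'O': [0.99757, 0.00038, 0.00205],
    'S': [0.9499, 0.0075, 0.0425, 0.0, 0.0001],
}

def isotope_distribution(atoms, npts=64):
    """Aggregated isotope distribution via Fourier-transform convolution:
    convolving per-atom patterns is multiplication (powers) in the
    Fourier domain. atoms: e.g. {'C': 34, 'H': 53, 'N': 7, 'O': 8}."""
    spec = np.ones(npts, dtype=complex)
    for elem, count in atoms.items():
        p = np.zeros(npts)
        p[:len(PATTERNS[elem])] = PATTERNS[elem]
        spec *= np.fft.fft(p) ** count
    dist = np.fft.ifft(spec).real
    return dist / dist.sum()

# fractional 15N labeling at level f would replace each nitrogen pattern by
# [(1 - f) * 0.99636, f + (1 - f) * 0.00364] before convolving
```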
Accelerated unsteady flow line integral convolution.
Liu, Zhanping; Moorhead, Robert J
2005-01-01
Unsteady flow line integral convolution (UFLIC) is a texture synthesis technique for visualizing unsteady flows with high temporal-spatial coherence. Unfortunately, UFLIC requires considerable time to generate each frame due to the huge amount of pathline integration that is computed for particle value scattering. This paper presents Accelerated UFLIC (AUFLIC) for near interactive (1 frame/second) visualization with 160,000 particles per frame. AUFLIC reuses pathlines in the value scattering process to reduce computationally expensive pathline integration. A flow-driven seeding strategy is employed to distribute seeds such that only a few of them need pathline integration while most seeds are placed along the pathlines advected at earlier times by other seeds upstream and, therefore, the known pathlines can be reused for fast value scattering. To maintain a dense scattering coverage to convey high temporal-spatial coherence while keeping the expense of pathline integration low, a dynamic seeding controller is designed to decide whether to advect, copy, or reuse a pathline. At a negligible memory cost, AUFLIC is 9 times faster than UFLIC with comparable image quality. PMID:15747635
Blind source separation of convolutive mixtures
NASA Astrophysics Data System (ADS)
Makino, Shoji
2006-04-01
This paper introduces the blind source separation (BSS) of convolutive mixtures of acoustic signals, especially speech. A statistical and computational technique, called independent component analysis (ICA), is examined. By achieving nonlinear decorrelation, nonstationary decorrelation, or time-delayed decorrelation, we can find source signals only from observed mixed signals. Particular attention is paid to the physical interpretation of BSS from the acoustical signal processing point of view. Frequency-domain BSS is shown to be equivalent to two sets of frequency domain adaptive microphone arrays, i.e., adaptive beamformers (ABFs). Although BSS can reduce reverberant sounds to some extent in the same way as ABF, it mainly removes the sounds from the jammer direction. This is why BSS has difficulties with long reverberation in the real world. If sources are not "independent," the dependence results in bias noise when obtaining the correct separation filter coefficients. Therefore, the performance of BSS is limited by that of ABF. Although BSS is upper bounded by ABF, BSS has a strong advantage over ABF. BSS can be regarded as an intelligent version of ABF in the sense that it can adapt without any information on the array manifold or the target direction, and sources can be simultaneously active in BSS.
Metaheuristic Algorithms for Convolution Neural Network.
Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni
2016-01-01
A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738
Superposition rules for higher order systems and their applications
NASA Astrophysics Data System (ADS)
Cariñena, J. F.; Grabowski, J.; de Lucas, J.
2012-05-01
Superposition rules form a class of functions that describe general solutions of systems of first-order ordinary differential equations in terms of generic families of particular solutions and certain constants. In this work, we extend this notion and other related ones to systems of higher order differential equations and analyse their properties. Several results concerning the existence of various types of superposition rules for higher order systems are proved and illustrated with examples extracted from the physics and mathematics literature. In particular, two new superposition rules for the second- and third-order Kummer-Schwarz equations are derived.
Nonclassical properties and quantum resources of hierarchical photonic superposition states
Volkoff, T. J.
2015-11-15
We motivate and introduce a class of “hierarchical” quantum superposition states of N coupled quantum oscillators. Unlike other well-known multimode photonic Schrödinger-cat states such as entangled coherent states, the hierarchical superposition states are characterized as two-branch superpositions of tensor products of single-mode Schrödinger-cat states. In addition to analyzing the photon statistics and quasiprobability distributions of prominent examples of these nonclassical states, we consider their usefulness for high-precision quantum metrology of nonlinear optical Hamiltonians and quantify their mode entanglement. We propose two methods for generating hierarchical superpositions in N = 2 coupled microwave cavities, exploiting currently existing quantum optical technology for generating entanglement between spatially separated electromagnetic field modes.
A Galois connection approach to superposition and inaccessibility
NASA Astrophysics Data System (ADS)
Butterfield, Jeremy; Melia, Joseph
1993-12-01
Working in a quantum logic framework and using the idea of Galois connections, we give a natural sufficient condition for superposition and inaccessibility to give the same closure map on sets of states.
Quantum State Engineering Via Coherent-State Superpositions
NASA Technical Reports Server (NTRS)
Janszky, Jozsef; Adam, P.; Szabo, S.; Domokos, P.
1996-01-01
The quantum interference between the two parts of the optical Schrödinger-cat state makes it possible to construct a wide class of quantum states via discrete superpositions of coherent states. Even a small number of coherent states can approximate a given quantum state to high accuracy when the distance between the coherent states is optimized; e.g., a nearly perfect Fock state can be constructed from discrete superpositions of n + 1 coherent states lying in the vicinity of the vacuum state.
Noise-enhanced convolutional neural networks.
Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart
2016-06-01
Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. PMID:26700535
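A loose sketch of the output-layer noise injection as we read the abstract: Gaussian noise is kept only if it lies on the beneficial side of the hyperplane {n : n · log(a) ≥ 0}. The positivity condition, the reflection shortcut, and all names here are our assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def nem_noise(a_out, scale=0.01, rng=np.random.default_rng()):
    """Sample output-layer noise from above the separating hyperplane.

    a_out: softmax activations of the output neurons. The hyperplane
    {n : n . log(a) = 0} stands in for the likelihood-based positivity
    condition; noise above it should speed training on average, noise
    below it should slow training (per the abstract's claim).
    """
    log_a = np.log(a_out + 1e-12)
    n = rng.normal(0.0, scale, size=a_out.shape)
    if n @ log_a < 0.0:          # reflect noise from below to above the plane
        n = -n
    return n

# during training, the noisy target would be t + nem_noise(a_out), with the
# noise scale annealed toward zero over the iterations
```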
NASA Technical Reports Server (NTRS)
Platnick, S.
1999-01-01
Photon transport in a multiple scattering medium is critically dependent on scattering statistics, in particular the average number of scatterings. A superposition technique is derived to accurately determine the average number of scatterings encountered by reflected and transmitted photons within arbitrary layers in plane-parallel, vertically inhomogeneous clouds. As expected, the resulting scattering number profiles are highly dependent on cloud particle absorption and solar/viewing geometry. The technique uses efficient adding and doubling radiative transfer procedures, avoiding traditional time-intensive Monte Carlo methods. Derived superposition formulae are applied to a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Cloud remote sensing techniques that use solar reflectance or transmittance measurements generally assume a homogeneous plane-parallel cloud structure. The scales over which this assumption is relevant, in both the vertical and horizontal, can be obtained from the superposition calculations. Though the emphasis is on photon transport in clouds, the derived technique is applicable to any scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers in the atmosphere.
NASA Astrophysics Data System (ADS)
Xiong, Jun; Liu, J. G.; Cao, Li
2015-12-01
This paper presents hardware-efficient designs for implementing the one-dimensional (1D) discrete Fourier transform (DFT). Once the DFT is formulated in cyclic convolution form, the improved first-order-moments-based cyclic convolution structure can be used as the basic computing unit for the DFT computation; it contains only a control module, a barrel shifter and (N-1)/2 accumulation units. After decomposing and reordering the twiddle factors, all that remains is to shift the input data sequence and accumulate the results under the control of the statistics of the twiddle factors. The whole calculation requires only shift operations and additions, with no need for multipliers or large memory. Compared with the previous first-order-moments-based structure for the DFT, the proposed designs have the advantages of lower hardware consumption, lower power consumption and the flexibility to achieve better performance in certain cases. A series of experiments has demonstrated the high performance of the proposed designs in terms of the area-time product and power consumption. Similar efficient designs can be obtained for other computations, such as the DCT/IDCT, DST/IDST, digital filtering and correlation, by transforming them into first-order-moments-based cyclic convolution form.
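To make the first-order-moments idea concrete (an illustrative software sketch, not the hardware design above), a cyclic convolution with an integer-valued kernel can be computed with only shifts and additions by grouping the inputs that share a coefficient value:

```python
import numpy as np

def moments_cyclic_convolution(x, h):
    """Cyclic convolution via first-order moments.

    The integer-valued kernel h is inverted into index sets, one per
    distinct coefficient value v; inputs sharing a coefficient are
    accumulated first (shift-and-add only), and the result is the
    first-order moment sum over v of v * S_v.
    """
    N = len(x)
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=int)
    y = np.zeros(N)
    for v in np.unique(h):
        if v == 0:
            continue
        S_v = np.zeros(N)
        for k in np.flatnonzero(h == v):
            S_v += np.roll(x, k)      # contributes x[(n - k) mod N]
        y += v * S_v                  # one scaling per distinct value
    return y

# sanity check against the FFT-based cyclic convolution:
# np.allclose(moments_cyclic_convolution(x, h),
#             np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real)
```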
Vehicle detection based on visual saliency and deep sparse convolution hierarchical model
NASA Astrophysics Data System (ADS)
Cai, Yingfeng; Wang, Hai; Chen, Xiaobo; Gao, Li; Chen, Long
2016-06-01
Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted-feature-based classifier training for vehicle candidate verification. Such methods generally suffer from long processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then fed into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, outperforming existing state-of-the-art algorithms. More importantly, the deep sparse convolution network generates highly discriminative multi-scale features, which have broad application prospects for target recognition in the field of intelligent vehicles.
Quantum superposition at the half-metre scale.
Kovachy, T; Asenbaum, P; Overstreet, C; Donnelly, C A; Dickerson, S M; Sugarbaker, A; Hogan, J M; Kasevich, M A
2015-12-24
The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger's cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov-Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity. PMID:26701053
Tissue heterogeneity in IMRT dose calculation for lung cancer.
Pasciuti, Katia; Iaccarino, Giuseppe; Strigari, Lidia; Malatesta, Tiziana; Benassi, Marcello; Di Nallo, Anna Maria; Mirri, Alessandra; Pinzi, Valentina; Landoni, Valeria
2011-01-01
The aim of this study was to evaluate the differences in accuracy of dose calculation between 3 commonly used algorithms, the Pencil Beam algorithm (PB), the Anisotropic Analytical Algorithm (AAA), and the Collapsed Cone Convolution Superposition (CCCS) for intensity-modulated radiation therapy (IMRT). The 2D dose distributions obtained with the 3 algorithms were compared on each CT slice pixel by pixel, using the MATLAB code (The MathWorks, Natick, MA) and the agreement was assessed with the γ function. The effect of the differences on dose-volume histograms (DVHs), tumor control, and normal tissue complication probability (TCP and NTCP) were also evaluated, and its significance was quantified by using a nonparametric test. In general PB generates regions of over-dosage both in the lung and in the tumor area. These differences are not always in DVH of the lung, although the Wilcoxon test indicated significant differences in 2 of 4 patients. Disagreement in the lung region was also found when the Γ analysis was performed. The effect on TCP is less important than for NTCP because of the slope of the curve at the level of the dose of interest. The effect of dose calculation inaccuracy is patient-dependent and strongly related to beam geometry and to the localization of the tumor. When multiple intensity-modulated beams are used, the effect of the presence of the heterogeneity on dose distribution may not always be easily predictable. PMID:20970989
A Geometric Construction of Cyclic Cocycles on Twisted Convolution Algebras
NASA Astrophysics Data System (ADS)
Angel, Eitan
2010-09-01
In this thesis we give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. In his seminal book, Connes constructs a map from the equivariant cohomology of a manifold carrying the action of a discrete group into the periodic cyclic cohomology of the associated convolution algebra. Furthermore, for proper étale groupoids, J.-L. Tu and P. Xu provide a map between the periodic cyclic cohomology of a gerbe twisted convolution algebra and twisted cohomology groups. Our focus will be the convolution algebra with a product defined by a gerbe over a discrete translation groupoid. When the action is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial notions related to ideas of J. Dupont to construct a simplicial form representing the Dixmier-Douady class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial Dixmier-Douady form to the mixed bicomplex of certain matrix algebras. Finally, we define a morphism from this complex to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.
Observing a coherent superposition of an atom and a molecule
Dowling, Mark R.; Bartlett, Stephen D.; Rudolph, Terry; Spekkens, Robert W.
2006-11-15
We demonstrate that it is possible, in principle, to perform a Ramsey-type interference experiment to exhibit a coherent superposition of a single atom and a diatomic molecule. This gedanken experiment, based on the techniques of Aharonov and Susskind [Phys. Rev. 155, 1428 (1967)], explicitly violates the commonly accepted superselection rule that forbids coherent superpositions of eigenstates of differing atom number. A Bose-Einstein condensate plays the role of a reference frame that allows for coherent operations analogous to Ramsey pulses. We also investigate an analogous gedanken experiment to exhibit a coherent superposition of a single boson and a fermion, violating the commonly accepted superselection rule forbidding coherent superpositions of states of differing particle statistics. In this case, the reference frame is realized by a multimode state of many fermions. This latter case reproduces all of the relevant features of Ramsey interferometry, including Ramsey fringes over many repetitions of the experiment. However, the apparent inability of this proposed experiment to produce well-defined relative phases between two distinct systems each described by a coherent superposition of a boson and a fermion demonstrates that there are additional, outstanding requirements to fully 'lift' the univalence superselection rule.
NASA Astrophysics Data System (ADS)
Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.
2016-02-01
In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm⁻¹) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.
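For context (an assumption about how per-beamlet quantities are commonly composed into voxel values, not necessarily the FoCa implementation), LET is often reported as the dose-averaged LET over the contributing pencil beams:

```python
import numpy as np

def dose_averaged_let(dose_per_beamlet, let_per_beamlet):
    """Dose-averaged LET on a voxel grid from per-beamlet dose and LET
    maps, both of shape (n_beamlets, *grid): LET_d = sum(d * L) / sum(d),
    with zero assigned to voxels receiving no dose."""
    d = np.asarray(dose_per_beamlet, float)
    L = np.asarray(let_per_beamlet, float)
    total = d.sum(axis=0)
    with np.errstate(invalid='ignore', divide='ignore'):
        let_d = (d * L).sum(axis=0) / total
    return np.where(total > 0, let_d, 0.0)
```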
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
ASIC-based architecture for the real-time computation of 2D convolution with large kernel size
NASA Astrophysics Data System (ADS)
Shao, Rui; Zhong, Sheng; Yan, Luxin
2015-12-01
Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2D convolution with medium-large kernels. To improve the efficiency of on-chip storage resources and to reduce the off-chip bandwidth requirement, a data-cache reuse scheme is proposed: multi-block SPRAM banks cache image stripes, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation; around these a new ASIC data scheduling scheme and overall architecture are designed. Experimental results show that the structure achieves real-time convolution with kernels up to 40 × 32, improves the utilization of on-chip memory bandwidth and memory resources, maximizes output data throughput, and reduces the need for off-chip memory bandwidth.
NASA Astrophysics Data System (ADS)
Mercan, Kadir; Demir, Çiğdem; Civalek, Ömer
2016-01-01
In the present manuscript, the free vibration response of circular cylindrical shells made of functionally graded material (FGM) is investigated. The method of discrete singular convolution (DSC) is used for the numerical solution of the governing equation of motion of the FGM cylindrical shell. The constitutive relations are based on Love's first-approximation shell theory. The material properties are graded in the thickness direction according to a volume-fraction power law with various indexes. Frequency values are calculated for different types of boundary conditions and for various material and geometric parameters. In general, close agreement between the obtained results and those of other researchers has been found.
Output-sensitive 3D line integral convolution.
Falk, Martin; Weiskopf, Daniel
2008-01-01
We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance
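As a minimal, non-optimized illustration of the underlying LIC idea (steady 2D flow, nearest-neighbour sampling, box kernel; none of the paper's view-dependent acceleration or illumination machinery):

```python
import numpy as np

def lic(vx, vy, noise, length=20, h=0.5):
    """Minimal 2D line integral convolution: convolve an input noise
    texture along streamlines of the field (vx, vy) with a box kernel.
    The seed pixel is sampled in both passes; fine for a sketch."""
    ny, nx = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for j in range(ny):
        for i in range(nx):
            acc, cnt = 0.0, 0
            for sign in (+1.0, -1.0):        # integrate both directions
                x, y = float(i), float(j)
                for _ in range(length):
                    ii, jj = int(round(x)), int(round(y))
                    if not (0 <= ii < nx and 0 <= jj < ny):
                        break
                    acc += noise[jj, ii]
                    cnt += 1
                    u, v = vx[jj, ii], vy[jj, ii]
                    norm = np.hypot(u, v) or 1.0
                    x += sign * h * u / norm
                    y += sign * h * v / norm
            out[j, i] = acc / max(cnt, 1)
    return out
```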
relline: Relativistic line profiles calculation
NASA Astrophysics Data System (ADS)
Dauser, Thomas
2015-05-01
relline calculates relativistic line profiles; it is compatible with the common X-ray data analysis software XSPEC (ascl:9910.005) and ISIS (ascl:1302.002). The two basic forms are an additive line model (RELLINE) and a convolution model to calculate relativistic smearing (RELCONV).
Non-coaxial superposition of vector vortex beams.
Aadhi, A; Vaity, Pravin; Chithrabhanu, P; Reddy, Salla Gangi; Prabakar, Shashi; Singh, R P
2016-02-10
Vector vortex beams are classified into four types depending upon spatial variation in their polarization vector. We have generated all four of these types of vector vortex beams by using a modified polarization Sagnac interferometer with a vortex lens. Further, we have studied the non-coaxial superposition of two vector vortex beams. It is observed that the superposition of two vector vortex beams with same polarization singularity leads to a beam with another kind of polarization singularity in their interaction region. The results may be of importance in ultrahigh security of the polarization-encrypted data that utilizes vector vortex beams and multiple optical trapping with non-coaxial superposition of vector vortex beams. We verified our experimental results with theory. PMID:26906384
Dissipative Optomechanical Preparation of Macroscopic Quantum Superposition States.
Abdi, M; Degenfeld-Schonburg, P; Sameti, M; Navarrete-Benlloch, C; Hartmann, M J
2016-06-10
The transition from quantum to classical physics remains an intensely debated question even though it has been investigated for more than a century. Further clarifications could be obtained by preparing macroscopic objects in spatial quantum superpositions and proposals for generating such states for nanomechanical devices either in a transient or a probabilistic fashion have been put forward. Here, we introduce a method to deterministically obtain spatial superpositions of arbitrary lifetime via dissipative state preparation. In our approach, we engineer a double-well potential for the motion of the mechanical element and drive it towards the ground state, which shows the desired spatial superposition, via optomechanical sideband cooling. We propose a specific implementation based on a superconducting circuit coupled to the mechanical motion of a lithium-decorated monolayer graphene sheet, introduce a method to verify the mechanical state by coupling it to a superconducting qubit, and discuss its prospects for testing collapse models for the quantum to classical transition. PMID:27341233
Robust mesoscopic superposition of strongly correlated ultracold atoms
Hallwood, David W.; Ernst, Thomas; Brand, Joachim
2010-12-15
We propose a scheme to create coherent superpositions of annular flow of strongly interacting bosonic atoms in a one-dimensional ring trap. The nonrotating ground state is coupled to a vortex state with mesoscopic angular momentum by means of a narrow potential barrier and an applied phase that originates from either rotation or a synthetic magnetic field. We show that superposition states in the Tonks-Girardeau regime are robust against single-particle loss due to the effects of strong correlations. The coupling between the mesoscopically distinct states scales much more favorably with particle number than in schemes relying on weak interactions, thus making particle numbers of hundreds or thousands feasible. Coherent oscillations induced by time variation of parameters may serve as a 'smoking gun' signature for detecting superposition states.
Dissipative Optomechanical Preparation of Macroscopic Quantum Superposition States
NASA Astrophysics Data System (ADS)
Abdi, M.; Degenfeld-Schonburg, P.; Sameti, M.; Navarrete-Benlloch, C.; Hartmann, M. J.
2016-06-01
The transition from quantum to classical physics remains an intensely debated question even though it has been investigated for more than a century. Further clarifications could be obtained by preparing macroscopic objects in spatial quantum superpositions and proposals for generating such states for nanomechanical devices either in a transient or a probabilistic fashion have been put forward. Here, we introduce a method to deterministically obtain spatial superpositions of arbitrary lifetime via dissipative state preparation. In our approach, we engineer a double-well potential for the motion of the mechanical element and drive it towards the ground state, which shows the desired spatial superposition, via optomechanical sideband cooling. We propose a specific implementation based on a superconducting circuit coupled to the mechanical motion of a lithium-decorated monolayer graphene sheet, introduce a method to verify the mechanical state by coupling it to a superconducting qubit, and discuss its prospects for testing collapse models for the quantum to classical transition.
Robustly optimal rate one-half binary convolutional codes
NASA Technical Reports Server (NTRS)
Johannesson, R.
1975-01-01
Three optimality criteria for convolutional codes are considered in this correspondence: namely, free distance, minimum distance, and distance profile. Here we report the results of computer searches for rate one-half binary convolutional codes that are 'robustly optimal' in the sense of being optimal for one criterion and optimal or near-optimal for the other two criteria. Comparisons with previously known codes are made. The results of a computer simulation are reported to show the importance of the distance profile to computational performance with sequential decoding.
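For illustration (not the paper's search code), the free distance of a rate one-half convolutional code can be computed as a shortest-path problem on the encoder state graph; generator taps and constraint length are the usual textbook parameters:

```python
import heapq

def free_distance(generators, K):
    """Free distance of a rate-1/2 binary convolutional code by Dijkstra
    search: the minimum Hamming weight of a nonzero path that leaves the
    all-zero state and remerges with it.

    generators: two K-bit integer taps, e.g. (0o171, 0o133) with K = 7.
    """
    mask = (1 << (K - 1)) - 1

    def step(state, u):
        reg = ((state << 1) | u) & ((1 << K) - 1)
        w = sum(bin(reg & g).count('1') % 2 for g in generators)
        return reg & mask, w

    start, w0 = step(0, 1)            # forced departure with input bit 1
    dist = {start: w0}
    heap = [(w0, start)]
    while heap:
        d, s = heapq.heappop(heap)
        if s == 0:
            return d                  # first remerge with the zero path
        if d > dist.get(s, float('inf')):
            continue
        for u in (0, 1):
            t, w = step(s, u)
            if d + w < dist.get(t, float('inf')):
                dist[t] = d + w
                heapq.heappush(heap, (d + w, t))
    return None

# free_distance((0o171, 0o133), 7) is expected to give 10 for the standard
# K = 7 code; reversing the tap order gives the reciprocal code, which has
# the same free distance
```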
Seeing lens imaging as a superposition of multiple views
NASA Astrophysics Data System (ADS)
Grusche, Sascha
2016-01-01
In the conventional approach to lens imaging, rays are used to map object points to image points. However, many students want to think of the image as a whole. To answer this need, Kepler’s ray drawing is reinterpreted in terms of shifted camera obscura images. These images are uncovered by covering the lens with pinholes. Thus, lens imaging is seen as a superposition of sharp images from different viewpoints, so-called elemental images. This superposition is simulated with projectors, and with transparencies. Lens ray diagrams are constructed based on elemental images; the conventional construction method is included as a special case.
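A small numerical sketch of this superposition picture (illustrative 1D geometry with assumed names): a pinhole at aperture height p forms a camera-obscura image magnified by -d/u and shifted by p(1 - d/v); summing the shifted copies gives blur away from the image plane and a sharp image at d = v:

```python
import numpy as np

def superpose_elemental_images(obj, y, pinholes, u, f, d, screen_y):
    """Sum of 1D elemental (camera-obscura) images for pinholes sampling
    the lens aperture. obj: object intensity at lateral positions y, a
    distance u in front of a thin lens of focal length f; d: screen
    distance behind the lens; screen_y: screen coordinates."""
    v = 1.0 / (1.0 / f - 1.0 / u)               # thin-lens image distance
    img = np.zeros_like(screen_y, dtype=float)
    for p in pinholes:
        y_img = -y * d / u + p * (1.0 - d / v)  # magnified and shifted copy
        order = np.argsort(y_img)               # np.interp needs sorted xp
        img += np.interp(screen_y, y_img[order], obj[order],
                         left=0.0, right=0.0)
    return img / len(pinholes)

# at d == v every shift p * (1 - d / v) vanishes: the superposition is sharp
```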
Tight bounds on the concurrence of quantum superpositions
Niset, J.; Cerf, N. J.
2007-10-15
The entanglement content of superpositions of quantum states is investigated based on a measure called concurrence. Given a bipartite pure state in arbitrary dimension written as the quantum superposition of two other such states, we find simple inequalities relating the concurrence of the state to that of its components. We derive an exact expression for the concurrence when the component states are biorthogonal and provide elegant upper and lower bounds in all other cases. For quantum bits, our upper bound is tighter than the previously derived bound [N. Linden et al., Phys. Rev. Lett. 97, 100502 (2006)].
Orbital angular momentum of superposition of identical shifted vortex beams.
Kovalev, A A; Kotlyar, V V
2015-10-01
We have formulated and proven the following theorem: the superposition of an arbitrary number of arbitrarily off-axis, identical nonparaxial optical vortex beams of arbitrary radially symmetric shape, integer topological charge n, and arbitrary real weight coefficients has the normalized orbital angular momentum (OAM) equal to that of individual constituent identical beams. This theorem enables generating vortex laser beams with different (not necessarily radially symmetric) intensity profiles but identical OAM. Superpositions of Bessel, Hankel-Bessel, Bessel-Gaussian, and Laguerre-Gaussian beams with the same OAM are discussed. PMID:26479934
Entanglement and discord of the superposition of Greenberger-Horne-Zeilinger states
Parashar, Preeti; Rana, Swapan
2011-03-15
We calculate the analytic expression for geometric measure of entanglement for arbitrary superposition of two N-qubit canonical orthonormal Greenberger-Horne-Zeilinger (GHZ) states and the same for two W states. In the course of characterizing all kinds of nonclassical correlations, an explicit formula for quantum discord (via relative entropy) for the former class of states has been presented. Contrary to the GHZ state, the closest separable state to the W state is not classical. Therefore, in this case, the discord is different from the relative entropy of entanglement. We conjecture that the discord for the N-qubit W state is log₂N.
Die and telescoping punch form convolutions in thin diaphragm
NASA Technical Reports Server (NTRS)
1965-01-01
Die and punch set forms convolutions in thin dished metal diaphragm without stretching the metal too thin at sharp curvatures. The die corresponds to the metal shape to be formed, and the punch consists of elements that progressively slide against one another under the restraint of a compressed-air cushion to mate with the die.
Convolutional virtual electric field for image segmentation using active contours.
Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden
2014-01-01
Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computation load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as enlarged capture range, U-shape concavity convergence, subjective contour convergence and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression while simultaneously preserving weak edges. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images. PMID:25360586
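To make the convolution view concrete, here is a VFC/VEF-style force field computed with FFTs; this sketch uses a plain power-law kernel magnitude and does not reproduce the CONVEF paper's modified-distance kernel:

```python
import numpy as np
from numpy.fft import fft2, ifft2, ifftshift

def vector_field_convolution(edge_map, gamma=2.0, eps=1e-8):
    """External force for active contours as the convolution of the edge
    map with the vector kernel k(x, y) = -(x, y) / r**(gamma + 1),
    evaluated via FFT; returns normalized force components (fx, fy)."""
    ny, nx = edge_map.shape
    y, x = np.mgrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    r = np.hypot(x, y) + eps
    kx = -x / r ** (gamma + 1)
    ky = -y / r ** (gamma + 1)
    E = fft2(edge_map)
    fx = ifft2(E * fft2(ifftshift(kx))).real   # kernel recentred for FFT
    fy = ifft2(E * fft2(ifftshift(ky))).real
    mag = np.hypot(fx, fy) + eps
    return fx / mag, fy / mag
```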
Maximum-likelihood estimation of circle parameters via convolution.
Zelniker, Emanuel E; Clarkson, I Vaughan L
2006-04-01
The accurate fitting of a circle to noisy measurements of circumferential points is a much-studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. These estimates can then be fed as preliminary values into various other numerical techniques that refine them further to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" introduced by Atherton and Kerbyson, who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE, as well as to the Cramér-Rao Lower Bound, in ideal images and in both real and synthetic digital images. PMID:16579374
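For reference, the DKE reduces to a single linear least-squares solve; a minimal sketch in its standard textbook form, not the paper's convolution-based formulation:

```python
import numpy as np

def delogne_kasa(x, y):
    """Delogne-Kasa circle fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c in the
    least-squares sense; centre (a, b), radius sqrt(c + a^2 + b^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return a, b, np.sqrt(c + a ** 2 + b ** 2)
```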
Vysotsky, Yu B; Belyaeva, E A; Fomina, E S; Fainerman, V B; Aksenenko, E V; Vollhardt, D; Miller, R
2011-12-21
The applicability of the superposition-additive approach to the calculation of the thermodynamic parameters of formation and atomization of conjugate systems, their dipole electric polarisabilities, molecular diamagnetic susceptibilities and π-electron circular currents, as well as to the estimation of the thermodynamic parameters of substituted alkanes, was demonstrated earlier. Here, the applicability of the superposition-additive approach to the description of the clusterization of fatty alcohols, thioalcohols, amines and carboxylic acids at the air/water interface is studied. Two superposition-additive schemes are used that ensure the maximum superimposition of the graphs of the considered molecular structures, including the intermolecular CH-HC interactions within the clusters. The thermodynamic parameters of clusterization are calculated for dimers, trimers and tetramers. The calculations are based on the values of enthalpy, entropy and Gibbs energy of clusterization calculated earlier using the semiempirical quantum chemical PM3 method. It is shown that the proposed approach reproduces the previously calculated values with sufficient accuracy. PMID:22042000
A high-order fast method for computing convolution integral with smooth kernel
Qiang, Ji
2009-09-28
In this paper we report on a high-order fast method to numerically calculate the convolution integral with a smooth, non-periodic kernel. The method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for the discrete summation. The method can in principle attain arbitrarily high-order accuracy, depending on the number of points used in the integral approximation, at a computational cost of O(N log N), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h⁴), where h is the size of the computational grid. Applications of the Simpson-rule-based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.
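A sketch of the Simpson-rule variant as described (illustrative; grid and naming conventions are assumptions): the quadrature weights are folded into the data, after which the weighted sum over j becomes a discrete convolution evaluated with zero-padded FFTs:

```python
import numpy as np

def simpson_fft_convolution(kernel, f, h):
    """y(x_i) ~= sum_j w_j K(x_i - t_j) f(t_j): Simpson weights w_j folded
    into the data, then one zero-padded FFT convolution in O(N log N).

    kernel: samples K(m * h) for m = -(N-1) .. N-1 (length 2N - 1);
    f: N samples of f on the same grid (N odd for the plain Simpson rule).
    """
    N = len(f)
    w = np.ones(N)
    w[1:-1:2], w[2:-1:2] = 4.0, 2.0
    g = (h / 3.0) * w * np.asarray(f, float)
    L = len(kernel) + N - 1               # full linear-convolution length
    M = 1 << (L - 1).bit_length()         # next power of two >= L
    conv = np.fft.ifft(np.fft.fft(kernel, M) * np.fft.fft(g, M)).real
    return conv[N - 1:2 * N - 1]          # y at the N grid points
```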
Evaluation of convolutional neural networks for visual recognition.
Nebauer, C
1998-01-01
Convolutional neural networks provide an efficient method to constrain the complexity of feedforward neural networks by weight sharing and restriction to local connections. This network topology has been applied in particular to image classification when sophisticated preprocessing is to be avoided and raw images are to be classified directly. In this paper two variations of convolutional networks--the neocognitron and a modification of the neocognitron--are compared with classifiers based on fully connected feedforward layers (i.e., multilayer perceptron, nearest neighbor classifier, auto-encoding network) with respect to their visual recognition performance. Besides the original neocognitron, a modification is proposed which combines perceptron-type neurons with the localized network structure of the neocognitron. Instead of training convolutional networks by time-consuming error backpropagation, in this work a modular procedure is applied whereby layers are trained sequentially from the input to the output layer in order to recognize features of increasing complexity. For a quantitative experimental comparison with standard classifiers two very different recognition tasks have been chosen: handwritten digit recognition and face recognition. In the first example, on handwritten digit recognition, the generalization of convolutional networks is compared to that of fully connected networks; in several experiments the influence of variations of position, size, and orientation of digits is determined and the relation between training sample size and validation error is observed. In the second example, recognition of human faces is investigated under constrained and variable conditions with respect to face orientation and illumination, and the limitations of convolutional networks are discussed. PMID:18252491
De-convoluting mixed crude oil in Prudhoe Bay Field, North Slope, Alaska
Peters, K.E.; Scott, Ramos L.; Zumberge, J.E.; Valin, Z.C.; Bird, K.J.
2008-01-01
Seventy-four crude oil samples from the Barrow arch on the North Slope of Alaska were studied to assess the relative volumetric contributions from different source rocks to the giant Prudhoe Bay Field. We applied alternating least squares to concentration data (ALS-C) for 46 biomarkers in the range C19-C35 to de-convolute mixtures of oil generated from carbonate rich Triassic Shublik Formation and clay rich Jurassic Kingak Shale and Cretaceous Hue Shale-gamma ray zone (Hue-GRZ) source rocks. ALS-C results for 23 oil samples from the prolific Ivishak Formation reservoir of the Prudhoe Bay Field indicate approximately equal contributions from Shublik Formation and Hue-GRZ source rocks (37% each), less from the Kingak Shale (26%), and little or no contribution from other source rocks. These results differ from published interpretations that most oil in the Prudhoe Bay Field originated from the Shublik Formation source rock. With few exceptions, the relative contribution of oil from the Shublik Formation decreases, while that from the Hue-GRZ increases in reservoirs along the Barrow arch from Point Barrow in the northwest to Point Thomson in the southeast (~250 miles or 400 km). The Shublik contribution also decreases to a lesser degree between fault blocks within the Ivishak pool from west to east across the Prudhoe Bay Field. ALS-C provides a robust means to calculate the relative amounts of two or more oil types in a mixture. Furthermore, ALS-C does not require that pure end member oils be identified prior to analysis or that laboratory mixtures of these oils be prepared to evaluate mixing. ALS-C of biomarkers reliably de-convolutes mixtures because the concentrations of compounds in mixtures vary as linear functions of the amount of each oil type. ALS of biomarker ratios (ALS-R) cannot be used to de-convolute mixtures because compound ratios vary as nonlinear functions of the amount of each oil type.
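As an illustration of alternating least squares applied to concentration data (a generic nonnegative ALS sketch, not the authors' ALS-C code), the mixing fractions and end-member profiles can be re-estimated in alternation:

```python
import numpy as np

def als_unmix(C, n_sources, n_iter=500, rng=np.random.default_rng(0)):
    """Alternating least squares de-convolution of mixed-oil biomarker
    concentrations: C (samples x compounds) ~ F @ E, with F the source
    fractions per sample and E the end-member concentration profiles.
    Nonnegativity is imposed by clipping; fractions are renormalized."""
    n, m = C.shape
    E = rng.random((n_sources, m)) * C.max()
    for _ in range(n_iter):
        # solve E.T @ F.T = C.T for the fractions, then clip and normalize
        F = np.linalg.lstsq(E.T, C.T, rcond=None)[0].T
        F = np.clip(F, 0.0, None)
        F /= F.sum(axis=1, keepdims=True) + 1e-12
        # solve F @ E = C for the end-member profiles
        E = np.clip(np.linalg.lstsq(F, C, rcond=None)[0], 0.0, None)
    return F, E
```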
Carrasco, P; Jornet, N; Duch, M A; Panettieri, V; Weber, L; Eudaldo, T; Ginjaume, M; Ribas, M
2007-08-01
To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10 x 10, 5 x 5, and 2 x 2 cm2) were studied in two phantom configurations and a bone equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2 x 2 cm2 field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity in 3%-5%. The rebuildup in the interface bone-water and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values
Dose-calculation algorithms in the context of inhomogeneity corrections for high energy photon beams
Papanikolaou, Niko; Stathakis, Sotirios
2009-10-15
Radiation therapy has witnessed a plethora of innovations and developments in the past 15 years. Since the introduction of computed tomography for treatment planning there has been a steady introduction of new methods to refine treatment delivery. Imaging continues to be an integral part of the planning, but also the delivery, of modern radiotherapy. However, all the efforts of image guided radiotherapy, intensity-modulated planning and delivery, adaptive radiotherapy, and everything else that we pride ourselves in having in the armamentarium can fall short, unless there is an accurate dose-calculation algorithm. The agreement between the calculated and delivered doses is of great significance in radiation therapy since the accuracy of the absorbed dose as prescribed determines the clinical outcome. Dose-calculation algorithms have evolved greatly over the years in an effort to be more inclusive of the effects that govern the true radiation transport through the human body. In this Vision 20/20 paper, we look back to see how it all started and where things are now in terms of dose algorithms for photon beams and the inclusion of tissue heterogeneities. Convolution-superposition algorithms have dominated the treatment planning industry for the past few years. Monte Carlo techniques have an inherent accuracy that is superior to any other algorithm and as such will continue to be the gold standard, along with measurements, and maybe one day will be the algorithm of choice for all particle treatment planning in radiation therapy.
NASA Astrophysics Data System (ADS)
Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.
2016-03-01
Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC), and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity-modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP, and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.
Elsayed, M.; Fathalah, K.A.
1996-05-01
In a previous work, the separation of variables/superposition technique was used to predict the flux density distribution on the receiver surfaces of solar central receiver plants. In this paper, further developments of the technique are given. A numerical technique is derived to carry out the convolution of the sunshape and error density functions. Also, a simplified numerical procedure is presented to determine the basic flux density function on which the technique depends. The technique is used to predict the receiver solar flux distribution for two sunshapes, polynomial and Gaussian distributions. The results predicted with the technique are validated by comparison with experimental results from mirrors both with and without partial shading/blocking of their surfaces.
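As a rough illustration of the convolution step described above, the sketch below (Python with numpy) convolves a toy Gaussian sunshape with a Gaussian error density on a common angular grid; the solar half-angle and error width used here are illustrative values, not those of the paper.

```python
import numpy as np

# Minimal sketch: numerically convolve a sunshape profile with a
# Gaussian error density to obtain an effective angular spread.
# The half-angle and slope-error sigma below are illustrative only.

theta = np.linspace(-20e-3, 20e-3, 2001)           # angle grid [rad]
dtheta = theta[1] - theta[0]

def gaussian(x, sigma):
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

sunshape = gaussian(theta, 4.65e-3 / 2)            # toy Gaussian sunshape
error_pdf = gaussian(theta, 2.0e-3)                # mirror/tracking errors

# Discrete convolution approximates the integral
#   f_eff(t) = integral of sunshape(s) * error_pdf(t - s) ds
effective = np.convolve(sunshape, error_pdf, mode="same") * dtheta

print("normalisation:", np.trapz(effective, theta))  # ~1.0
```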
Multidimensional detonation propagation modeled via nonlinear shock wave superposition
NASA Astrophysics Data System (ADS)
Higgins, Andrew; Mehrjoo, Navid
2010-11-01
Detonation waves in gases are inherently multidimensional due to their cellular structure, and detonations in liquids and heterogeneous solids are often associated with instabilities and stochastic, localized reaction centers (i.e., hot spots). To explore the statistical nature of detonation dynamics in such systems, a simple model that idealizes detonation propagation as an ensemble of interacting blast waves originating from spatially random point sources has been proposed. Prior results using this model exhibited features that have been observed in real detonating systems, such as anomalous scaling between axisymmetric and two-dimensional geometries. However, those efforts used simple linear superposition of the blast waves. The present work uses a model of blast wave superposition developed for multiple-source explosions (the LAMB approximation) that incorporates the nonlinear interaction of shock waves analytically, permitting the effect of a more physical model of blast wave interaction to be explored. The results are suggestive of a universal behavior in systems of spatially randomized energy sources.
Nonclassicality tests and entanglement witnesses for macroscopic mechanical superposition states
NASA Astrophysics Data System (ADS)
Gittsovich, Oleg; Moroder, Tobias; Asadian, Ali; Gühne, Otfried; Rabl, Peter
2015-02-01
We describe a set of measurement protocols for performing nonclassicality tests and the verification of entangled superposition states of macroscopic continuous variable systems, such as nanomechanical resonators. Following earlier works, we first consider a setup where a two-level system is used to indirectly probe the motion of the mechanical system via Ramsey measurements and discuss the application of this method for detecting nonclassical mechanical states. We then show that the generalization of this technique to multiple resonator modes allows the conditioned preparation and the detection of entangled mechanical superposition states. The proposed measurement protocols can be implemented in various qubit-resonator systems that are currently under experimental investigation and find applications in future tests of quantum mechanics at a macroscopic scale.
Quantum Delayed-Choice Experiment and Wave-Particle Superposition
NASA Astrophysics Data System (ADS)
Guo, Qi; Cheng, Liu-Yong; Wang, Hong-Fu; Zhang, Shou
2015-08-01
We propose a simple implementation scheme of a quantum delayed-choice experiment in a linear optical system without an initial entanglement resource. By choosing different detecting devices, one can selectively observe the photon's different behaviors after the photon has passed through the Mach-Zehnder interferometer. The scheme shows that the photon's wave behavior and particle behavior can be observed with a single experimental setup by postselection; that is, the photon can show a superposition of wave and particle behavior. In particular, we compare the wave-particle superposition behavior and the wave-particle mixture behavior in detail, and find a quantum interference effect between the wave and particle behaviors, which may be helpful in revealing the essential nature of the photon.
NASA Astrophysics Data System (ADS)
Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca
2007-03-01
A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm-3) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm-3), normal lung (ρ = 0.20 g cm-3) and cortical bone tissue (ρ = 1.80 g cm-3). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm2) and elongated rectangular (2.8 × 13 cm2) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, a distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm-3), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.
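For readers unfamiliar with the γ-index analysis used in the study above, the following minimal one-dimensional sketch (Python with numpy) shows how the distance-to-agreement and dose-difference criteria combine. The `gamma_index` helper and the toy depth-dose curves are ours, and the 3% dose criterion is an arbitrary pick from the paper's 2%-10% range.

```python
import numpy as np

# Minimal 1-D gamma-index sketch (global normalisation), assuming the
# evaluated and reference doses are sampled on the same spatial grid.

def gamma_index(x_mm, d_ref, d_eval, dta_mm=3.0, dd_percent=3.0):
    dd = dd_percent / 100.0 * d_ref.max()          # global dose criterion
    # Gamma at each reference point: minimum over all evaluated points of
    # sqrt((dx/DTA)^2 + (dD/DD)^2).
    dx = (x_mm[None, :] - x_mm[:, None]) / dta_mm  # spatial term
    dD = (d_eval[None, :] - d_ref[:, None]) / dd   # dose term
    return np.sqrt(dx ** 2 + dD ** 2).min(axis=1)

# Toy depth-dose curves: evaluated curve shifted by 1 mm and scaled by 2%.
x = np.arange(0.0, 100.0, 0.5)                     # positions in mm
ref = np.exp(-x / 60.0)
ev = 1.02 * np.exp(-(x - 1.0) / 60.0)

g = gamma_index(x, ref, ev)
print("pass rate (gamma <= 1):", (g <= 1).mean())
```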
UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.
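A minimal sketch of the underlying static LIC step (not the UFLIC time-accurate scheme itself) may help: each output pixel averages input noise sampled along a streamline traced through the vector field. The fixed-step Euler integrator, box kernel, and circular test field below are simplifications of what a production implementation would use.

```python
import numpy as np

# Minimal static-LIC sketch for a steady 2-D vector field on a regular
# grid, convolving a white-noise texture along streamlines.

def lic(vx, vy, noise, length=20, step=0.5):
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            acc, n = 0.0, 0
            for sign in (+1.0, -1.0):              # trace both directions
                x, y = float(j), float(i)
                for _ in range(length):
                    ii, jj = int(round(y)), int(round(x))
                    if not (0 <= ii < h and 0 <= jj < w):
                        break
                    acc += noise[ii, jj]; n += 1
                    u, v = vx[ii, jj], vy[ii, jj]
                    norm = np.hypot(u, v) or 1.0
                    x += sign * step * u / norm    # unit-speed advection
                    y += sign * step * v / norm
            out[i, j] = acc / max(n, 1)            # box-kernel average
    return out

rng = np.random.default_rng(0)
noise = rng.random((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
vx, vy = -(yy - 32.0), (xx - 32.0)                 # circular flow
img = lic(vx, vy, noise)
print(img.shape, float(img.min()), float(img.max()))
```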
Deep learning for steganalysis via convolutional neural networks
NASA Astrophysics Data System (ADS)
Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu
2015-03-01
Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis that learns features automatically via deep learning models. We propose a customized Convolutional Neural Network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and on the realistic and large ImageNet database.
Two-dimensional convolute integers for analytical instrumentation
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1982-01-01
As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical with their one-dimensional counterpart, that is, a weighted nearest-neighbor, moving average with zero phase shifting, convoluted integer (universal number) weighting coefficients.
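The idea extends to two dimensions as follows: least-squares fitting of a bivariate polynomial over a square moving window yields fixed convolution weights that can be precomputed once. The sketch below (Python with numpy/scipy) is one way to derive such weights; it follows the general Savitzky-Golay construction rather than the specific coefficient tables of the paper, and the window half-width and polynomial degree are arbitrary choices.

```python
import numpy as np
from scipy.signal import convolve2d

# Sketch of 2-D Savitzky-Golay smoothing weights: fit a bivariate
# polynomial over a (2m+1)x(2m+1) window by least squares; the fitted
# centre value is a fixed linear combination of the window, i.e. a
# convolution with precomputable weights.

def sg2d_weights(m=2, degree=2):
    y, x = np.mgrid[-m:m + 1, -m:m + 1]
    cols = [x.ravel() ** i * y.ravel() ** j
            for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)                     # design matrix
    # Weights reproduce the constant term of the LS fit at the centre.
    w = A @ np.linalg.solve(A.T @ A, np.eye(A.shape[1])[:, 0])
    return w.reshape(2 * m + 1, 2 * m + 1)

rng = np.random.default_rng(1)
z = np.sin(np.linspace(0, 3, 50))[:, None] * np.cos(np.linspace(0, 3, 50))
noisy = z + 0.1 * rng.standard_normal(z.shape)
smooth = convolve2d(noisy, sg2d_weights(), mode="same", boundary="symm")
print("rms error:", float(np.sqrt(np.mean((smooth - z) ** 2))))
```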
A Convolutional Network for Semantic Facade Segmentation and Interpretation
NASA Astrophysics Data System (ADS)
Schmitz, Matthias; Mayer, Helmut
2016-06-01
In this paper we present an approach for semantic interpretation of facade images based on a Convolutional Network. Our network processes the input images in a fully convolutional way and generates pixel-wise predictions. We show that there is no need for large datasets to train the network when transfer learning is employed, i.e., a part of an already existing network is used and fine-tuned, and when the available data is augmented by using deformed patches of the images for training. The network is trained end-to-end with patches of the images and each patch is augmented independently. To undo the downsampling for the classification, we add deconvolutional layers to the network. Outputs of different layers of the network are combined to achieve more precise pixel-wise predictions. We demonstrate the potential of our network based on results for the eTRIMS (Korč and Förstner, 2009) dataset reduced to facades.
Study on Expansion of Convolutional Compactors over Galois Field
NASA Astrophysics Data System (ADS)
Arai, Masayuki; Fukumoto, Satoshi; Iwasaki, Kazuhiko
Convolutional compactors offer a promising technique of compacting test responses. In this study we expand the architecture of convolutional compactor onto a Galois field in order to improve compaction ratio as well as reduce X-masking probability, namely, the probability that an error is masked by unknown values. While each scan chain is independently connected by EOR gates in the conventional arrangement, the proposed scheme treats q signals as an element over GF(2q), and the connections are configured on the same field. We show the arrangement of the proposed compactors and the equivalent expression over GF(2). We then evaluate the effectiveness of the proposed expansion in terms of X-masking probability by simulations with uniform distribution of X-values, as well as reduction of hardware overheads. Furthermore, we evaluate a multi-weight arrangement of the proposed compactors for non-uniform X distributions.
Image Super-Resolution Using Deep Convolutional Networks.
Dong, Chao; Loy, Chen Change; He, Kaiming; Tang, Xiaoou
2016-02-01
We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. PMID:26761735
Tailoring quantum superpositions with linearly polarized amplitude-modulated light
Pustelny, S.; Koczwara, M.; Cincio, L.; Gawlik, W.
2011-04-15
Amplitude-modulated nonlinear magneto-optical rotation is a powerful technique that offers a possibility of controllable generation of given quantum states. In this paper, we demonstrate creation and detection of specific ground-state magnetic-sublevel superpositions in 87Rb. By appropriate tuning of the modulation frequency and the magnetic-field induction, the efficiency of a given coherence generation is controlled. The processes are analyzed versus different experimental parameters.
Quantum Superposition, Collapse, and the Default Specification Principle
NASA Astrophysics Data System (ADS)
Nikkhah Shirazi, Armin
2014-03-01
Quantum Superposition and collapse lie at the heart of the difficulty in understanding what quantum mechanics is exactly telling us about reality. We present here a principle which permits one to formulate a simple and general mathematical model that abstracts these features out of quantum theory. A precise formulation of this principle in terms of a set-theoretic axiom added to standard set theory may directly connect the foundations of physics to the foundations of mathematics.
Macroscopic superposition of ultracold atoms with orbital degrees of freedom
Garcia-March, M. A.; Carr, L. D.; Dounas-Frazer, D. R.
2011-04-15
We introduce higher dimensions into the problem of Bose-Einstein condensates in a double-well potential, taking into account orbital angular momentum. We completely characterize the eigenstates of this system, delineating new regimes via both analytical high-order perturbation theory and numerical exact diagonalization. Among these regimes are mixed Josephson- and Fock-like behavior, crossings in both excited and ground states, and shadows of macroscopic superposition states.
Sensing Super-position: Visual Instrument Sensor Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2006-01-01
The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power form, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases image resolution perception via an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an
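A toy sketch of the row-to-frequency, column-to-time mapping described above (Python with numpy); the `image_to_audio` helper name and all parameters (frequency range, scan rate, sample rate) are ours and purely illustrative:

```python
import numpy as np

# Toy image-to-sound mapping: image rows are assigned audio frequencies,
# columns are scanned left to right in time, and pixel brightness sets the
# amplitude of each sinusoidal component.

def image_to_audio(img, fs=8000, col_dur=0.02, f_lo=200.0, f_hi=4000.0):
    h, w = img.shape
    freqs = np.geomspace(f_hi, f_lo, h)            # top rows -> high pitch
    n = int(fs * col_dur)                          # samples per column
    t = np.arange(n) / fs
    audio = []
    for c in range(w):                             # scan columns in time
        tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
        audio.append(img[:, c] @ tones)            # brightness-weighted sum
    out = np.concatenate(audio)
    return out / (np.abs(out).max() + 1e-12)       # normalise to [-1, 1]

img = np.zeros((32, 16)); img[8, :] = 1.0          # a bright horizontal line
signal = image_to_audio(img)
print(signal.shape)                                # (16 * 160,) samples
```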
Interplay of gravitation and linear superposition of different mass eigenstates
NASA Astrophysics Data System (ADS)
Ahluwalia, D. V.; Burgard, C.
1998-04-01
The interplay of gravitation and the quantum-mechanical principle of linear superposition induces a new set of neutrino oscillation phases. These ensure that the flavor-oscillation clocks, inherent in the phenomenon of neutrino oscillations, redshift precisely as required by Einstein's theory of gravitation. The physical observability of these phases in the context of the solar neutrino anomaly, type-II supernova, and certain atomic systems is briefly discussed.
Face Detection Using GPU-Based Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Nasse, Fabian; Thurau, Christian; Fink, Gernot A.
In this paper, we consider the problem of face detection under pose variations. Unlike other contributions, a focus of this work resides within efficient implementation utilizing the computational powers of modern graphics cards. The proposed system consists of a parallelized implementation of convolutional neural networks (CNNs) with a special emphasis on also parallelizing the detection process. Experimental validation in a smart conference room with 4 active ceiling-mounted cameras shows a dramatic speed gain under real-life conditions.
New syndrome decoder for (n, 1) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like, algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
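To make the syndrome relation concrete, here is a toy Python illustration for a (2,1) CC with illustrative generators (not necessarily those used in the letter). Since v1*g2 + v2*g1 = 0 over GF(2) for any codeword, the syndrome s = r1*g2 + r2*g1 depends only on the channel errors; the sketch finds the minimum-weight error pair by brute force over a short block rather than by the recursive Viterbi-like algorithm.

```python
from itertools import product

# Binary (2,1) convolutional code with generators g1 = 1 + D + D^2 and
# g2 = 1 + D^2; polynomials are bit masks (bit k = coefficient of D^k).
G1, G2 = 0b111, 0b101

def pmul(a, b):                       # carry-less multiply = GF(2)[D] product
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def syndrome(r1, r2):                 # s = r1*g2 + r2*g1 = e1*g2 + e2*g1
    return pmul(r1, G2) ^ pmul(r2, G1)

def decode(r1, r2, nbits):
    target = syndrome(r1, r2)
    best = None
    for e1, e2 in product(range(1 << nbits), repeat=2):  # all error pairs
        if syndrome(e1, e2) == target:
            w = bin(e1).count("1") + bin(e2).count("1")
            if best is None or w < best[0]:
                best = (w, e1, e2)
    return best

u = 0b1011                            # information polynomial
v1, v2 = pmul(u, G1), pmul(u, G2)     # encoded streams
r1, r2 = v1 ^ 0b00100, v2             # single channel error on stream 1
w, e1, e2 = decode(r1, r2, 7)
print(w, bin(e1), bin(e2))            # -> 1 0b100 0b0: the error is found
```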
Single-Atom Gating of Quantum State Superpositions
Moon, Christopher
2010-04-28
The ultimate miniaturization of electronic devices will likely require local and coherent control of single electronic wavefunctions. Wavefunctions exist within both physical real space and an abstract state space with a simple geometric interpretation: this state space - or Hilbert space - is spanned by mutually orthogonal state vectors corresponding to the quantized degrees of freedom of the real-space system. Measurement of superpositions is akin to accessing the direction of a vector in Hilbert space, determining an angle of rotation equivalent to quantum phase. Here we show that an individual atom inside a designed quantum corral can control this angle, producing arbitrary coherent superpositions of spatial quantum states. Using scanning tunnelling microscopy and nanostructures assembled atom-by-atom we demonstrate how single spins and quantum mirages can be harnessed to image the superposition of two electronic states. We also present a straightforward method to determine the atom path enacting phase rotations between any desired state vectors. A single atom thus becomes a real-space handle for an abstract Hilbert space, providing a simple technique for coherent quantum state manipulation at the spatial limit of condensed matter.
Fine-grained representation learning in convolutional autoencoders
NASA Astrophysics Data System (ADS)
Luo, Chang; Wang, Jie
2016-03-01
Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariances, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we propose the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law could guide CAEs to extract better fine-grained features and performs better in multiclass classification task. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representation in other convolutional neural networks.
Automatic localization of vertebrae based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Shen, Wei; Yang, Feng; Mu, Wei; Yang, Caiyun; Yang, Xin; Tian, Jie
2015-03-01
Localization of the vertebrae is of importance in many medical applications. For example, the vertebrae can serve as the landmarks in image registration. They can also provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebrae localization method using convolutional neural networks (CNN). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture. One is used to distinguish the vertebrae from other tissues in the chest, and the other is aimed at detecting the centers of the vertebrae. The architecture contains two convolutional layers, both of which are followed by a max-pooling layer. Then the output feature vector from the max-pooling layer is fed into a multilayer perceptron (MLP) classifier which has one hidden layer. Experiments were performed on ten chest CT images. We used a leave-one-out strategy to train and test the proposed method. Quantitative comparison between the predicted centers and the ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.
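A minimal PyTorch sketch of the architecture as described (two convolutional layers, each followed by max pooling, feeding an MLP with one hidden layer). The patch size, channel counts, and kernel sizes are our assumptions, since the abstract does not state these hyperparameters.

```python
import torch
import torch.nn as nn

# Sketch of the described CNN: conv -> pool -> conv -> pool -> one-hidden-
# layer MLP. Binary output (e.g. vertebra vs. other tissue) is assumed.

class VertebraNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.mlp = nn.Sequential(                  # MLP with one hidden layer
            nn.Flatten(),
            nn.Linear(32 * 5 * 5, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):                          # x: (batch, 1, 32, 32)
        return self.mlp(self.features(x))

net = VertebraNet()
logits = net(torch.randn(4, 1, 32, 32))
print(logits.shape)                                # torch.Size([4, 2])
```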
A Discriminative Representation of Convolutional Features for Indoor Scene Recognition
NASA Astrophysics Data System (ADS)
Khan, Salman H.; Hayat, Munawar; Bennamoun, Mohammed; Togneri, Roberto; Sohel, Ferdous A.
2016-07-01
Indoor scene recognition is a multi-faceted and challenging problem due to the diverse intra-class variations and the confusing inter-class similarities. This paper presents a novel approach which exploits rich mid-level convolutional features to categorize indoor scenes. Traditionally used convolutional features preserve the global spatial structure, which is a desirable property for general object recognition. However, we argue that this structuredness is not much helpful when we have large variations in scene layouts, e.g., in indoor scenes. We propose to transform the structured convolutional activations to another highly discriminative feature space. The representation in the transformed space not only incorporates the discriminative aspects of the target dataset, but it also encodes the features in terms of the general object categories that are present in indoor scenes. To this end, we introduce a new large-scale dataset of 1300 object categories which are commonly present in indoor scenes. Our proposed approach achieves a significant performance boost over previous state-of-the-art approaches on five major scene classification datasets.
On the growth and form of cortical convolutions
NASA Astrophysics Data System (ADS)
Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.
2016-06-01
The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. All together, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.
Entanglement of electronic subbands and coherent superposition of spin states in a Rashba nanoloop
NASA Astrophysics Data System (ADS)
Safaiee, R.; Golshan, M. M.
2011-10-01
The present work is concerned with an analysis of the entanglement between the electronic coherent superpositions of spin states and subbands in a quasi-one-dimensional Rashba nanoloop acted upon by a strong perpendicular magnetic field. We explicitly include the confining potential and the Rashba spin-orbit coupling into the Hamiltonian and then proceed to calculate the von Neumann entropy, a measure of entanglement, as a function of time. An analysis of the von Neumann entropy demonstrates that, as expected, the dynamics of entanglement strongly depends upon the initial state and electronic subband excitations. When the initial state is a pure one formed by a subband excitation and the z-component of spin states, the entanglement exhibits periodic oscillations with local minima (dips). On the other hand, when the initial state is formed by the subband states and a coherent superposition of spin states, the entanglement still periodically oscillates, exhibiting stronger correlations, along with elimination of the dips. Moreover, in the long run, the entanglement for the latter case undergoes the phenomenon of collapse-revivals. This behaviour is absent for the first case of the initial states. We also show that the degree of entanglement strongly depends upon the electronic subband excitations in both cases.
NASA Astrophysics Data System (ADS)
An, Nguyen Ba
2009-04-01
Three novel probabilistic yet conclusive schemes are proposed to teleport a general two-mode coherent-state superposition via attenuated quantum channels with ideal and/or threshold detectors. The calculated total success probability is highest (lowest) when only ideal (threshold) detectors are used.
NASA Astrophysics Data System (ADS)
Xu, Zhigang
2015-12-01
In this study, a new method of storm surge modeling is proposed. This method is orders of magnitude faster than the traditional method within the linear dynamics framework. The tremendous enhancement of the computational efficiency results from the use of a pre-calculated all-source Green's function (ASGF), which connects a point of interest (POI) to the rest of the world ocean. Once the ASGF has been pre-calculated, it can be repeatedly used to quickly produce a time series of a storm surge at the POI. Using the ASGF, storm surge modeling can be simplified as its convolution with an atmospheric forcing field. If the ASGF is prepared with the global ocean as the model domain, the output of the convolution is free of the effects of artificial open-water boundary conditions. Being the first part of this study, this paper presents mathematical derivations from the linearized and depth-averaged shallow-water equations to the ASGF convolution, establishes various auxiliary concepts that will be useful throughout the study, and interprets the meaning of the ASGF from different perspectives. This paves the way for the ASGF convolution to be further developed as a data-assimilative regression model in part II. Five Appendixes provide additional details about the algorithm and the MATLAB functions.
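The computational core described here is an ordinary discrete convolution: once the Green's-function response at the point of interest has been pre-calculated, a surge time series for any forcing follows without re-running the ocean model. A schematic Python/numpy sketch, with a synthetic impulse response standing in for the pre-calculated ASGF:

```python
import numpy as np

# Sketch of the ASGF idea: convolve a pre-computed impulse response g
# (surge at the POI per unit forcing impulse) with an atmospheric forcing
# series. Both g and the forcing below are synthetic stand-ins; the real
# ASGF comes from a global shallow-water model.

dt_hours = 1.0
t = np.arange(0, 72, dt_hours)

# Toy impulse response: delayed, damped oscillatory surge at the POI.
g = np.exp(-t / 12.0) * np.sin(2 * np.pi * t / 12.0) * (t > 2)

# Toy atmospheric forcing: a storm-pressure pulse passing over 72 hours.
forcing = np.exp(-0.5 * ((t - 30.0) / 6.0) ** 2)

# Causal convolution eta(t) = sum over s of g(s) * forcing(t - s) * dt.
eta = np.convolve(forcing, g)[: t.size] * dt_hours
print("peak surge (arbitrary units):", float(eta.max()))
```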
Kan, Monica W. K.; Yu, Peter K. N.; Leung, Lucullus H. T.
2013-01-01
Deterministic linear Boltzmann transport equation (D-LBTE) solvers have recently been developed, and one of the latest available software codes, Acuros XB, has been implemented in a commercial treatment planning system for radiotherapy photon beam dose calculation. One of the major limitations of most commercially available model-based algorithms for photon dose calculation is the ability to account for the effect of electron transport. This induces some errors in patient dose calculations, especially near heterogeneous interfaces between low and high density media such as tissue/lung interfaces. D-LBTE solvers have a high potential of producing accurate dose distributions in and near heterogeneous media in the human body. Extensive previous investigations have shown that D-LBTE solvers are able to produce dose calculation accuracy comparable to that of Monte Carlo methods, at speeds suitable for clinical use. The current paper reviews the dosimetric evaluations of D-LBTE solvers for external beam photon radiotherapy. This content summarizes and discusses dosimetric validations for D-LBTE solvers in both homogeneous and heterogeneous media under different circumstances and also the clinical impact on various diseases due to the conversion of dose calculation from a conventional convolution/superposition algorithm to a recently released D-LBTE solver. PMID:24066294
Accelerated Superposition State Molecular Dynamics for Condensed Phase Systems.
Ceotto, Michele; Ayton, Gary S; Voth, Gregory A
2008-04-01
An extension of superposition state molecular dynamics (SSMD) [Venkatnathan and Voth J. Chem. Theory Comput. 2005, 1, 36] is presented with the goal to accelerate timescales and enable the study of "long-time" phenomena for condensed phase systems. It does not require any a priori knowledge about final and transition state configurations, or specific topologies. The system is induced to explore new configurations by virtue of a fictitious (free-particle-like) accelerating potential. The acceleration method can be applied to all degrees of freedom in the system and can be applied to condensed phases and fluids. PMID:26620930
Scaling of macroscopic superpositions close to a quantum phase transition
NASA Astrophysics Data System (ADS)
Abad, Tahereh; Karimipour, Vahid
2016-05-01
It is well known that in a quantum phase transition (QPT), entanglement remains short ranged [Osterloh et al., Nature (London) 416, 608 (2002), 10.1038/416608a]. We ask whether there is a quantum property of the whole system which diverges near this point. Using the recently proposed measures of quantum macroscopicity, we show that near a quantum critical point, it is the effective size of the macroscopic superposition between the two symmetry-breaking states which grows to the scale of the system size, and its derivative with respect to the coupling shows both singular behavior and scaling properties.
Push-Pull Optical Pumping of Pure Superposition States
NASA Astrophysics Data System (ADS)
Jau, Y.-Y.; Miron, E.; Post, A. B.; Kuzma, N. N.; Happer, W.
2004-10-01
A new optical pumping method, “push-pull pumping,” can produce very nearly pure, coherent superposition states between the initial and the final sublevels of the important field-independent 0-0 clock resonance of alkali-metal atoms. The key requirement for push-pull pumping is the use of D1 resonant light which alternates between left and right circular polarization at the Bohr frequency of the state. The new pumping method works for a wide range of conditions, including atomic beams with almost no collisions, and atoms in buffer gases with pressures of many atmospheres.
Controllable photon bunching by atomic superpositions in a driven cavity
NASA Astrophysics Data System (ADS)
Guo, Weijie; Wang, Yao; Wei, L. F.
2016-04-01
We propose a feasible approach to generate light with controllable photon bunching by adjusting the atomic superpositions in a driven cavity. In the large-detuning limit, i.e., when the cavity is far off resonance with the atom(s) inside, we show that the photons in the cavity are always bunched. In particular, when the effective dispersive interaction equals the detuning between the driving and cavity fields, we find that the value of the second-order correlation g(2)(0) is inversely proportional to the probability of the superposed atomic state. This value could therefore be arbitrarily large, and thus the bunching of the photons could be significantly enhanced.
Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Zuo, Zhen; Shuai, Bing; Wang, Gang; Liu, Xiao; Wang, Xingxing; Wang, Bing; Chen, Yushi
2016-07-01
Existing deep convolutional neural networks (CNNs) have shown great success in image classification. CNNs mainly consist of convolutional and pooling layers, both of which are performed on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representations. In contrast, recurrent neural networks (RNNs) are well known for their ability to encode contextual information among sequential data, and they only require a limited number of network parameters. However, general RNNs can hardly be directly applied to non-sequential data. Thus, we propose hierarchical RNNs (HRNNs). In HRNNs, each RNN layer focuses on modeling spatial dependencies among image regions from the same scale but different locations, while the cross-scale RNN connections model scale dependencies among regions from the same location but different scales. Specifically, we propose two recurrent neural network models: 1) the hierarchical simple recurrent network (HSRN), which is fast and has low computational cost; and 2) the hierarchical long short-term memory recurrent network (HLSTM), which performs better than HSRN at the price of higher computational cost. In this manuscript, we integrate CNNs with HRNNs and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only make use of the representation power of CNNs, but also efficiently encode spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397, and MIT indoor, and competitive results on ILSVRC 2012.
Faster GPU-based convolutional gridding via thread coarsening
NASA Astrophysics Data System (ADS)
Merry, B.
2016-07-01
Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to 3.2 × for single-polarization gridding and 1.9 × for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.
Convolution seal for transition duct in turbine system
Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray
2015-03-10
A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface member for interfacing with a turbine section. The turbine system further includes a convolution seal contacting the interface member to provide a seal between the interface member and the turbine section.
Convolution seal for transition duct in turbine system
Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray
2015-05-26
A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface feature for interfacing with an adjacent transition duct. The turbine system further includes a convolution seal contacting the interface feature to provide a seal between the interface feature and the adjacent transition duct.
Convolutional neural networks for mammography mass lesion classification.
Arevalo, John; Gonzalez, Fabio A; Ramos-Pollan, Raul; Oliveira, Jose L; Guevara Lopez, Miguel Angel
2015-08-01
Feature extraction is a fundamental step when mammography image analysis is addressed using learning-based approaches. Traditionally, problem-dependent handcrafted features are used to represent the content of images. An alternative approach successfully applied in other domains is the use of neural networks to automatically discover good features. This work presents an evaluation of convolutional neural networks to learn features for mammography mass lesions before feeding them to a classification stage. Experimental results showed that this approach is a suitable strategy, outperforming the state-of-the-art representation by raising the area under the ROC curve from 79.9% to 86%. PMID:26736382
Continuous speech recognition based on convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Qing-qing; Liu, Yong; Pan, Jie-lin; Yan, Yong-hong
2015-07-01
Convolutional Neural Networks (CNNs), which have shown success in achieving translation invariance for many image processing tasks, are investigated for continuous speech recognition in this paper. Compared to Deep Neural Networks (DNNs), which have been proven successful in many speech recognition tasks, CNNs can reduce the NN model sizes significantly and at the same time achieve even better recognition accuracies. Experiments on the standard speech corpus TIMIT showed that CNNs outperformed DNNs in terms of accuracy while having an even smaller model size.
A digital model for streamflow routing by convolution methods
Doyle, W.H., Jr.; Shearman, H.O.; Stiltner, G.J.; Krug, W.O.
1984-01-01
U.S. Geological Survey computer model, CONROUT, for routing streamflow by unit-response convolution flow-routing techniques from an upstream channel location to a downstream channel location has been developed and documented. Calibration and verification of the flow-routing model and subsequent use of the model for simulation is also documented. Three hypothetical examples and two field applications are presented to illustrate basic flow-routing concepts. Most of the discussion is limited to daily flow routing since, to date, all completed and current studies of this nature involve daily flow routing. However, the model is programmed to accept hourly flow-routing data. (USGS)
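A minimal sketch of unit-response convolution routing in the spirit of CONROUT (Python with numpy): the downstream hydrograph is the discrete convolution of the upstream daily flows with a unit-response function. The gamma-shaped unit response below is a generic textbook choice, not the model's calibrated response.

```python
import numpy as np

# Route an upstream daily-flow hydrograph to a downstream location by
# convolving it with a unit-response function (unit hydrograph).

days = np.arange(40)
unit_response = days ** 2 * np.exp(-days / 2.0)
unit_response /= unit_response.sum()               # conserve total volume

upstream = np.zeros(40)
upstream[5:8] = [100.0, 300.0, 150.0]              # a 3-day flood pulse

downstream = np.convolve(upstream, unit_response)[:40]
print("peak:", float(downstream.max()), "on day", int(downstream.argmax()))
```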
New Syndrome Decoding Techniques for the (n, K) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
This paper presents a new syndrome decoding algorithm for the (n,k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector (circumflex)E(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystemmatic (3,1)CC.
New syndrome decoding techniques for the (n, k) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector (circumflex)E(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystemmatic (3, 1)CC. Previously announced in STAR as N83-34964
Simplified Syndrome Decoding of (n, 1) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector circumflex E(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.
Is turbulent mixing a self-convolution process?
Venaille, Antoine; Sommeria, Joel
2008-06-13
Experimental results for the evolution of the probability distribution function (PDF) of a scalar mixed by a turbulent flow in a channel are presented. The sequence of PDFs from an initial skewed distribution to a sharp Gaussian is found to be nonuniversal. The route toward homogenization depends on the ratio between the cross sections of the dye injector and the channel. In connection with this observation, advantages, shortcomings, and applicability of models for the PDF evolution based on a self-convolution mechanism are discussed. PMID:18643510
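For orientation, a sketch of the self-convolution mechanism under discussion: repeatedly convolving the PDF with itself (then rescaling to keep the mean fixed) drives an initially skewed PDF toward a Gaussian, as the central limit theorem suggests. The exponential initial PDF below is an arbitrary skewed example, not data from the experiment (Python with numpy).

```python
import numpy as np

# Self-convolution model of mixing: at each step the PDF of the scalar is
# replaced by the PDF of the average of two independent samples, i.e. a
# self-convolution followed by the variable change y = s/2.

x = np.linspace(0, 4, 4001)
dx = x[1] - x[0]
p = np.exp(-x); p /= p.sum() * dx                  # skewed initial PDF

for step in range(4):
    m = (x * p).sum() * dx
    var = ((x - m) ** 2 * p).sum() * dx
    skew = ((x - m) ** 3 * p).sum() * dx / var ** 1.5
    print(f"step {step}: mean={m:.3f} skewness={skew:.3f}")  # skew -> 0
    q = np.convolve(p, p) * dx                     # PDF of X1 + X2
    s = np.arange(q.size) * dx                     # support of the sum
    p = 2.0 * np.interp(x, s / 2.0, q)             # rescale: Y = S/2
```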
A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution
Walker, D.W.
1992-03-01
This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.
Visualizing Vector Fields Using Line Integral Convolution and Dye Advection
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu
1996-01-01
We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
Evolution of superpositions of quantum states through a level crossing
Torosov, B. T.; Vitanov, N. V.
2011-12-15
The Landau-Zener-Stueckelberg-Majorana (LZSM) model is widely used for estimating transition probabilities in the presence of crossing energy levels in quantum physics. This model, however, makes the unphysical assumption of an infinitely long constant interaction, which introduces a divergent phase in the propagator. This divergence remains hidden when estimating output probabilities for a single input state insofar as the divergent phase cancels out. In this paper we show that, because of this divergent phase, the LZSM model is inadequate to describe the evolution of pure or mixed superposition states across a level crossing. The LZSM model can be used only if the system is initially in a single state or in a completely mixed superposition state. To this end, we show that the more realistic Demkov-Kunike model, which assumes a hyperbolic-tangent level crossing and a hyperbolic-secant interaction envelope, is free of divergences and is a much more adequate tool for describing the evolution through a level crossing for an arbitrary input state. For multiple crossing energies which are reducible to one or more effective two-state systems (e.g., by the Majorana and Morris-Shore decompositions), similar conclusions apply: the LZSM model does not produce definite values of the populations and the coherences, and one should use the Demkov-Kunike model instead.
Experiments testing macroscopic quantum superpositions must be slow.
Mari, Andrea; De Palma, Giacomo; Giovannetti, Vittorio
2016-01-01
We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires to accelerate the charge, that can entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides an indirect evidence for the existence of gravitational vacuum fluctuations and for the possibility of entangling a particle with quantum gravitational radiation. PMID:26959656
Experiments testing macroscopic quantum superpositions must be slow
NASA Astrophysics Data System (ADS)
Mari, Andrea; de Palma, Giacomo; Giovannetti, Vittorio
2016-03-01
We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires to accelerate the charge, that can entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides an indirect evidence for the existence of gravitational vacuum fluctuations and for the possibility of entangling a particle with quantum gravitational radiation.
Modeling scattering from azimuthally symmetric bathymetric features using wavefield superposition.
Fawcett, John A
2007-12-01
In this paper, an approach for modeling the scattering from azimuthally symmetric bathymetric features is described. These features are useful models for small mounds and indentations on the seafloor at high frequencies and seamounts, shoals, and basins at low frequencies. A bathymetric feature can be considered as a compact closed region, with the same sound speed and density as one of the surrounding media. Using this approach, a number of numerical methods appropriate for a partially buried target or facet problem can be applied. This paper considers the use of wavefield superposition and because of the azimuthal symmetry, the three-dimensional solution to the scattering problem can be expressed as a Fourier sum of solutions to a set of two-dimensional scattering problems. In the case where the surrounding two half spaces have only a density contrast, a semianalytic coupled mode solution is derived. This provides a benchmark solution to scattering from a class of penetrable hemispherical bosses or indentations. The details and problems of the numerical implementation of the wavefield superposition method are described. Example computations using the method for a simple scattering feature on a seabed are presented for a wide band of frequencies. PMID:18247740
Runs in superpositions of renewal processes with applications to discrimination
NASA Astrophysics Data System (ADS)
Alsmeyer, Gerold; Irle, Albrecht
2006-02-01
Wald and Wolfowitz [Ann. Math. Statist. 11 (1940) 147-162] introduced the run test for testing whether two samples of i.i.d. random variables follow the same distribution. Here a run means a consecutive subsequence of maximal length from only one of the two samples. In this paper we contribute to the problem of runs and resulting test procedures for the superposition of independent renewal processes, which may be interpreted as arrival processes of customers from two different input channels at the same service station. To be more precise, let (S_n)_{n≥1} and (T_n)_{n≥1} be the arrival processes for channel 1 and channel 2, respectively, and (W_n)_{n≥1} their superposition with counting process (N(t))_{t≥0}. Let further R_n be the number of runs in W_1,...,W_n and R_t the number of runs observed up to time t. We study the asymptotic behavior of R_n and R_t, first for the case where (S_n)_{n≥1} and (T_n)_{n≥1} have exponentially distributed increments with parameters λ1 and λ2, and then for the more difficult situation when these increments have an absolutely continuous distribution. These results are used to design asymptotic level-α tests for testing λ1 = λ2 against λ1 ≠ λ2 in the first case, and for testing for equal scale parameters in the second.
Time-Temperature Superposition Applied to PBX Mechanical Properties
NASA Astrophysics Data System (ADS)
Thompson, Darla; Deluca, Racci
2011-06-01
The use of plastic-bonded explosives (PBXs) in weapon applications requires a certain level of structural/mechanical integrity. Uniaxial tension and compression experiments characterize the mechanical response of materials over a wide range of temperatures and strain rates, providing the basis for predictive modeling in more complex geometries. After years of data collection on a wide variety of PBX formulations, we have applied time-temperature superposition principles to a mechanical properties database which includes PBX 9501, PBX 9502, PBXN-110, PBXN-9, and HPP (propellant). The results of quasi-static tension and compression, SHPB compression, and cantilever DMA are compared. Time-temperature relationships of maximum stress and corresponding strain values are analyzed in addition to the more conventional analysis of modulus. Our analysis shows adherence to the principles of time-temperature superposition and correlations of mechanical response to the binder glass transition and specimen density. Direct ties relate time-temperature analysis to the underlying basis of existing PBX mechanical models (ViscoSCRAM). Results suggest that, within limits, mechanical response can be predicted at conditions not explicitly measured. LA-UR 11-01096.
Time-temperature superposition applied to PBX mechanical properties
NASA Astrophysics Data System (ADS)
Thompson, Darla; DeLuca, Racci; Wright, Walter J.
2012-03-01
The use of plastic-bonded explosives (PBXs) in weapon applications requires that they possess and maintain a level of structural/mechanical integrity. Uniaxial tension and compression experiments are typically used to characterize the mechanical response of materials over a wide range of temperatures and strain rates, providing the basis for predictive modeling in more complex geometries. After many years of data collection on a variety of PBX formulations, we have here applied the principles of time-temperature superposition to a mechanical properties database which includes PBX 9501, PBX 9502, PBXN-110, PBXN-9, and HPP (propellant). Consistencies are demonstrated between the results of quasi-static tension and compression, dynamic Split-Hopkinson Pressure Bar (SHPB) compression, and cantilever Dynamic Mechanical Analysis (DMA). Time-temperature relationships of maximum stress and corresponding strain values are analyzed, in addition to the more conventional analysis of modulus. The extensive analysis shows adherence to the principles of time-temperature superposition and correlations of mechanical response to binder glass-transition temperature (Tg) and specimen density. Direct ties exist between the time-temperature analysis and the underlying basis of a useful existing PBX mechanical model (ViscoSCRAM). Results give confidence that, with some limitations, mechanical response can be predicted at conditions not explicitly measured.
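For readers unfamiliar with the mechanics of time-temperature superposition, the sketch below maps a test condition onto a reduced-rate master-curve axis using the WLF equation. The "universal" WLF constants and the reference temperature are assumptions for illustration; the fitted shift factors for specific PBX binders are not given in the abstract.

```python
import numpy as np

def wlf_shift(T, T_ref, c1=17.4, c2=51.6):
    """log10 of the horizontal shift factor a_T from the WLF equation.

    c1 and c2 default to the 'universal' WLF constants referenced to Tg;
    a real PBX binder would require fitted values.
    """
    return -c1 * (T - T_ref) / (c2 + (T - T_ref))

def reduced_log_rate(strain_rate, T, T_ref):
    """Reduced log10(rate * a_T): the abscissa of the master curve at T_ref."""
    return np.log10(strain_rate) + wlf_shift(T, T_ref)

# Example: a 1e-3 1/s test at 50 C shifted onto a 23 C master curve.
print(reduced_log_rate(1e-3, T=50.0, T_ref=23.0))
```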
NASA Astrophysics Data System (ADS)
Galatola, P.
2016-02-01
By means of a perturbative scheme, we determine analytically the capillary energy of a spheroidal colloid floating on a deformed fluid interface in terms of the local curvature tensor of the background deformation. We validate our results, which hold for small ellipticity of the particle and small deformations of the surface, by an exact numerical calculation. As an application of our perturbative approach, we determine the asymptotic interaction, for large separations d, between two different spheroidal particles. The dominant contribution is quadrupolar and proportional to d^{-4}. It coincides with the known superposition approximation and is zero if one of the two particles is spherical. The next-to-leading approximation, proportional to d^{-8}, is always attractive and independent of the orientation of the two colloids. It is the dominant contribution to the interaction between a spheroidal and a spherical colloid.
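Schematically, the large-separation expansion described above can be written as the two-term form below; the coefficients A and B are placeholders introduced here for illustration, not the paper's explicit expressions.

```latex
% Schematic two-term expansion; A and B are placeholder coefficients.
% The quadrupolar A term vanishes when either particle is spherical;
% the -B/d^8 term (B > 0) is attractive and orientation independent.
U(d) \simeq \frac{A(\varphi_1,\varphi_2)}{d^{4}} - \frac{B}{d^{8}} + \cdots
```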
NASA Astrophysics Data System (ADS)
Strodel, Birgit; Wales, David J.
2008-12-01
Approximate free energy surfaces and transition rates are presented for alanine dipeptide for a variety of force fields and implicit solvent models. Our calculations are based upon local minima, transition states and pathways characterised for each potential energy surface using geometry optimisation. The superposition approach employing only local minima and harmonic densities of states provides a representation of low-lying regions of the free energy surfaces. However, including contributions from the transition states of the potential energy surface and selected points obtained from displacements along the corresponding reaction vectors produces surfaces that compare quite well with results from replica exchange molecular dynamics. Characterising the local minima, transition states, normal modes, pathways, rate constants and free energy surfaces for each force field within this framework typically requires between one and five minutes of CPU time on a single processor.
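A minimal sketch of the harmonic superposition idea, assuming classical harmonic densities of states: each minimum contributes a Boltzmann weight divided by the product of its normal-mode frequencies, and relative occupations follow. The energies, frequencies, and unit system below are hypothetical.

```python
import numpy as np

KB = 0.0019872  # Boltzmann constant in kcal/(mol K), assumed unit system

def hsa_occupation(E, log_prod_freq, T):
    """Equilibrium occupation probabilities of local minima under the
    classical harmonic superposition approximation.

    E             : array of minimum energies (kcal/mol)
    log_prod_freq : sum of log normal-mode frequencies for each minimum
    T             : temperature (K)
    """
    beta = 1.0 / (KB * T)
    # Log harmonic partition function of each minimum, up to constants
    # that cancel in the ratio.
    log_z = -beta * np.asarray(E) - np.asarray(log_prod_freq)
    log_z -= log_z.max()          # numerical stabilisation before exp
    z = np.exp(log_z)
    return z / z.sum()

# Two hypothetical minima: the higher-energy one has softer modes.
print(hsa_occupation(E=[0.0, 1.5], log_prod_freq=[10.0, 8.5], T=300.0))
```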
NASA Astrophysics Data System (ADS)
Balabin, Roman M.
2011-03-01
The quantum chemistry of conformation equilibrium is a field where great accuracy (better than 100 cal mol^{-1}) is needed because the energy difference between molecular conformers rarely exceeds 1000-3000 cal mol^{-1}. The conformation equilibrium of straight-chain (normal) alkanes is of particular interest and importance for modern chemistry. In this paper, an extra error source for high-quality ab initio (first principles) and DFT calculations of the conformation equilibrium of normal alkanes, namely the intramolecular basis set superposition error (BSSE), is discussed. In contrast to out-of-plane vibrations in benzene molecules, diffuse functions on carbon and hydrogen atoms were found to greatly reduce the relative BSSE of n-alkanes. The corrections due to the intramolecular BSSE were found to be almost identical for the MP2, MP4, and CCSD(T) levels of theory. Their cancelation is expected when CCSD(T)/CBS (CBS, complete basis set) energies are evaluated by addition schemes. For larger normal alkanes (N > 12), the magnitude of the BSSE correction was found to be up to three times larger than the relative stability of the conformer; in this case, the basis set superposition error led to a difference of two orders of magnitude in conformer abundance. No error cancelation due to the basis set superposition was found. A comparison with amino acid, peptide, and protein data is provided.
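The intramolecular BSSE treated in this paper is subtler than the intermolecular case, but the standard Boys-Bernardi counterpoise arithmetic conveys the underlying idea. The sketch below only combines precomputed fragment energies and makes no claim about the paper's own correction scheme.

```python
def counterpoise_corrected(E_AB, E_A_in_AB_basis, E_B_in_AB_basis,
                           E_A_own_basis, E_B_own_basis):
    """Boys-Bernardi counterpoise-corrected interaction energy for a
    dimer A...B (all energies in the same units, e.g. hartree).

    Each fragment is recomputed both in the full dimer basis (ghost
    functions on the partner) and in its own basis; the BSSE is the
    spurious stabilisation gained by borrowing the partner's functions.
    """
    bsse = ((E_A_own_basis - E_A_in_AB_basis)
            + (E_B_own_basis - E_B_in_AB_basis))
    e_int_uncorrected = E_AB - E_A_own_basis - E_B_own_basis
    # Equivalent to E_AB - E_A(AB basis) - E_B(AB basis).
    return e_int_uncorrected + bsse
```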
NASA Astrophysics Data System (ADS)
Martin, Roland; Komatitsch, Dimitri; Bruthiaux, Emilien; Gedney, Stephen D.
2010-05-01
We present and discuss two different unsplit formulations of the frequency-shift PML, based on convolutional or non-convolutional integration of auxiliary memory variables. The Perfectly Matched Layer absorbing boundary condition has proven numerically very efficient for the elastic wave equation, absorbing both body waves at non-grazing incidence and surface waves. However, at grazing incidence the classical discrete Perfectly Matched Layer suffers from large spurious reflections that make it less efficient, for instance for very thin mesh slices, for sources located very close to the edge of the mesh, and/or for receivers at very large offsets. In [1] we improved the Perfectly Matched Layer at grazing incidence for the seismic wave equation based on an unsplit convolution technique. This improved PML has a memory cost similar to that of the classical PML. We illustrate the efficiency of this improved Convolutional Perfectly Matched Layer with numerical benchmarks using a staggered finite-difference method on a very thin mesh slice for an isotropic material, and show that the results are significantly better than with the classical Perfectly Matched Layer technique. We also show that, like the classical formulation, the technique is intrinsically unstable for some anisotropic materials. In this case, following an idea of [2], it has been stabilized by adding suitable correction terms along the coordinate axes [3]. More specifically, this has been applied to the spectral-element method based on a hybrid first/second-order time integration scheme, in which the Newmark time-marching scheme allows us to match perfectly, at the base of the absorbing layer, a velocity-stress formulation in the PML and a second-order displacement formulation in the inner computational domain. Our unsplit CPML formulation has the advantage of reducing the memory storage of CPML
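A minimal sketch of the recursive-convolution update at the heart of unsplit CPML formulations, in the standard Roden-Gedney form: the convolution with the frequency-shifted PML kernel reduces to one memory variable per field derivative, updated locally in time. The damping profile d, the stretching kappa, and the frequency shift alpha are assumed given; this is a generic sketch, not the hybrid spectral-element scheme described above.

```python
import numpy as np

def cpml_coefficients(d, kappa, alpha, dt):
    """Recursive-convolution coefficients for an unsplit CPML:
    psi^{n+1} = b * psi^n + a * (df/dx). Assumes d > 0 inside the layer."""
    b = np.exp(-(d / kappa + alpha) * dt)
    a = d * (b - 1.0) / (kappa * (d + kappa * alpha))
    return a, b

def update_memory_variable(psi, dfdx, a, b):
    """Advance the auxiliary memory variable by one time step; the
    PML-corrected spatial derivative is then dfdx / kappa + psi."""
    return b * psi + a * dfdx
```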
Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks.
Dosovitskiy, Alexey; Fischer, Philipp; Springenberg, Jost Tobias; Riedmiller, Martin; Brox, Thomas
2016-09-01
Deep convolutional networks have proven to be very successful in learning task-specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks mostly follows the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While features learned with our approach cannot compete with class-specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor. PMID:26540673
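The surrogate-class construction can be sketched in a few lines of PyTorch/torchvision (assuming a recent torchvision that accepts tensor inputs); the transformation family and parameter ranges below are illustrative, not the exact augmentations used by the authors.

```python
import torch
from torchvision import transforms

# A family of random transformations applied to each 'seed' patch;
# the parameter ranges here are illustrative placeholders.
augment = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.7, 1.0)),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.RandomRotation(20),
])

def make_surrogate_class(seed_patch, n_samples=16):
    """One surrogate class: n transformed copies of a single seed patch
    (a CxHxW tensor), all sharing the same seed-index label."""
    return torch.stack([augment(seed_patch) for _ in range(n_samples)])
```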
Convolutional neural network architectures for predicting DNA–protein binding
Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.
2016-01-01
Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
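A minimal PyTorch sketch of the kind of architecture explored here: one-hot DNA as a 4-channel 1D signal, a bank of convolutional kernels acting as motif scanners, global max pooling, and a linear output. The kernel count and width are placeholder hyperparameters, not the best-performing settings identified in the paper.

```python
import torch
import torch.nn as nn

class MotifCNN(nn.Module):
    """One-layer convolutional model for one-hot DNA input of shape
    (batch, 4, sequence_length)."""
    def __init__(self, n_kernels=64, kernel_size=24):
        super().__init__()
        self.conv = nn.Conv1d(4, n_kernels, kernel_size)
        self.fc = nn.Linear(n_kernels, 1)

    def forward(self, x):
        h = torch.relu(self.conv(x))   # (batch, n_kernels, L')
        h = h.max(dim=2).values        # global max pooling per kernel
        return self.fc(h)              # binding logit

x = torch.zeros(8, 4, 101)  # dummy batch shaped like 8 one-hot 101-bp sequences
print(MotifCNN()(x).shape)  # torch.Size([8, 1])
```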
Classification of Histology Sections via Multispectral Convolutional Sparse Coding*
Zhou, Yin; Barner, Kenneth; Spellman, Paul
2014-01-01
Image-based classification of histology sections plays an important role in predicting clinical outcomes. However, this task is very challenging due to the presence of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state). In the field of biomedical imaging, for the purposes of visualization and/or quantification, different stains are typically used for different targets of interest (e.g., cellular/subcellular events), which generates multi-spectrum data (images) through various types of microscopes and, as a result, provides the possibility of learning biological-component-specific features by exploiting multispectral information. We propose a multispectral feature learning model that automatically learns a set of convolution filter banks from separate spectra to efficiently discover the intrinsic tissue morphometric signatures, based on convolutional sparse coding (CSC). The learned feature representations are then aggregated through the spatial pyramid matching framework (SPM) and finally classified using a linear SVM. The proposed system has been evaluated using two large-scale tumor cohorts, collected from The Cancer Genome Atlas (TCGA). Experimental results show that the proposed model 1) outperforms systems utilizing sparse coding for unsupervised feature learning (e.g., PSD-SPM [5]); 2) is competitive with systems built upon features with biological prior knowledge (e.g., SMLSPM [4]). PMID:25554749
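As a didactic stand-in for the convolutional sparse coding step, the following sketch performs one ISTA iteration for a single-channel image; the actual optimizer, the multispectral handling, and the SPM/SVM stages of the pipeline above are not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve

def ista_step(image, filters, codes, lam, step):
    """One ISTA iteration for single-channel convolutional sparse coding:
    minimize 0.5*||image - sum_k d_k * z_k||^2 + lam * sum_k ||z_k||_1
    over the code maps z_k, where '*' is 2D convolution."""
    # Current reconstruction and residual.
    recon = sum(fftconvolve(z, d, mode='same') for z, d in zip(codes, filters))
    residual = image - recon
    new_codes = []
    for z, d in zip(codes, filters):
        # Gradient step: correlate the residual with the filter
        # (i.e., convolve with the flipped kernel).
        grad = fftconvolve(residual, d[::-1, ::-1], mode='same')
        u = z + step * grad
        # Soft thresholding enforces sparsity of the code maps.
        new_codes.append(np.sign(u) * np.maximum(np.abs(u) - lam * step, 0.0))
    return new_codes
```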
Enhancing Neutron Beam Production with a Convoluted Moderator
Iverson, Erik B; Baxter, David V; Muhrer, Guenter; Ansell, Stuart; Gallmeier, Franz X; Dalgliesh, Robert; Lu, Wei; Kaiser, Helmut
2014-10-01
We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally-enhanced neutron beam source, improving beam effectiveness over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.