Science.gov

Sample records for convolution superposition calculations

  1. Fast convolution-superposition dose calculation on graphics hardware.

    PubMed

    Hissoiny, Sami; Ozell, Benoît; Després, Philippe

    2009-06-01

    The numerical calculation of dose is central to treatment planning in radiation therapy and is at the core of optimization strategies for modern delivery techniques. In a clinical environment, dose calculation algorithms are required to be accurate and fast. The accuracy is typically achieved through the integration of patient-specific data and extensive beam modeling, which generally results in slower algorithms. In order to alleviate execution speed problems, the authors have implemented a modern dose calculation algorithm on a massively parallel hardware architecture. More specifically, they have implemented a convolution-superposition photon beam dose calculation algorithm on a commodity graphics processing unit (GPU). They have investigated a simple porting scenario as well as slightly more complex GPU optimization strategies. They have achieved speed improvement factors ranging from 10 to 20 times with GPU implementations compared to central processing unit (CPU) implementations, with higher values corresponding to larger kernel and calculation grid sizes. In all cases, the numerical accuracy of the GPU calculations was preserved with respect to the CPU calculations. These results show that streaming architectures such as GPUs can significantly accelerate dose calculation algorithms and suggest benefits for numerically intensive processes such as optimization strategies, in particular for complex delivery techniques such as IMRT and arc therapy.
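
    The core operation referred to throughout these records can be illustrated with a toy kernel superposition. The sketch below is a minimal, illustrative CPU version only (not the authors' GPU code); the TERMA distribution, kernel shape, and grid size are placeholder assumptions.

```python
# Minimal convolution/superposition sketch: dose as the superposition of an energy
# deposition kernel over the TERMA (total energy released per unit mass) grid.
# The kernel here is a crude placeholder; clinical kernels are forward-peaked and,
# strictly, should be density-scaled (superposition) rather than invariant (convolution).
import numpy as np
from scipy.ndimage import convolve

def dose_from_terma(terma, kernel):
    """Superpose a spatially invariant kernel over a 3D TERMA grid."""
    return convolve(terma, kernel, mode="constant", cval=0.0)

terma = np.zeros((32, 32, 32))
terma[8:24, 8:24, :16] = 1.0            # crude "beam" releasing energy in a slab
kernel = np.ones((5, 5, 5)) / 125.0     # placeholder energy deposition kernel
dose = dose_from_terma(terma, kernel)
```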

  2. A convolution-superposition dose calculation engine for GPUs

    SciTech Connect

    Hissoiny, Sami; Ozell, Benoit; Despres, Philippe

    2010-03-15

    Purpose: Graphics processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, and kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single-threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x, or 0.58 versus 16.6 s per beam in single-threaded mode. An acceleration factor of 46x has been obtained for the total energy released per unit mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions have also been obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical in obtaining significant acceleration factors. These results can potentially have a significant impact on complex dose delivery techniques requiring intensive dose calculations, such as intensity-modulated radiation therapy (IMRT) and arc therapy. They are also relevant for adaptive radiation therapy, where dose results must be obtained rapidly.

  3. The denoising of Monte Carlo dose distributions using convolution superposition calculations.

    PubMed

    El Naqa, I; Cui, J; Lindsay, P; Olivera, G; Deasy, J O

    2007-09-07

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results, but at the cost of approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoised result is iteratively updated by adding the denoised residual difference between the current result and the MC image. Multi-scale methods (wavelets or contourlets) were used for denoising the residual. The iterations are initialized with the CS data. In the second approach, we used a frequency-splitting technique based on quadrature filtering to combine low-frequency components derived from MC simulations with high-frequency components derived from CS calculations. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably incorporate scatter more accurately, while high-frequency details are taken from the CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head-and-neck cases. The MC dose distributions were calculated with the open-source Dose Planning Method (DPM) MC code at varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction.
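
    As a rough illustration of the frequency-splitting approach described above, the sketch below combines low spatial frequencies from an MC dose grid with high frequencies from a CS dose grid using a radially symmetric Butterworth response. The cutoff, filter order, and the use of a simple complementary (1 - H) high-pass are illustrative assumptions, not the exact quadrature filters of the paper.

```python
# Frequency-splitting sketch: smooth (low-frequency) content from Monte Carlo,
# detail (high-frequency) content from convolution/superposition.
import numpy as np

def butterworth_lowpass(shape, cutoff, order=3):
    """Radially symmetric Butterworth low-pass response on an FFT grid."""
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    r = np.sqrt(sum(f ** 2 for f in freqs))
    return 1.0 / (1.0 + (r / cutoff) ** (2 * order))

def frequency_split(dose_mc, dose_cs, cutoff=0.05, order=3):
    H = butterworth_lowpass(dose_mc.shape, cutoff, order)
    lo = np.fft.ifftn(np.fft.fftn(dose_mc) * H).real          # smooth component from MC
    hi = np.fft.ifftn(np.fft.fftn(dose_cs) * (1.0 - H)).real  # detail component from CS
    return lo + hi
```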

  4. GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation.

    PubMed

    Zhou, Bo; Yu, Cedric X; Chen, Danny Z; Hu, X Sharon

    2010-11-01

    Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans as emerging advanced radiation therapy technologies are exerting ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation so as to make the method practical for day-to-day usage. Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm is modified to match the GPU hardware architecture. The performance of the authors' GPU-based implementation on an Nvidia GTX260 card is compared to a multithreaded software implementation on a quad-core system. A speedup in the range of 6.7-11.4x is observed for the clinical cases used. The less than 2% statistical fluctuation also indicates that the accuracy of the authors' GPU-based implementation is in good agreement with the results from the quad-core CPU implementation. This work shows that GPU is a feasible and cost-efficient solution compared to other alternatives such as using cluster machines or field-programmable gate arrays for satisfying the increasing demands on computation speed and accuracy of dose calculation. But there are also inherent limitations of using GPU for accelerating MC-type applications, which are also analyzed in detail in this article.

  5. Accurate convolution/superposition for multi-resolution dose calculation using cumulative tabulated kernels.

    PubMed

    Lu, Weiguo; Olivera, Gustavo H; Chen, Ming-Li; Reckwerdt, Paul J; Mackie, Thomas R

    2005-02-21

    Convolution/superposition (C/S) is regarded as the standard dose calculation method in most modern radiotherapy treatment planning systems. Different implementations of C/S can result in significantly different dose distributions. This paper addresses two major implementation issues associated with collapsed cone C/S: one is how to utilize tabulated kernels instead of analytical parametrizations, and the other is how to deal with voxel size effects. Three methods that utilize tabulated kernels are presented in this paper. These methods differ in the effective kernels used: the differential kernel (DK), the cumulative kernel (CK) or the cumulative-cumulative kernel (CCK). They result in slightly different computation times but significantly different voxel size effects. Both simulated and real multi-resolution dose calculations are presented. For the simulation tests, we use arbitrary kernels and various voxel sizes with a homogeneous phantom, and assume forward energy transport only. Simulations with voxel sizes up to 1 cm show that the CCK algorithm has errors within 0.1% of the maximum gold standard dose. The real dose calculations use a heterogeneous slab phantom and both the 'broad' (5 x 5 cm2) and the 'narrow' (1.2 x 1.2 cm2) tomotherapy beams. Various voxel sizes (0.5 mm, 1 mm, 2 mm, 4 mm and 8 mm) are used for the dose calculations. The results show that all three algorithms have negligible differences (0.1%) for the dose calculation at the fine resolution (0.5 mm voxels), but the differences become significant when the voxel size increases. For the DK or CK algorithm in the broad (narrow) beam dose calculation, the dose differences between the 0.5 mm voxels and voxels up to 8 mm (4 mm) are around 10% (7%) of the maximum dose. For the broad (narrow) beam dose calculation using the CCK algorithm, the corresponding dose differences are around 1% of the maximum dose. Among all three methods, the CCK algorithm is
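
    The distinction between the three tabulated forms can be made concrete with a short sketch. Below, a cumulative kernel (CK) and a cumulative-cumulative kernel (CCK) are built from a placeholder differential kernel by numerical integration; averaging the CK over a finite voxel via the CCK is what suppresses the voxel-size effect. The kernel shape and grids are invented for illustration and are not the tomotherapy kernels of the paper.

```python
# Build CK(r) = integral of DK and CCK(r) = integral of CK from a tabulated
# differential kernel (DK), then deposit energy into finite voxels.
# Trapezoidal integration; the exponential DK is a stand-in for a real tabulated kernel.
import numpy as np

r = np.linspace(0.0, 10.0, 1001)                     # radial grid (cm), illustrative
dk = np.exp(-0.5 * r)                                # placeholder differential kernel
ck = np.concatenate(([0.0], np.cumsum(0.5 * (dk[1:] + dk[:-1]) * np.diff(r))))
cck = np.concatenate(([0.0], np.cumsum(0.5 * (ck[1:] + ck[:-1]) * np.diff(r))))

def energy_between(r1, r2):
    """Kernel energy deposited between radii r1 and r2 (CK difference)."""
    return np.interp(r2, r, ck) - np.interp(r1, r, ck)

def mean_ck_over_voxel(a, b):
    """Voxel-averaged cumulative kernel via the CCK, the quantity that tames voxel-size effects."""
    return (np.interp(b, r, cck) - np.interp(a, r, cck)) / (b - a)
```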

  6. Calculating dose distributions and wedge factors for photon treatment fields with dynamic wedges based on a convolution/superposition method.

    PubMed

    Liu, H H; McCullough, E C; Mackie, T R

    1998-01-01

    A convolution/superposition based method was developed to calculate dose distributions and wedge factors in photon treatment fields generated by dynamic wedges. This algorithm used a dual source photon beam model that accounted for both primary photons from the target and secondary photons scattered from the machine head. The segmented treatment tables (STT) were used to calculate realistic photon fluence distributions in the wedged fields. The inclusion of the extra-focal photons resulted in more accurate dose calculation in high dose gradient regions, particularly in the beam penumbra. The wedge factors calculated using the convolution method were also compared to the measured data and showed good agreement within 0.5%. The wedge factor varied significantly with the field width along the moving jaw direction, but not along the static jaw or the depth direction. This variation was found to be determined by the ending position of the moving jaw, or the STT of the dynamic wedge. In conclusion, the convolution method proposed in this work can be used to accurately compute dose for a dynamic or an intensity modulated treatment based on the fluence modulation in the treatment field.

  7. Convolution/superposition using the Monte Carlo method.

    PubMed

    Naqvi, Shahid A; Earl, Matthew A; Shepard, David M

    2003-07-21

    The convolution/superposition calculations for radiotherapy dose distributions are traditionally performed by convolving polyenergetic energy deposition kernels with TERMA (total energy released per unit mass) precomputed in each voxel of the irradiated phantom. We propose an alternative method in which the TERMA calculation is replaced by random sampling of photon energy, direction and interaction point. Then, a direction is randomly sampled from the angular distribution of the monoenergetic kernel corresponding to the photon energy. The kernel ray is propagated across the phantom, and energy is deposited in each voxel traversed. An important advantage of the explicit sampling of energy is that spectral changes with depth are automatically accounted for. No spectral or kernel hardening corrections are needed. Furthermore, the continuous sampling of photon direction allows us to model sharp changes in fluence, such as those due to collimator tongue-and-groove. The use of explicit photon direction also facilitates modelling of situations where a given voxel is traversed by photons from many directions. Extra-focal radiation, for instance, can therefore be modelled accurately. Our method also allows efficient calculation of a multi-segment/multi-beam IMRT plan by sampling of beam angles and field segments according to their relative weights. For instance, an IMRT plan consisting of seven 14 x 12 cm2 beams with a total of 300 field segments can be computed in 15 min on a single CPU, with 2% statistical fluctuations at the isocentre of the patient's CT phantom divided into 4 x 4 x 4 mm3 voxels. The calculation contains all aperture-specific effects, such as tongue and groove, leaf curvature and head scatter. This contrasts with deterministic methods in which each segment is given equal importance, and the time taken scales with the number of segments. Thus, the Monte Carlo superposition provides a simple, accurate and efficient method for complex radiotherapy dose
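
    A toy 1D version of this sampling scheme is sketched below: a photon energy is drawn from a spectrum, the interaction depth from exponential attenuation, and energy is then deposited along a crude forward-peaked kernel ray. The spectrum, attenuation coefficients, and kernel fall-off are invented illustrative values; the real method samples 3D directions from the angular distributions of monoenergetic kernels.

```python
# Toy Monte Carlo kernel superposition along a single beam axis. Because the photon
# energy is sampled explicitly per history, spectral hardening with depth is captured
# automatically, which is the key point of the method.
import numpy as np

rng = np.random.default_rng(0)
energies = np.array([1.0, 2.0, 4.0, 6.0])                 # MeV bins (hypothetical spectrum)
weights = np.array([0.4, 0.3, 0.2, 0.1])
mu = {1.0: 0.070, 2.0: 0.049, 4.0: 0.034, 6.0: 0.028}     # rough water attenuation, 1/cm

z_edges = np.linspace(0.0, 30.0, 301)                     # 1 mm depth voxels
z = z_edges[:-1]
dose = np.zeros_like(z)

for _ in range(20000):
    e = rng.choice(energies, p=weights)                   # explicit energy sampling
    depth = rng.exponential(1.0 / mu[e])                  # interaction depth
    ray = np.where(z >= depth, np.exp(-0.5 * (z - depth)), 0.0)   # crude kernel ray
    if ray.sum() > 0:
        dose += e * ray / ray.sum()
```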

  8. Convolution/superposition using the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Naqvi, Shahid A.; Earl, Matthew A.; Shepard, David M.

    2003-07-01

    The convolution/superposition calculations for radiotherapy dose distributions are traditionally performed by convolving polyenergetic energy deposition kernels with TERMA (total energy released per unit mass) precomputed in each voxel of the irradiated phantom. We propose an alternative method in which the TERMA calculation is replaced by random sampling of photon energy, direction and interaction point. Then, a direction is randomly sampled from the angular distribution of the monoenergetic kernel corresponding to the photon energy. The kernel ray is propagated across the phantom, and energy is deposited in each voxel traversed. An important advantage of the explicit sampling of energy is that spectral changes with depth are automatically accounted for. No spectral or kernel hardening corrections are needed. Furthermore, the continuous sampling of photon direction allows us to model sharp changes in fluence, such as those due to collimator tongue-and-groove. The use of explicit photon direction also facilitates modelling of situations where a given voxel is traversed by photons from many directions. Extra-focal radiation, for instance, can therefore be modelled accurately. Our method also allows efficient calculation of a multi-segment/multi-beam IMRT plan by sampling of beam angles and field segments according to their relative weights. For instance, an IMRT plan consisting of seven 14 × 12 cm2 beams with a total of 300 field segments can be computed in 15 min on a single CPU, with 2% statistical fluctuations at the isocentre of the patient's CT phantom divided into 4 × 4 × 4 mm3 voxels. The calculation contains all aperture-specific effects, such as tongue and groove, leaf curvature and head scatter. This contrasts with deterministic methods in which each segment is given equal importance, and the time taken scales with the number of segments. Thus, the Monte Carlo superposition provides a simple, accurate and efficient method for complex radiotherapy dose

  9. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    SciTech Connect

    Jacques, Robert; Wong, John; Taylor, Russell; McNutt, Todd

    2011-01-15

    Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by approximately 24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³

  10. Implementation of FFT convolution and multigrid superposition models in the FOCUS RTP system

    NASA Astrophysics Data System (ADS)

    Miften, Moyed; Wiesmeyer, Mark; Monthofer, Suzanne; Krippner, Ken

    2000-04-01

    In radiotherapy treatment planning, convolution/superposition algorithms currently represent the best practical approach for accurate photon dose calculation in heterogeneous tissues. In this work, the implementation, accuracy and performance of the FFT convolution (FFTC) and multigrid superposition (MGS) algorithms are presented. The FFTC and MGS models use the same `TERMA' calculation and are commissioned using the same parameters. Both models use the same spectra, incorporate the same off-axis softening and base incident lateral fluence on the same measurements. In addition, corrections are explicitly applied to the polyenergetic and parallel kernel approximations, and electron contamination is modelled. Spectra generated by Monte Carlo (MC) modelling of treatment heads are used. Calculations using the MC spectra were in excellent agreement with measurements for many linear accelerator types. To speed up the calculations, a number of calculation techniques were implemented, including separate primary and scatter dose calculation, the FFT technique which assumes kernel invariance for the convolution calculation and a multigrid (MG) acceleration technique for the superposition calculation. Timing results show that the FFTC model is faster than MGS by a factor of 4 and 8 for small and large field sizes, respectively. Comparisons with measured data and BEAM MC results for a wide range of clinical beam setups show that (a) FFTC and MGS doses match measurements to better than 2% or 2 mm in homogeneous media; (b) MGS is more accurate than FFTC in lung phantoms where MGS doses are within 3% or 3 mm of BEAM results and (c) FFTC overestimates the dose in lung by a maximum of 9% compared to BEAM.
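
    The speed advantage of the FFTC model comes from the kernel-invariance assumption, which turns the superposition into a product in Fourier space. A minimal sketch of that step is given below; it assumes a single invariant kernel stored in an array of the same shape as the TERMA grid and ignores the heterogeneity handling that distinguishes MGS.

```python
# FFT convolution (FFTC) step: dose = IFFT( FFT(TERMA) * FFT(kernel) ).
# Valid only under the invariant-kernel approximation; circular wrap-around is ignored
# here (real implementations zero-pad). 'kernel' must be centred in an array of the
# same shape as 'terma'.
import numpy as np

def fftc_dose(terma, kernel):
    K = np.fft.rfftn(np.fft.ifftshift(kernel))
    return np.fft.irfftn(np.fft.rfftn(terma) * K, s=terma.shape)
```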

  11. Performance Evaluation of Algorithms in Lung IMRT: A comparison of Monte Carlo, Pencil Beam, Superposition, Fast Superposition and Convolution Algorithms.

    PubMed

    Verma, T; Painuly, N K; Mishra, S P; Shajahan, M; Singh, N; Bhatt, M L B; Jamal, N; Pant, M C

    2016-09-01

    Inclusion of inhomogeneity corrections in intensity-modulated small fields always makes conformal irradiation of lung tumors very complicated for accurate dose delivery. In the present study, the performance of five algorithms, namely Monte Carlo, Pencil Beam, Convolution, Fast Superposition and Superposition, was evaluated in lung cancer Intensity Modulated Radiotherapy planning. Treatment plans for ten lung cancer patients previously planned with the Monte Carlo algorithm were re-planned using the same treatment planning indices (gantry angle, rank, power, etc.) in the other four algorithms. The values of radiotherapy planning parameters such as mean dose, volume of the 95% isodose line, Conformity Index, Homogeneity Index for the target; maximum dose, mean dose, % volume receiving 20 Gy or more by the contralateral lung; % volume receiving 30 Gy or more; % volume receiving 25 Gy or more, mean dose received by the heart; % volume receiving 35 Gy or more; % volume receiving 50 Gy or more, mean dose to the esophagus; % volume receiving 45 Gy or more, maximum dose received by the spinal cord, and total monitor units and volume of the 50% isodose line were recorded for all ten patients. The performance of the different algorithms was also evaluated statistically. The MC and PB algorithms were found to be better as far as tumor coverage, dose distribution homogeneity in the Planning Target Volume and minimal dose to organs at risk are concerned. The superposition algorithm was found to be better than convolution and fast superposition. In the case of centrally located tumors, it is recommended to use Monte Carlo algorithms for the optimal use of radiotherapy.

  12. Performance Evaluation of Algorithms in Lung IMRT: A comparison of Monte Carlo, Pencil Beam, Superposition, Fast Superposition and Convolution Algorithms

    PubMed Central

    Verma, T.; Painuly, N.K.; Mishra, S.P.; Shajahan, M.; Singh, N.; Bhatt, M.L.B.; Jamal, N.; Pant, M.C.

    2016-01-01

    Background: Inclusion of inhomogeneity corrections in intensity-modulated small fields always makes conformal irradiation of lung tumors very complicated for accurate dose delivery. Objective: In the present study, the performance of five algorithms, namely Monte Carlo, Pencil Beam, Convolution, Fast Superposition and Superposition, was evaluated in lung cancer Intensity Modulated Radiotherapy planning. Materials and Methods: Treatment plans for ten lung cancer patients previously planned with the Monte Carlo algorithm were re-planned using the same treatment planning indices (gantry angle, rank, power, etc.) in the other four algorithms. Results: The values of radiotherapy planning parameters such as mean dose, volume of the 95% isodose line, Conformity Index, Homogeneity Index for the target; maximum dose, mean dose, % volume receiving 20 Gy or more by the contralateral lung; % volume receiving 30 Gy or more; % volume receiving 25 Gy or more, mean dose received by the heart; % volume receiving 35 Gy or more; % volume receiving 50 Gy or more, mean dose to the esophagus; % volume receiving 45 Gy or more, maximum dose received by the spinal cord, and total monitor units and volume of the 50% isodose line were recorded for all ten patients. The performance of the different algorithms was also evaluated statistically. Conclusion: The MC and PB algorithms were found to be better as far as tumor coverage, dose distribution homogeneity in the Planning Target Volume and minimal dose to organs at risk are concerned. The superposition algorithm was found to be better than convolution and fast superposition. In the case of centrally located tumors, it is recommended to use Monte Carlo algorithms for the optimal use of radiotherapy. PMID:27853720

  13. Towards real-time radiation therapy: GPU accelerated superposition/convolution.

    PubMed

    Jacques, Robert; Taylor, Russell; Wong, John; McNutt, Todd

    2010-06-01

    We demonstrate the use of highly parallel graphics processing units (GPUs) to accelerate the superposition/convolution (S/C) algorithm to interactive rates while reducing the number of approximations. S/C first transports the incident fluence to compute the total energy released per unit mass (TERMA) grid. Dose is then calculated by superimposing the dose deposition kernel at each point in the TERMA grid and summing the contributions to the surrounding voxels. The TERMA algorithm was enhanced with physically correct multi-spectral attenuation and a novel inverse formulation for increased performance, accuracy and simplicity. Dose deposition utilized a tilted poly-energetic inverse cumulative-cumulative kernel, with the novel option of using volumetric mip-maps to approximate solid angle ray casting. Exact radiological path ray casting decreased discretization errors. We achieved a speedup of 34x-98x over a highly optimized CPU implementation. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  14. SU-E-T-508: A Novel Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm.

    PubMed

    Jacques, R; McNutt, T

    2012-06-01

    We developed a better method of accounting for the effects of heterogeneity in convolution algorithms. We integrated this method into our GPU-accelerated, multi-energetic convolution/superposition (C/S) implementation. In doing so, we have created a new dose algorithm: heterogeneity compensated superposition (HCS). Convolution in the spherical density-scaled distance space, a.k.a. C/S, has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to faster fall-off and re-buildup than predicted by C/S. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to traditional C/S. We implemented the effective density function as a multivariate first-order recursive filter. We compared HCS against traditional C/S using the ICCR 2000 Monte Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. For the patient cases, we created custom routines capable of using the discrete material mappings used by Monte Carlo; C/S normally considers each voxel to be a mixture of materials based on a piecewise-linear density look-up table. Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near-Monte-Carlo results were achieved. HCS improved the mean Van Dyk error by 0.79 (% of Dmax or mm) on average for the patient volumes, reducing the mean error from 1.93%|mm to 1.14%|mm. We found a mean error difference of up to 0.30%|mm between linear and discrete material mappings. Very low densities (i.e., <0.1 g/cm³) remained problematic, but may be solvable with a better filter function. We have developed a novel dose calculation algorithm based on the principles of C/S that better accounts for the electron disequilibrium caused by patient heterogeneity. This work was funded in part by the National Science
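
    A hedged, one-dimensional illustration of the effective-density idea is given below: the density seen by the dose kernel is smoothed by a first-order recursive filter along the ray direction, so that dose fall-off and re-buildup near interfaces become gradual rather than abrupt. The smoothing constant is a made-up value; the published HCS filter is multivariate and direction sensitive.

```python
# 1D "effective density" via a first-order recursive (exponential smoothing) filter.
import numpy as np

def effective_density(rho, alpha=0.3):
    """Recursive smoothing of density along a ray; alpha is an illustrative constant."""
    rho_eff = np.empty_like(rho, dtype=float)
    rho_eff[0] = rho[0]
    for i in range(1, len(rho)):
        rho_eff[i] = alpha * rho[i] + (1.0 - alpha) * rho_eff[i - 1]
    return rho_eff

# Water-lung-water slab: the filtered density relaxes gradually at each interface,
# mimicking the gradual electron re-buildup that plain density scaling misses.
rho = np.array([1.0] * 20 + [0.25] * 20 + [1.0] * 20)
print(effective_density(rho)[18:26])
```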

  15. Ultrafast convolution/superposition using tabulated and exponential kernels on GPU

    SciTech Connect

    Chen Quan; Chen Mingli; Lu Weiguo

    2011-03-15

    Purpose: Collapsed-cone convolution/superposition (CCCS) dose calculation is the workhorse for IMRT dose calculation. The authors present a novel algorithm for computing CCCS dose on the modern graphics processing unit (GPU). Methods: The GPU algorithm includes a novel TERMA calculation that has no write conflicts and has linear computational complexity. The CCCS algorithm uses either tabulated or exponential cumulative-cumulative kernels (CCKs) as reported in the literature. The authors have demonstrated that the use of exponential kernels can reduce the computational complexity by an order of a dimension while achieving excellent accuracy. Special attention is paid to the unique architecture of the GPU, especially the memory access pattern, which increases performance by more than tenfold. Results: As a result, the tabulated kernel implementation on the GPU is two to three times faster than other GPU implementations reported in the literature. The CCCS implementation showed significant speedup on the GPU over a single-core CPU. With tabulated CCKs, speedups as high as 70 are observed; with exponential CCKs, speedups as high as 90 are observed. Conclusions: Overall, the GPU algorithm using exponential CCKs is 1000-3000 times faster than a highly optimized single-threaded CPU implementation using tabulated CCKs, while the dose differences are within 0.5% and 0.5 mm. This ultrafast CCCS algorithm will allow many time-sensitive applications to use accurate dose calculation.
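
    The computational advantage of exponential kernels can be seen in a one-dimensional collapsed-cone sketch: if the kernel along a cone axis decays exponentially with radiological distance, the ray's carried energy can be updated recursively voxel by voxel instead of re-summing all upstream TERMA contributions. The attenuation and deposition coefficients below are illustrative placeholders, not fitted kernel parameters.

```python
# One cone axis of a collapsed-cone transport with an exponential kernel:
# carry energy forward with exponential attenuation, deposit a fraction locally.
import numpy as np

def collapsed_cone_1d(terma, rho, dr=0.1, a=2.0, b=0.05):
    """terma, rho: 1D float arrays along the cone axis; a, b: toy kernel coefficients."""
    dose = np.zeros(len(terma))
    carried = 0.0
    for i in range(len(terma)):
        step = rho[i] * dr                      # radiological path length through this voxel
        carried = carried * np.exp(-a * step) + terma[i] * step
        dose[i] += b * carried                  # fraction of carried energy deposited locally
    return dose
```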

  16. Ultrafast convolution/superposition using tabulated and exponential kernels on GPU.

    PubMed

    Chen, Quan; Chen, Mingli; Lu, Weiguo

    2011-03-01

    Collapsed-cone convolution/superposition (CCCS) dose calculation is the workhorse for IMRT dose calculation. The authors present a novel algorithm for computing CCCS dose on the modern graphics processing unit (GPU). The GPU algorithm includes a novel TERMA calculation that has no write conflicts and has linear computational complexity. The CCCS algorithm uses either tabulated or exponential cumulative-cumulative kernels (CCKs) as reported in the literature. The authors have demonstrated that the use of exponential kernels can reduce the computational complexity by an order of a dimension while achieving excellent accuracy. Special attention is paid to the unique architecture of the GPU, especially the memory access pattern, which increases performance by more than tenfold. As a result, the tabulated kernel implementation on the GPU is two to three times faster than other GPU implementations reported in the literature. The CCCS implementation showed significant speedup on the GPU over a single-core CPU. With tabulated CCKs, speedups as high as 70 are observed; with exponential CCKs, speedups as high as 90 are observed. Overall, the GPU algorithm using exponential CCKs is 1000-3000 times faster than a highly optimized single-threaded CPU implementation using tabulated CCKs, while the dose differences are within 0.5% and 0.5 mm. This ultrafast CCCS algorithm will allow many time-sensitive applications to use accurate dose calculation.

  17. Comparative study of convolution, superposition, and fast superposition algorithms in conventional radiotherapy, three-dimensional conformal radiotherapy, and intensity modulated radiotherapy techniques for various sites, done on CMS XIO planning system

    PubMed Central

    Muralidhar, K. R.; Murthy, Narayana P.; Raju, Alluri Krishnam; Sresty, NVNM

    2009-01-01

    The aim of this study is to compare the dosimetry results obtained using the Convolution, Superposition and Fast Superposition algorithms in Conventional Radiotherapy, Three-Dimensional Conformal Radiotherapy (3D-CRT), and Intensity Modulated Radiotherapy (IMRT) for different sites, and to study the suitability of the algorithms with respect to site and technique. For each of the Conventional, 3D-CRT, and IMRT techniques, four different sites, namely lung, esophagus, prostate, and hypopharynx, were analyzed. Treatment plans were created for a 6 MV photon beam using the CMS XiO (Computerized Medical Systems, St. Louis, MO) treatment planning system. The maximum percentage of variation recorded between algorithms was 3.7%, in the case of Ca. lung for the IMRT technique. Statistical analysis was performed by comparing the mean relative difference, Conformity Index, and Homogeneity Index for target structures. The fast superposition algorithm showed excellent results for the lung and esophagus cases for all techniques. For the prostate, the superposition algorithm showed better results in all techniques. In the conventional case of the hypopharynx, the convolution algorithm was good. In the cases of Ca. lung, Ca. prostate, Ca. esophagus, and Ca. hypopharynx, the OARs received higher doses with the superposition algorithm; these doses progressively decreased for the fast superposition and convolution algorithms, respectively. According to this study, the dosimetric results obtained using different algorithms showed significant variation, and therefore care has to be taken while evaluating treatment plans. The choice of a dose calculation algorithm may in certain cases even influence clinical results. PMID:20126561

  18. Fluence-convolution broad-beam (FCBB) dose calculation.

    PubMed

    Lu, Weiguo; Chen, Mingli

    2010-12-07

    IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with a small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into the central axis (CAX) component and the lateral spread function (LSF) and taking the beam's eye view (BEV), we established a non-voxel and non-beamlet-based dose calculation formula. Both the LSF and CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with the LSF followed by ray tracing based on the CAX lookup table with radiological distance and divergence correction, resulting in a complexity of O(N³) both spatially and temporally. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited for calculating the iteration dose during IMRT optimization.
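
    The decomposition described above can be sketched compactly: a 2D beam's-eye-view convolution of the fluence map with the lateral spread function (LSF), scaled along depth by a central-axis (CAX) lookup. The Gaussian LSF and the two-column CAX table below are stand-ins for the CCCS-commissioned data, and the divergence and radiological-distance corrections are omitted.

```python
# FCBB-style dose: dose[z, y, x] ~ (fluence convolved with LSF)[y, x] * CAX(depth[z]).
# The Gaussian LSF and CAX table are placeholders for commissioned data.
import numpy as np
from scipy.ndimage import gaussian_filter

def fcbb_dose(fluence_map, cax_table, depths, sigma_lsf=2.0):
    """cax_table: Nx2 array of (depth, relative dose) with increasing depth."""
    lateral = gaussian_filter(fluence_map, sigma=sigma_lsf)        # 2D BEV convolution
    cax = np.interp(depths, cax_table[:, 0], cax_table[:, 1])      # depth-dose lookup
    return cax[:, None, None] * lateral[None, :, :]
```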

  19. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we

  20. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    SciTech Connect

    Huang, Jessie Y.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.; Eklund, David; Childress, Nathan L.

    2013-12-15

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found

  1. A convolution/superposition method using primary and scatter dose kernels formed for energy bins of X-ray spectra reconstructed as a function of off-axis distance: a theoretical study on 10-MV X-ray dose calculations in thorax-like phantoms.

    PubMed

    Iwasaki, Akira; Kimura, Shigenobu; Sutoh, Kohji; Kamimura, Kazuo; Sasamori, Makoto; Komai, Fumio; Seino, Morio; Terashima, Singo; Kubota, Mamoru; Hirota, Junichi; Hosokawa, Yoichiro

    2011-07-01

    A convolution/superposition method is proposed for use with primary and scatter dose kernels formed for energy bins of X-ray spectra reconstructed as a function of off-axis distance. It should be noted that the number of energy bins is usually about ten, and that the reconstructed X-ray spectra can reasonably be applied to media with a wide range of effective Z numbers, ranging from water to lead. The study was carried out for 10-MV X-ray doses in water and thorax-like phantoms with the use of open-jaw-collimated fields. The dose calculations were made separately for primary, scatter, and electron contamination dose components, for which we used two extended radiation sources: one was on the X-ray target and the other on the flattening filter. To calculate the in-air beam intensities at points on the isocenter plane for a given jaw-collimated field, we introduced an in-air output factor (OPF_in-air) expressed as the product of the off-center jaw-collimator scatter factor (off-center S_c), the source off-center ratio factor (OCR_source), and the jaw-collimator radiation reflection factor (RRF_c). For more accurate dose calculations, we introduce an electron spread fluctuation factor (F_fwd) to take into account the angular and spatial spread fluctuation for electrons traveling through different media.

  2. An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm

    NASA Astrophysics Data System (ADS)

    Jacques, Robert; McNutt, Todd

    2014-03-01

    Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near-Monte-Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing the average mean error from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e., < 0.1 g/cm³) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU-based algorithm, reaching the accuracy levels of Monte Carlo based methods with performance of a few tenths of a second per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.

  3. On the use of a convolution-superposition algorithm for plan checking in lung stereotactic body radiation therapy.

    PubMed

    Hardcastle, Nicholas; Oborn, Bradley M; Haworth, Annette

    2016-09-08

    Stereotactic body radiation therapy (SBRT) aims to deliver a highly conformal ablative dose to a small target. Dosimetric verification of SBRT for lung tumors presents a challenge due to heterogeneities, moving targets, and small fields. Recent software (M3D) designed for dosimetric verification of lung SBRT treatment plans using an advanced convolution-superposition algorithm was evaluated. Ten lung SBRT patients covering a range of tumor volumes were selected. 3D CRT plans were created using the XiO treatment planning system (TPS) with the superposition algorithm. Dose was recalculated in the Eclipse TPS using the AAA algorithm, in the M3D verification software using the collapsed-cone convolution algorithm, and with in-house Monte Carlo (MC). Target point doses were calculated with RadCalc software. Near-maximum, median, and near-minimum target doses, conformity indices, and lung doses were compared with MC as the reference calculation. M3D 3D gamma passing rates were compared with those of XiO and Eclipse. A Wilcoxon signed-rank test was used to compare each calculation method with XiO, with a threshold of significance of p < 0.05. M3D and RadCalc point dose calculations were greater than MC by up to 7.7% and 13.1%, respectively, with M3D being statistically significant (s.s.). AAA and XiO calculated point doses were less than MC by 11.3% and 5.2%, respectively (AAA s.s.). Median, near-minimum, and near-maximum target doses were less than MC when calculated with AAA and XiO (all s.s.). Near-maximum and median target doses were higher with M3D compared with MC (s.s.), but there was no difference in near-minimum M3D doses compared with MC. M3D-calculated ipsilateral lung V20 Gy and V5 Gy were greater than those calculated with MC (s.s.); AAA- and XiO-calculated V20 Gy was lower than that calculated with MC, but not statistically different from MC for V5 Gy. Nine of the 10 plans achieved M3D gamma passing rates greater than 95% and 80% for 5%/1 mm and 3%/1 mm criteria, respectively. M3

  4. Commissioning and verification of the collapsed cone convolution superposition algorithm for SBRT delivery using flattening filter-free beams.

    PubMed

    Foster, Ryan D; Speiser, Michael P; Solberg, Timothy D

    2014-03-06

    Linacs equipped with flattening filter-free (FFF) megavoltage photon beams are now commercially available. However, the commissioning of FFF beams poses challenges that are not shared with traditional flattened megavoltage X-ray beams. The planning system must model a beam that is peaked in the center and has an energy spectrum that is softer than the flattened beam. Removing the flattening filter also increases the maximum possible dose rates from 600 MU/min up to 2400 MU/min in some cases; this increase in dose rate affects the recombination correction factor, P(ion), used during absolute dose calibration with ionization chambers. We present the first reported experience of commissioning, verification, and clinical use of the collapsed cone convolution superposition (CCCS) dose calculation algorithm for commercially available flattening filter-free beams. Our commissioning data are compared to previously reported measurements and Monte Carlo studies of FFF beams. Commissioning was verified by making point-dose measurement of test plans, irradiating the RPC lung phantom, and performing patient-specific QA. The average point-dose difference between calculations and measurements of all test plans and all patient specific QA measurements is 0.80%, and the RPC phantom absolute dose differences for the two thermoluminescent dosimeters (TLDs) in the phantom planning target volume (PTV) were 1% and 2%, respectively. One hundred percent (100%) of points in the RPC phantom films passed the RPC gamma criteria of 5% and 5 mm. Our results show that the CCCS algorithm can accurately model FFF beams and calculate SBRT dose distributions using those beams.

  5. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    SciTech Connect

    Neylon, J. Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.

    2014-10-15

    , respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. Results: The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. Conclusions: The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.

  6. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.

    PubMed

    Neylon, J; Sheng, K; Yu, V; Chen, Q; Low, D A; Kupelian, P; Santhanam, A

    2014-10-01

    . Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.

  7. Stochastic versus deterministic kernel-based superposition approaches for dose calculation of intensity-modulated arcs

    NASA Astrophysics Data System (ADS)

    Tang, Grace; Earl, Matthew A.; Luan, Shuang; Wang, Chao; Cao, Daliang; Yu, Cedric X.; Naqvi, Shahid A.

    2008-09-01

    Dose calculations for radiation arc therapy are traditionally performed by approximating continuous delivery arcs with multiple static beams. For 3D conformal arc treatments, the shape and weight variation per degree is usually small enough to allow arcs to be approximated by static beams separated by 5°-10°. But with intensity-modulated arc therapy (IMAT), the variation in shape and dose per degree can be large enough to require a finer angular spacing. With the increase in the number of beams, a deterministic dose calculation method, such as collapsed-cone convolution/superposition, will require proportionally longer computational times, which may not be practical clinically. We propose to use a homegrown Monte Carlo kernel-superposition technique (MCKS) to compute doses for rotational delivery. The IMAT plans were generated with 36 static beams, which were subsequently interpolated into finer angular intervals for dose calculation to mimic the continuous arc delivery. Since MCKS uses random sampling of photons, the dose computation time only increased insignificantly for the interpolated-static-beam plans that may involve up to 720 beams. Ten past IMRT cases were selected for this study. Each case took approximately 15-30 min to compute on a single CPU running Mac OS X using the MCKS method. The need for a finer beam spacing is dictated by how fast the beam weights and aperture shapes change between the adjacent static planning beam angles. MCKS, however, obviates the concern by allowing hundreds of beams to be calculated in practically the same time as for a few beams. For more than 43 beams, MCKS usually takes less CPU time than the collapsed-cone algorithm used by the Pinnacle3 planning system.

  8. A fluence-convolution method to calculate radiation therapy dose distributions that incorporate random set-up error

    NASA Astrophysics Data System (ADS)

    Beckham, W. A.; Keall, P. J.; Siebers, J. V.

    2002-10-01

    The International Commission on Radiation Units and Measurements Report 62 (ICRU 1999) introduced the concept of expanding the clinical target volume (CTV) to form the planning target volume by a two-step process. The first step is adding a clinically definable internal margin, which produces an internal target volume that accounts for the size, shape and position of the CTV in relation to anatomical reference points. The second is the use of a set-up margin (SM) that incorporates the uncertainties of patient beam positioning, i.e. systematic and random set-up errors. We propose to replace the random set-up error component of the SM by explicitly incorporating the random set-up error into the dose-calculation model by convolving the incident photon beam fluence with a Gaussian set-up error kernel. This fluence-convolution method was implemented into a Monte Carlo (MC) based treatment-planning system. Also implemented for comparison purposes was a dose-matrix-convolution algorithm similar to that described by Leong (1987 Phys. Med. Biol. 32 327-34). Fluence and dose-matrix-convolution agree in homogeneous media. However, for the heterogeneous phantom calculations, discrepancies of up to 5% in the dose profiles were observed with a 0.4 cm set-up error value. Fluence-convolution mimics reality more closely, as dose perturbations at interfaces are correctly predicted (Wang et al 1999 Med. Phys. 26 2626-34, Sauer 1995 Med. Phys. 22 1685-90). Fluence-convolution effectively decouples the treatment beams from the patient, and more closely resembles the reality of particle fluence distributions for many individual beam-patient set-ups. However, dose-matrix-convolution reduces the random statistical noise in MC calculations. Fluence-convolution can easily be applied to convolution/superposition based dose-calculation algorithms.
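
    A minimal sketch of the proposed replacement of the random set-up margin is shown below: the incident fluence is convolved with a Gaussian whose standard deviation equals the random set-up error before the dose calculation proper. The field size, pixel spacing, and the 4 mm standard deviation are illustrative values only.

```python
# Fluence-convolution idea: blur the incident fluence with a Gaussian set-up error
# kernel instead of enlarging the PTV margin for random set-up error.
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_fluence(fluence, sigma_mm, pixel_mm=1.0):
    """Convolve the incident photon fluence with a Gaussian set-up error kernel."""
    return gaussian_filter(fluence, sigma=sigma_mm / pixel_mm)

open_field = np.zeros((101, 101))
open_field[30:70, 30:70] = 1.0
blurred = blur_fluence(open_field, sigma_mm=4.0)   # e.g. a 4 mm random set-up SD
```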

  9. FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory

    NASA Astrophysics Data System (ADS)

    McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A.

    2016-09-01

    We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
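
    The central trick can be illustrated independently of the public code: on a log-spaced k grid, an FFT in ln k expresses the power spectrum as a superposition of complex power laws, for which the mode-coupling integrals then have closed forms. The toy spectrum and bias exponent below are arbitrary choices used only to show the decomposition and its exact reconstruction; this is not the FAST-PT implementation itself.

```python
# Decompose a toy power spectrum into complex power laws via an FFT in ln k:
# P(k) = sum_m c_m * (k / k0)^(i * eta_m) * k^nu, then verify the reconstruction.
import numpy as np

k = np.logspace(-3, 1, 256)                       # log-spaced wavenumbers
P = k / (1.0 + (k / 0.1) ** 2) ** 2               # toy input power spectrum
nu = -2.0                                         # bias exponent (illustrative choice)

dlnk = np.log(k[1] / k[0])
c = np.fft.fft(P * k ** (-nu)) / k.size           # coefficients of the log-k FFT
eta = 2.0 * np.pi * np.fft.fftfreq(k.size, d=dlnk)

P_rec = np.real(sum(cm * (k / k[0]) ** (1j * em) for cm, em in zip(c, eta))) * k ** nu
assert np.allclose(P, P_rec)                      # exact up to floating-point error
```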

  10. Fast convolution method and its application in mask optimization for intensity calculation using basis expansion.

    PubMed

    Sun, Yaping; Zhang, Jinyu; Wang, Yan; Yu, Zhiping

    2014-12-01

    Finer grid representation is required for a more accurate description of mask patterns in inverse lithography techniques, thus resulting in a large-size mask representation and heavy computational cost. To mitigate the computation problem caused by intensive convolutions in mask optimization, a new method called convolution using basis expansion (CBE) is discussed in this paper. Matrices defined on the fine grid are projected onto the coarse grid under a basis matrix set. The new matrices formed by the expansion coefficients are used to perform convolution on the coarse grid. The convolution on the fine grid can then be approximated by the sum of a few convolutions on the coarse grid followed by an interpolation procedure. The CBE method is verified by random matrix convolutions and intensity calculation in lithography simulation. Results show that the CBE method achieves similar image quality with a significant running speed enhancement compared with the traditional convolution method.

  11. Calculating Interaction Energies Using First Principle Theories: Consideration of Basis Set Superposition Error and Fragment Relaxation

    ERIC Educational Resources Information Center

    Bowen, J. Philip; Sorensen, Jennifer B.; Kirschner, Karl N.

    2007-01-01

    The analysis explains the basis set superposition error (BSSE) and fragment relaxation involved in calculating the interaction energies using various first principle theories. Interacting the correlated fragment and increasing the size of the basis set can help in decreasing the BSSE to a great extent.
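
    For reference, the standard Boys-Bernardi counterpoise form of the quantities discussed here is given below in generic notation; the symbols and sign conventions are assumptions of this sketch and may differ from the article's.

```latex
% Counterpoise-corrected interaction energy of a dimer AB: each monomer energy is
% evaluated in the full dimer (alpha + beta) basis at the dimer geometry.
\Delta E_{\mathrm{int}}^{\mathrm{CP}}
  = E_{AB}^{\alpha\beta}(AB) - E_{A}^{\alpha\beta}(AB) - E_{B}^{\alpha\beta}(AB)
% Fragment relaxation adds the cost of distorting each monomer from its equilibrium
% geometry to the geometry it adopts in the complex, in its own basis.
\Delta E_{\mathrm{relax}}
  = \bigl[E_{A}^{\alpha}(AB) - E_{A}^{\alpha}(A)\bigr]
  + \bigl[E_{B}^{\beta}(AB) - E_{B}^{\beta}(B)\bigr]
```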

  12. Final Aperture Superposition Technique applied to fast calculation of electron output factors and depth dose curves.

    PubMed

    Faddegon, B A; Villarreal-Barajas, J E

    2005-11-01

    The Final Aperture Superposition Technique (FAST) is described and applied to accurate, near instantaneous calculation of the relative output factor (ROF) and central axis percentage depth dose curve (PDD) for clinical electron beams used in radiotherapy. FAST is based on precalculation of dose at select points for the two extreme situations of a fully open final aperture and a final aperture with no opening (fully shielded). This technique is different from conventional superposition of dose deposition kernels: The precalculated dose is differential in position of the electron or photon at the downstream surface of the insert. The calculation for a particular aperture (x-ray jaws or MLC, insert in electron applicator) is done with superposition of the precalculated dose data, using the open field data over the open part of the aperture and the fully shielded data over the remainder. The calculation takes explicit account of all interactions in the shielded region of the aperture except the collimator effect: Particles that pass from the open part into the shielded part, or vice versa. For the clinical demonstration, FAST was compared to full Monte Carlo simulation of 10 x 10, 2.5 x 2.5, and 2 x 8 cm2 inserts. Dose was calculated to 0.5% precision in 0.4 x 0.4 x 0.2 cm3 voxels, spaced at 0.2 cm depth intervals along the central axis, using detailed Monte Carlo simulation of the treatment head of a commercial linear accelerator for six different electron beams with energies of 6-21 MeV. Each simulation took several hours on a personal computer with a 1.7 GHz processor. The calculation for the individual inserts, done with superposition, was completed in under a second on the same PC. Since simulations for the precalculation are only performed once, higher precision and resolution can be obtained without increasing the calculation time for individual inserts. Fully shielded contributions were largest for small fields and high beam energy, at the surface, reaching a
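
    A minimal sketch of the superposition step described above, with hypothetical precalculated arrays: dose contributions differential in particle position at the insert plane are combined using open-field data over the open part of the aperture and fully shielded data elsewhere (the collimator effect at the edge is ignored, as in FAST).

        import numpy as np

        # Hypothetical precalculated contributions, differential in the particle position
        # (ix, iy) at the downstream surface of the insert, for points along the axis (iz).
        n_xy, n_z = 64, 50
        dose_open = np.random.rand(n_xy, n_xy, n_z)       # fully open final aperture
        dose_shielded = np.random.rand(n_xy, n_xy, n_z)   # fully shielded final aperture

        # Aperture mask for a particular insert: True where the aperture is open.
        open_mask = np.zeros((n_xy, n_xy), dtype=bool)
        open_mask[16:48, 16:48] = True                    # e.g. a square cutout

        # Superposition: open-field data over the open part, shielded data over the rest.
        depth_dose = (dose_open[open_mask].sum(axis=0) +
                      dose_shielded[~open_mask].sum(axis=0))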

  13. GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation

    PubMed Central

    Zhou, Bo; Yu, Cedric X.; Chen, Danny Z.; Hu, X. Sharon

    2010-01-01

    Purpose: Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans as emerging advanced radiation therapy technologies are exerting ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation so as to make the method practical for day-to-day usage. Methods: Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm is modified to match the GPU hardware architecture. The performance of the authors’ GPU-based implementation on an Nvidia GTX260 card is compared to a multithreaded software implementation on a quad-core system. Results: A speedup in the range of 6.7–11.4× is observed for the clinical cases used. The less than 2% statistical fluctuation also indicates that the accuracy of the authors’ GPU-based implementation is in good agreement with the results from the quad-core CPU implementation. Conclusions: This work shows that GPU is a feasible and cost-efficient solution compared to other alternatives such as using cluster machines or field-programmable gate arrays for satisfying the increasing demands on computation speed and accuracy of dose calculation. However, there are also inherent limitations to using GPUs for accelerating MC-type applications, which are analyzed in detail in this article. PMID:21158271

  14. FAST-PT: Convolution integrals in cosmological perturbation theory calculator

    NASA Astrophysics Data System (ADS)

    McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A.

    2016-03-01

    FAST-PT calculates 1-loop corrections to the matter power spectrum in cosmology. The code utilizes Fourier methods combined with analytic expressions to reduce the computation time so that it scales as N log N, where N is the number of grid points in the input linear power spectrum. FAST-PT is extremely fast, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation.

  15. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning

    SciTech Connect

    Wu, Vincent W.C.; Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y.

    2013-07-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) by the Eclipse treatment planning system and multigrid superposition (MGS) by the XiO treatment planning system are two commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using their respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air), were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than those of AAA (3.7%±2.5) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
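
    A sketch of the mean absolute percentage error used in such comparisons, with Monte Carlo point doses as the reference; the numbers are placeholders, not data from the study.

        import numpy as np

        def mape(dose_algo, dose_mc):
            """Mean absolute percentage error of an algorithm relative to Monte Carlo."""
            dose_algo = np.asarray(dose_algo, dtype=float)
            dose_mc = np.asarray(dose_mc, dtype=float)
            return 100.0 * np.mean(np.abs(dose_algo - dose_mc) / dose_mc)

        # Example: doses (Gy) at three reference points for one hypothetical plan.
        print(mape([1.98, 2.05, 1.90], [2.00, 2.00, 2.00]))   # about 2.8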

  16. Collapsed cone convolution of radiant energy for photon dose calculation in heterogeneous media.

    PubMed

    Ahnesjö, A

    1989-01-01

    A method for photon beam dose calculations is described. The primary photon beam is raytraced through the patient, and the distribution of total radiant energy released into the patient is calculated. Polyenergetic energy deposition kernels are calculated from the spectrum of the beam, using a database of monoenergetic kernels. It is shown that the polyenergetic kernels can be analytically described with high precision by (A exp(-ar) + B exp(-br))/r², where A, a, B, and b depend on the angle with respect to the impinging photons and the accelerating potential, and r is the radial distance. Numerical values of A, a, B, and b are derived and used to convolve energy deposition kernels with the total energy released per unit mass (TERMA) to yield dose distributions. The convolution is facilitated by the introduction of the collapsed cone approximation. In this approximation, all energy released into coaxial cones of equal solid angle, from volume elements on the cone axis, is rectilinearly transported, attenuated, and deposited in elements on the axis. Scaling of the kernels is implicitly done during the convolution procedure to fully account for inhomogeneities present in the irradiated volume. The number of computational operations needed to compute the dose with the method is proportional to the number of calculation points. The method is tested for five accelerating potentials: 4, 6, 10, 15, and 24 MV, and applied to two geometries: one is a stack of slabs of tissue media, and the other is a mediastinum-like phantom of cork and water. In these geometries, the EGS4 Monte Carlo system has been used to generate reference dose distributions with which the dose computed with the collapsed cone convolution method is compared. Generally, the agreement between the methods is excellent. Deviations are observed in situations of lateral charged particle disequilibrium in low-density media, but the result is superior compared to that of the generalized Batho method.
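
    The analytic kernel parametrization quoted above is straightforward to evaluate; the coefficient values below are placeholders, since A, a, B, and b depend on the scattering angle and the accelerating potential.

        import numpy as np

        def point_kernel(r_cm, A, a, B, b):
            """Ahnesjo-type polyenergetic kernel h(r) = (A*exp(-a*r) + B*exp(-b*r)) / r**2
            for one angular sector (coefficient values are illustrative, not fitted data)."""
            r = np.asarray(r_cm, dtype=float)
            return (A * np.exp(-a * r) + B * np.exp(-b * r)) / r**2

        r = np.linspace(0.1, 10.0, 100)        # radial distance in cm (avoid r = 0)
        h = point_kernel(r, A=1.0, a=5.0, B=0.01, b=0.3)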

  17. Influence of the superposition approximation on calculated effective dose rates from galactic cosmic rays at aerospace-related altitudes

    NASA Astrophysics Data System (ADS)

    Copeland, Kyle

    2015-07-01

    The superposition approximation was commonly employed in atmospheric nuclear transport modeling until recent years and is incorporated into flight dose calculation codes such as CARI-6 and EPCARD. The useful altitude range for this approximation is investigated using Monte Carlo transport techniques. CARI-7A simulates atmospheric radiation transport of elements H-Fe using a database of precalculated galactic cosmic radiation showers calculated with MCNPX 2.7.0 and is employed here to investigate the influence of the superposition approximation on effective dose rates, relative to full nuclear transport of galactic cosmic ray primary ions. Superposition is found to produce results less than 10% different from nuclear transport at current commercial and business aviation altitudes while underestimating dose rates at higher altitudes. The underestimate sometimes exceeds 20% at approximately 23 km and exceeds 40% at 50 km. Thus, programs employing this approximation should not be used to estimate doses or dose rates for high-altitude portions of the commercial space and near-space manned flights that are expected to begin soon.

  18. Superposition technique for MOV-protected series capacitors in short-circuit calculations

    SciTech Connect

    Mahseredjian, J.; Lagace, P.J.; Lefebvre, S.; Chartrand, A.

    1995-07-01

    A new method for modeling series capacitors protected by metal oxide varistors is developed using superposition on sequence networks. This technique is incorporated in a short-circuit program within an iterative process which allows the consideration of the nonlinear characteristic of metal oxide varistor (MOV)-protected series capacitors. The iterative process is rendered dynamic by evaluating the voltage and current of the series capacitors and by adjusting the state of the varistors at each iteration without reformulating the network nodal admittance matrix. The 60 Hz impedance characteristic of the nonlinear MOV-protected series capacitors is simulated by fictive shunt current sources across varistors. Results show that the proposed procedure can adequately and efficiently model MOV-protected series capacitors in a standard short-circuit program.

  19. Iron-oxygen vacancy defect centers in PbTiO3: Newman superposition model analysis and density functional calculations

    NASA Astrophysics Data System (ADS)

    Meštrić, H.; Eichel, R.-A.; Kloss, T.; Dinse, K.-P.; Laubach, So.; Laubach, St.; Schmidt, P. C.; Schönau, K. A.; Knapp, M.; Ehrenberg, H.

    2005-04-01

    The Fe3+ center in ferroelectric PbTiO3 together with an oxygen vacancy forms a charged defect associate, oriented along the crystallographic c axis. Its microscopic structure has been analyzed in detail by comparing results from a semiempirical Newman superposition model analysis based on fine-structure data and from calculations using density functional theory. Both methods give evidence for a substitution of Fe3+ for Ti4+ as an acceptor center. The position of the iron ion in the ferroelectric phase is found to be similar to the B site in the paraelectric phase. Partial charge compensation is locally provided by a directly coordinated oxygen vacancy. Using high-resolution synchrotron powder diffraction, it was verified that lead titanate remains tetragonal down to 12 K, exhibiting a c/a ratio of 1.0721.

  20. SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU

    SciTech Connect

    Moriya, S; Sato, M; Tachibana, H

    2015-06-15

    Purpose: The calculation time is a trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on the graphics processing unit (GPU). Methods: The calculation was performed on the AMD graphics hardware of Dual FirePro D700, and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation process was separated into the TERMA and KERMA steps, and the dose deposited at the coordinate (x, y, z) was determined in the process. In the dose calculation running on the central processing unit (CPU) of Intel Xeon E5, the calculation loops were performed for all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and multi-thread computation was done. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150³ voxels (2 mm calculation grid), and the calculation speed on the GPU was compared to that on the CPU, along with the accuracy of the PDD. Results: The calculation times for the GPU and the CPU were 3.3 sec and 4.4 hours, respectively. The calculation speed for the GPU was 4800 times faster than that for the CPU. The PDD curve for the GPU matched that for the CPU perfectly. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in terms of calculation time and may be more accurate in inhomogeneous regions. Intensity modulated arc therapy needs dose calculations for different gantry angles at many control points. Thus, it would be more practical to use a coarser kernel spacing if the calculation remains faster while keeping accuracy similar to that of a current treatment planning system.
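
    In a homogeneous phantom the TERMA/kernel split described above reduces to a 3D convolution; a CPU reference of that step (not the authors' GPU code) can be written with SciPy on a deliberately small grid, with all values placeholders.

        import numpy as np
        from scipy.signal import fftconvolve

        # Toy TERMA distribution in a homogeneous water phantom (placeholder values).
        terma = np.zeros((60, 60, 60))
        terma[25:35, 25:35, :40] = 1.0              # a small beam, arbitrary units

        # Isotropic point-spread ("ideal") energy-deposition kernel, also a placeholder.
        z, y, x = np.mgrid[-10:11, -10:11, -10:11]
        r = np.sqrt(x**2 + y**2 + z**2) + 0.5
        kernel = np.exp(-r / 2.0) / r**2
        kernel /= kernel.sum()

        # Dose is the TERMA convolved with the energy-deposition kernel.
        dose = fftconvolve(terma, kernel, mode='same')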

  1. Dissociative electron transfer in polychlorinated aromatics. Reduction potentials from convolution analysis and quantum chemical calculations.

    PubMed

    Romańczyk, Piotr P; Rotko, Grzegorz; Kurek, Stefan S

    2016-08-10

    Formal potentials of the first reduction leading to dechlorination in dimethylformamide were obtained from convolution analysis of voltammetric data and confirmed by quantum chemical calculations for a series of polychlorinated benzenes: hexachlorobenzene (-2.02 V vs. Fc(+)/Fc), pentachloroanisole (-2.14 V), and 2,4-dichlorophenoxy- and 2,4,5-trichlorophenoxyacetic acids (-2.35 V and -2.34 V, respectively). The key parameters required to calculate the reduction potential, electron affinity and/or C-Cl bond dissociation energy, were computed at both DFT-D and CCSD(T)-F12 levels. Comparison of the obtained gas-phase energies and redox potentials with experiment enabled us to verify the relative energetics and the performance of various implicit solvent models. Good agreement with the experiment was achieved for redox potentials computed at the DFT-D level, but only for the stepwise mechanism owing to the error compensation. For the concerted electron transfer/C-Cl bond cleavage process, the application of a high-level coupled cluster method is required. Quantum chemical calculations have also demonstrated the significant role of the π*ring and σ*C-Cl orbital mixing. It brings about the stabilisation of the non-planar, C2v-symmetric C6Cl6˙(-) radical anion, explains the experimentally observed low energy barrier and the transfer coefficient close to 0.5 for C6Cl5OCH3 in an electron transfer process followed by immediate C-Cl bond cleavage in solution, and accounts for an increase in the probability of dechlorination of di- and trichlorophenoxyacetic acids due to substantial population of the vibrationally excited states corresponding to the out-of-plane C-Cl bending at ambient temperatures.

  2. GPU-Q-J, a fast method for calculating root mean square deviation (RMSD) after optimal superposition.

    PubMed

    Hung, Ling-Hong; Guerquin, Michal; Samudrala, Ram

    2011-04-01

    Calculation of the root mean square deviation (RMSD) between the atomic coordinates of two optimally superposed structures is a basic component of structural comparison techniques. We describe a quaternion-based method, GPU-Q-J, that is stable with single precision calculations and suitable for graphics processor units (GPUs). The application was implemented on an ATI 4770 graphics card in C/C++ and Brook+ in Linux, where it was 260 to 760 times faster than existing unoptimized CPU methods. Source code is available from the Compbio website http://software.compbio.washington.edu/misc/downloads/st_gpu_fit/ or from the author LHH. The Nutritious Rice for the World Project (NRW) on World Community Grid predicted, de novo, the structures of over 62,000 small proteins and protein domains, returning a total of 10 billion candidate structures. Clustering ensembles of structures on this scale requires calculation of large similarity matrices consisting of RMSDs between each pair of structures in the set. As a real-world test, we calculated the matrices for 6 different ensembles from NRW. The GPU method was 260 times faster than the fastest existing CPU-based method and over 500 times faster than the method that had been previously used. GPU-Q-J is a significant advance over previous CPU methods. It relieves a major bottleneck in the clustering of large numbers of structures for NRW. It also has applications in structure comparison methods that involve multiple superposition and RMSD determination steps, particularly when such methods are applied on a proteome- and genome-wide scale.
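
    GPU-Q-J uses a quaternion formulation; an equivalent single-pair CPU reference in NumPy, using the SVD-based Kabsch superposition rather than quaternions, is sketched below (the coordinates are random stand-ins for two decoy structures).

        import numpy as np

        def rmsd_after_superposition(P, Q):
            """RMSD between two (N, 3) coordinate arrays after optimal rigid-body
            superposition (Kabsch algorithm via SVD; same result as the quaternion
            method, different numerics)."""
            P = P - P.mean(axis=0)
            Q = Q - Q.mean(axis=0)
            U, S, Vt = np.linalg.svd(P.T @ Q)
            d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # rotation taking P onto Q
            return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))

        # Example with random coordinates standing in for two similar structures.
        P = np.random.rand(100, 3)
        Q = P + 0.05 * np.random.rand(100, 3)
        print(rmsd_after_superposition(P, Q))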

  3. Calculation of reflectance distribution using angular spectrum convolution in mesh-based computer generated hologram.

    PubMed

    Yeom, Han-Ju; Park, Jae-Hyeung

    2016-08-22

    We propose a method to obtain a computer-generated hologram that renders reflectance distributions of individual mesh surfaces of three-dimensional objects. Unlike previous methods, which find the phase distribution inside each mesh, the proposed method performs convolution of the angular spectrum of the mesh to obtain the desired reflectance distribution. Manipulation in the angular spectrum domain enables its application to fully analytic mesh-based computer-generated holograms, removing the necessity for resampling of the spatial frequency grid. It is also computationally inexpensive, as the convolution can be performed efficiently using the Fourier transform. In this paper, we present the principle, error analysis, simulation, and experimental verification results of the proposed method.

  4. Geometrical correction for the inter- and intramolecular basis set superposition error in periodic density functional theory calculations.

    PubMed

    Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan

    2013-09-26

    We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.

  5. Compressed convolution

    NASA Astrophysics Data System (ADS)

    Elsner, Franz; Wandelt, Benjamin D.

    2014-01-01

    We introduce the concept of compressed convolution, a technique to convolve a given data set with a large number of non-orthogonal kernels. In typical applications our technique drastically reduces the effective number of computations. The new method is applicable to convolutions with symmetric and asymmetric kernels and can be easily controlled for an optimal trade-off between speed and accuracy. It is based on linear compression of the collection of kernels into a small number of coefficients in an optimal eigenbasis. The final result can then be decompressed in constant time for each desired convolved output. The method is fully general and suitable for a wide variety of problems. We give explicit examples in the context of simulation challenges for upcoming multi-kilo-detector cosmic microwave background (CMB) missions. For a CMB experiment with detectors with similar beam properties, we demonstrate that the algorithm can decrease the costs of beam convolution by two to three orders of magnitude with negligible loss of accuracy. Likewise, it has the potential to allow the reduction of disk space required to store signal simulations by a similar amount. Applications in other areas of astrophysics and beyond are optimal searches for a large number of templates in noisy data, e.g. from a parametrized family of gravitational wave templates; or calculating convolutions with highly overcomplete wavelet dictionaries, e.g. in methods designed to uncover sparse signal representations.
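
    The kernel-compression idea can be demonstrated in a few lines: a family of similar kernels is compressed into a small eigenbasis with an SVD, the data are convolved once with each eigenkernel, and every original convolution is then recovered as a cheap linear combination; all shapes and kernel forms below are illustrative.

        import numpy as np
        from scipy.signal import fftconvolve

        # A family of 200 similar 1D kernels (e.g. slightly varying beams), length 65.
        x = np.linspace(-4, 4, 65)
        widths = np.linspace(0.8, 1.2, 200)
        kernels = np.array([np.exp(-x**2 / (2 * w**2)) for w in widths])

        # Compress the kernel collection into a small number of eigenkernels.
        n_eig = 4
        U, S, Vt = np.linalg.svd(kernels, full_matrices=False)
        eigenkernels = Vt[:n_eig]                  # optimal basis (rows)
        coeffs = kernels @ eigenkernels.T          # expansion coefficients, (200, n_eig)

        # Convolve the data with each eigenkernel once ...
        data = np.random.rand(4096)
        eig_conv = np.array([fftconvolve(data, ek, mode='same') for ek in eigenkernels])

        # ... then decompress: each of the 200 convolutions is a linear combination.
        conv_all = coeffs @ eig_conv               # shape (200, 4096)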

  6. A novel algorithm for the calculation of physical and biological irradiation quantities in scanned ion beam therapy: the beamlet superposition approach

    NASA Astrophysics Data System (ADS)

    Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.

    2016-01-01

    The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for ion irradiation outcomes computations, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable for the evaluation of dose, RBE and LET distributions. The weight function for the beamlets superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has successfully been tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmarking plans in simple geometries and clinical plans are shown to demonstrate the model capabilities.

  7. A novel algorithm for the calculation of physical and biological irradiation quantities in scanned ion beam therapy: the beamlet superposition approach.

    PubMed

    Russo, G; Attili, A; Battistoni, G; Bertrand, D; Bourhaleb, F; Cappucci, F; Ciocca, M; Mairani, A; Milian, F M; Molinelli, S; Morone, M C; Muraro, S; Orts, T; Patera, V; Sala, P; Schmitt, E; Vivaldo, G; Marchetto, F

    2016-01-07

    The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for ion irradiation outcomes computations, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable for the evaluation of dose, RBE and LET distributions. The weight function for the beamlets superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has successfully been tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmarking plans in simple geometries and clinical plans are shown to demonstrate the model capabilities.

  8. Clinical implications of different calculation algorithms in breast radiotherapy: a comparison between pencil beam and collapsed cone convolution.

    PubMed

    Cilla, S; Digesù, C; Macchia, G; Deodato, F; Sallustio, G; Piermattei, A; Morganti, A G

    2014-06-01

    This investigation focused on the clinical implications of the use of the Collapsed Cone Convolution algorithm (CCC) in breast radiotherapy and investigated the dosimetric differences with respect to the Pencil Beam Convolution algorithm (PBC). Fifteen breast treatment plans produced using the PBC algorithm were re-calculated using the CCC algorithm with the same MUs. In a second step, plans were re-optimized using the CCC algorithm with modification of wedges and beam weightings to achieve optimal coverage (CCCr plans). For each patient, dosimetric comparison was performed using the standard tangential technique (SWT) and a forward-planned IMRT technique (f-IMRT). The CCC algorithm showed significantly increased dose inhomogeneity. Mean and minimum PTV doses decreased by 1.4% and 2.8% (both techniques). Mean V95% decreased to 83.7% and 90.3%, respectively, for the SWT and f-IMRT. V95% was correlated with the ratio of PTV and lung volumes in the treatment field. The re-optimized CCCr plans achieved similar target coverage, but the high-dose volume was significantly larger (V107%: 7.6% vs 2.3% for SWT; 7.1% vs 2.1% for f-IMRT). There was a significant increase in the ipsilateral lung volume receiving low doses (V5 Gy: 31.3% vs 26.2% in SWT, 27.0% vs 23.0% in f-IMRT). MUs needed for PTV coverage in CCCr plans were higher by 3%. The PBC algorithm overestimated PTV coverage in terms of all important dosimetric metrics. If previous clinical experience is based on the use of the PBC model, discussion between medical physicists and radiation oncologists is especially needed to fully understand the dosimetric changes.

  9. Theoretical calculation on ICI reduction using digital coherent superposition of optical OFDM subcarrier pairs in the presence of laser phase noise.

    PubMed

    Yi, Xingwen; Xu, Bo; Zhang, Jing; Lin, Yun; Qiu, Kun

    2014-12-15

    Digital coherent superposition (DCS) of optical OFDM subcarrier pairs with Hermitian symmetry can reduce the inter-carrier-interference (ICI) noise resulting from phase noise. In this paper, we show two different implementations of DCS-OFDM that have the same performance in the presence of laser phase noise. We complete the theoretical calculation on ICI reduction by using the model of pure Wiener phase noise. By Taylor expansion of the ICI, we show that the ICI power is cancelled to the second order by DCS. The fourth-order term is further derived and is determined only by the ratio of laser linewidth to OFDM subcarrier symbol rate, which can greatly simplify the system design. Finally, we verify our theoretical calculations in simulations and use the analytical results to predict the system performance. DCS-OFDM is expected to be beneficial to certain optical fiber transmissions.

  10. A 3D superposition pencil beam dose calculation algorithm for a 60Co therapy unit and its verification by MC simulation

    NASA Astrophysics Data System (ADS)

    Koncek, O.; Krivonoska, J.

    2014-11-01

    The MCNP Monte Carlo code was used to simulate the collimating system of the 60Co therapy unit to calculate the primary and scattered photon fluences as well as the electron contamination incident on the isocentric plane as functions of the irradiation field size. Furthermore, a Monte Carlo simulation for the generation of polyenergetic pencil beam kernels (PBKs) was performed using the calculated photon and electron spectra. The PBK was analytically fitted to speed up the dose calculation using the convolution technique in homogeneous media. The quality of the PBK fit was verified by comparing the calculated and simulated 60Co broad beam profiles and depth dose curves in a homogeneous water medium. The inhomogeneity correction coefficients were derived from the PBK simulation of an inhomogeneous slab phantom consisting of various materials. The inhomogeneity calculation model is based on the changes in the PBK radial displacement and in the forward and backward electron scattering. The inhomogeneity correction is derived from the electron density values obtained from a complete 3D CT array and considers the different electron densities through which the pencil beam is propagated as well as the electron density values located between the interaction point and the point of dose deposition. Important aspects and details of the algorithm implementation are also described in this study.

  11. The Effect of the Basis-Set Superposition Error on the Calculation of Dispersion Interactions: A Test Study on the Neon Dimer.

    PubMed

    Monari, Antonio; Bendazzoli, Gian Luigi; Evangelisti, Stefano; Angeli, Celestino; Ben Amor, Nadia; Borini, Stefano; Maynau, Daniel; Rossi, Elda

    2007-03-01

    The dispersion interactions of the Ne2 dimer were studied using both the long-range perturbative and supramolecular approaches: for the long-range approach, full CI or string-truncated CI methods were used, while for the supramolecular treatments, the energy curves were computed by using configuration interaction with single and double excitations (CISD), coupled cluster with single and double excitations, and coupled cluster with single, double, and perturbative triple excitations. From the interatomic potential-energy curves obtained by the supramolecular approach, the C6 and C8 dispersion coefficients were computed via an interpolation scheme, and they were compared with the corresponding values obtained within the long-range perturbative treatment. We found that the lack of size consistency of the CISD approach makes this method completely useless for computing dispersion coefficients, even when the effect of the basis-set superposition error on the dimer curves is considered. The largest full-CI space we were able to use contains more than 1 billion symmetry-adapted Slater determinants, and it is, to our knowledge, the largest calculation of second-order properties ever done at the full-CI level. Finally, a new data format and libraries (Q5Cost) have been used in order to interface different codes used in the present study.

  12. Accounting for center-of-mass target motion using convolution methods in Monte Carlo-based dose calculations of the lung.

    PubMed

    Chetty, Indrin J; Rosu, Mihaela; McShan, Daniel L; Fraass, Benedick A; Balter, James M; Ten Haken, Randall K

    2004-04-01

    We have applied convolution methods to account for some of the effects of respiratory-induced motion in clinical treatment planning of the lung. The 3-D displacement of the GTV center-of-mass (COM) as determined from breath-hold exhale and inhale CT scans was used to approximate the breathing induced motion. The time-course of the GTV-COM was estimated using a probability distribution function (PDF) previously derived from diaphragmatic motion [Med. Phys. 26, 715-720 (1999)] but also used by others for treatment planning in the lung [Int. J. Radiat. Oncol., Biol., Phys. 53, 822-834 (2002); Med. Phys. 30, 1086-1095 (2003)]. We have implemented fluence and dose convolution methods within a Monte Carlo-based dose calculation system with the intent of comparing these approaches for planning in the lung. All treatment plans in this study have been calculated with Monte Carlo using the breath-hold exhale CT data sets. An analysis of treatment plans for 3 patients showed substantial differences (hot and cold spots consistently greater than +/- 15%) between the motion convolved and static treatment plans. As fluence convolution accounts for the spatial variance of the dose distribution in the presence of tissue inhomogeneities, the doses were approximately 5% greater than those calculated with dose convolution in the vicinity of the lung. DVH differences between the static, fluence and dose convolved distributions for the CTV were relatively small; however, larger differences were observed for the PTV. An investigation of the effect of the breathing PDF asymmetry on the motion convolved dose distributions showed that reducing the asymmetry resulted in increased hot and cold spots in the motion convolved distributions relative to the static cases. In particular, changing from an asymmetric breathing function to one that is symmetric results in an increase in the hot/cold spots of +/- 15% relative to the static plan. This increase is not unexpected considering that the

  13. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems

    NASA Astrophysics Data System (ADS)

    Kruse, Holger; Grimme, Stefan

    2012-04-01

    A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This geometrical counterpoise (gCP) denoted scheme depends only on the molecular geometry, i.e., no input from the electronic wave-function is required and hence is applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method's targets are small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30%, which proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability for biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme, estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which a relaxed rotational energy profile is also presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. The gCP-corrected B3LYP-D3/6-31G* model

  14. Dealiased convolutions for pseudospectral simulations

    NASA Astrophysics Data System (ADS)

    Roberts, Malcolm; Bowman, John C.

    2011-12-01

    Efficient algorithms have recently been developed for calculating dealiased linear convolution sums without the expense of conventional zero-padding or phase-shift techniques. For one-dimensional in-place convolutions, the memory requirements are identical with the zero-padding technique, with the important distinction that the additional work memory need not be contiguous with the input data. This decoupling of data and work arrays dramatically reduces the memory and computation time required to evaluate higher-dimensional in-place convolutions. The memory savings are achieved by computing the in-place Fourier transform of the data in blocks, rather than all at once. The technique also allows one to dealias the n-ary convolutions that arise on Fourier transforming cubic and higher powers. Implicitly dealiased convolutions can be built on top of state-of-the-art adaptive fast Fourier transform libraries like FFTW. Vectorized multidimensional implementations for the complex and centered Hermitian (pseudospectral) cases have already been implemented in the open-source software FFTW++. With the advent of this library, writing a high-performance dealiased pseudospectral code for solving nonlinear partial differential equations has now become a relatively straightforward exercise. New theoretical estimates of computational complexity and memory use are provided, including corrected timing results for 3D pruned convolutions and further consideration of higher-order convolutions.
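
    For orientation, the conventional explicitly zero-padded (2x) dealiased linear convolution that the implicit method improves on can be written as follows; the paper's technique produces the same result without the contiguous padded work arrays, and the one-sided mode layout here is a simplification of a real pseudospectral code.

        import numpy as np

        def dealiased_convolution(f_hat, g_hat):
            """Dealiased linear convolution of two length-N coefficient arrays via
            conventional explicit zero padding to 2N (the reference against which
            implicitly dealiased convolutions are compared)."""
            N = f_hat.size
            F = np.fft.ifft(np.concatenate([f_hat, np.zeros(N, dtype=complex)]))
            G = np.fft.ifft(np.concatenate([g_hat, np.zeros(N, dtype=complex)]))
            H = np.fft.fft(F * G)
            return 2 * N * H[:N]       # undo the leftover 1/(2N) normalization

        # Check against the direct O(N^2) sum c_k = sum_m f_m * g_(k-m).
        f = np.random.rand(8) + 1j * np.random.rand(8)
        g = np.random.rand(8) + 1j * np.random.rand(8)
        direct = np.array([sum(f[m] * g[k - m] for m in range(k + 1)) for k in range(8)])
        print(np.allclose(direct, dealiased_convolution(f, g)))   # True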

  15. SU-E-T-08: A Convolution Model for Head Scatter Fluence in the Intensity Modulated Field

    SciTech Connect

    Chen, M; Mo, X; Chen, Y; Parnell, D; Key, S; Olivera, G; Galmarini, W; Lu, W

    2014-06-01

    Purpose: To efficiently calculate the head scatter fluence for an arbitrary intensity-modulated field with any source distribution using the source occlusion model. Method: The source occlusion model with focal and extra focal radiation (Jaffray et al, 1993) can be used to account for LINAC head scatter. In the model, the fluence map of any field shape at any point can be calculated via integration of the source distribution within the visible range, as confined by each segment, using the detector's eye view. A 2D integration would be required for each segment and each fluence plane point, which is time-consuming, as an intensity-modulated field typically contains tens to hundreds of segments. In this work, we prove that the superposition of the segmental integrations is equivalent to a simple convolution regardless of what the source distribution is. In fact, for each point, the detector's eye view of the field shape can be represented as a function with the origin defined at the point's pinhole reflection through the center of the collimator plane. We were thus able to reduce hundreds of source-plane integrations to one convolution. We calculated the fluence map for various 3D and IMRT beams and various extra-focal source distributions using both the segmental integration approach and the convolution approach and compared the computation time and fluence map results of both approaches. Results: The fluence maps calculated using the convolution approach were the same as those calculated using the segmental approach, except for rounding errors (<0.1%). While it took a considerably longer time to calculate all segmental integrations, the fluence map calculation using the convolution approach took only ∼1/3 of the time for typical IMRT fields with ∼100 segments. Conclusions: The convolution approach for head scatter fluence calculation is fast and accurate and can be used to enhance the online process.

  16. Network Class Superposition Analyses

    PubMed Central

    Pearson, Carl A. B.; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process [1]), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141

  17. Network class superposition analyses.

    PubMed

    Pearson, Carl A B; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10(30) for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.

  18. Superpositions of probability distributions

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Kleinert, Hagen

    2008-09-01

    Probability distributions which can be obtained from superpositions of Gaussian distributions of different variances v = σ² play a favored role in quantum theory and financial markets. Such superpositions need not necessarily obey the Chapman-Kolmogorov semigroup relation for Markovian processes because they may introduce memory effects. We derive the general form of the smearing distributions in v which do not destroy the semigroup property. The smearing technique has two immediate applications. It permits simplifying the system of Kramers-Moyal equations for smeared and unsmeared conditional probabilities, and can be conveniently implemented in the path integral calculus. In many cases, the superposition of path integrals can be evaluated much more easily than the initial path integral. Three simple examples are presented, and it is shown how the technique is extended to quantum mechanics.
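
    The kind of variance superposition discussed here is easy to illustrate numerically: smearing a zero-mean Gaussian over an inverse-gamma distribution of variances yields a heavy-tailed (Student-t) marginal; the parameter values below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)

        # Draw variances v from a smearing distribution (inverse-gamma), then draw each
        # sample from a Gaussian with that variance: the marginal is a Student-t law.
        n, alpha, beta = 100_000, 2.0, 2.0
        v = 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta, size=n)   # inverse-gamma variances
        samples = rng.normal(loc=0.0, scale=np.sqrt(v))

        # Heavier tails than a single Gaussian of the same overall variance.
        gauss = rng.normal(0.0, samples.std(), size=n)
        print(np.mean(np.abs(samples) > 3 * samples.std()),
              np.mean(np.abs(gauss) > 3 * samples.std()))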

  19. Commissioning and initial acceptance tests for a commercial convolution dose calculation algorithm for radiotherapy treatment planning in comparison with Monte Carlo simulation and measurement.

    PubMed

    Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen

    2012-07-01

    In this study, the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two available methods in the treatment planning system, i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose, an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed by using EDR2 radiographic films within the phantom. The dose difference (DD) between the experimental results and the two calculation methods was obtained. Results indicate a maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between the two methods, and the CCC algorithm shows more accurate depth dose curves in tissue heterogeneities. Simulation results show accurate dose estimation by MCNP4C in the soft tissue region of the phantom and also better results than the ETAR method in bone and lung tissues.

  20. Superposition rendering: Increased realism for interactive walkthroughs

    NASA Astrophysics Data System (ADS)

    Bastos, Rui M. R. De

    1999-11-01

    The light transport equation, conventionally known as the rendering equation in a slightly different form, is an implicit integral equation, which represents the interactions of light with matter and the distribution of light in a scene. This research describes a signals-and-systems approach to light transport and casts the light transport equation in terms of convolution. Additionally, the light transport problem is linearly decomposed into simpler problems with simpler solutions, which are then recombined to approximate the full solution. The central goal is to provide interactive photorealistic rendering of virtual environments. We show how the light transport problem can be cast in terms of signals-and-systems. The light is the signal and the materials are the systems. The outgoing light from a light transfer at a surface point is given by convolving the incoming light with the material's impulse response (the material's BRDF/BTDF). Even though the theoretical approach is presented in directional-space, we present an approximation in screen-space, which enables the exploitation of graphics hardware convolution for approximating the light transport equation. The convolution approach to light transport is not enough to fully solve the light transport problem at interactive rates with current machines. We decompose the light transport problem into simpler problems. The decomposition of the light transport problem is based on distinct characteristics of different parts of the problem: the ideally diffuse, the ideally specular, and the glossy transfers. A technique for interactive rendering of each of these components is presented, as well as a technique for superposing the independent components in a multipass manner in real time. Given the extensive use of the superposition principle in this research, we name our approach superposition rendering to distinguish it from other standard hardware-aided multipass rendering approaches.

  1. Full Waveform Modeling of Transient Electromagnetic Response Based on Temporal Interpolation and Convolution Method

    NASA Astrophysics Data System (ADS)

    Qi, Youzheng; Huang, Ling; Wu, Xin; Zhu, Wanhua; Fang, Guangyou; Yu, Gang

    2017-07-01

    Quantitative modeling of the transient electromagnetic (TEM) response requires consideration of the full transmitter waveform, i.e., not only the specific current waveform in a half cycle but also the bipolar repetition. In this paper, we present a novel temporal interpolation and convolution (TIC) method to facilitate accurate TEM modeling. We first calculate the temporal basis response on a logarithmic scale using fast digital-filter-based methods. Then, we introduce a function named hamlogsinc in the framework of discrete signal processing theory to reconstruct the basis function and to perform the convolution with the positive half of the waveform. Finally, a superposition procedure is used to take account of the effect of previous bipolar waveforms. Comparisons with the established fast Fourier transform method demonstrate that our TIC method can achieve the same accuracy with a shorter computing time.

  2. The Convolution Method in Neutrino Physics Searches

    SciTech Connect

    Tsakstara, V.; Kosmas, T. S.; Chasioti, V. C.; Divari, P. C.; Sinatkas, J.

    2007-12-26

    We concentrate on the convolution method used in nuclear and astro-nuclear physics studies and, in particular, in the investigation of the nuclear response of various neutrino detection targets to the energy-spectra of specific neutrino sources. Since the reaction cross sections of the neutrinos with nuclear detectors employed in experiments are extremely small, very fine and fast convolution techniques are required. Furthermore, sophisticated de-convolution methods are also needed whenever a comparison between calculated unfolded cross sections and existing convoluted results is necessary.
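
    Numerically, folding a calculated cross section with a neutrino source spectrum reduces to a weighted quadrature; a minimal sketch with placeholder functions for both ingredients (not the detectors or sources studied in this work):

        import numpy as np

        # Energy grid (MeV) with placeholder cross section and source spectrum shape.
        E = np.linspace(0.5, 60.0, 600)
        sigma = 1e-42 * E**2                      # toy cross section, cm^2
        spectrum = E**2 * np.exp(-E / 8.0)        # toy supernova-like spectrum shape

        dE = E[1] - E[0]
        spectrum /= np.sum(spectrum) * dE         # normalize to unit integral

        # Flux-averaged ("folded") cross section: <sigma> = integral sigma(E) f(E) dE.
        sigma_folded = np.sum(sigma * spectrum) * dE
        print(sigma_folded)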

  3. Stable macroscopic quantum superpositions.

    PubMed

    Fröwis, F; Dür, W

    2011-03-18

    We study the stability of superpositions of macroscopically distinct quantum states under decoherence. We introduce a class of quantum states with entanglement features similar to Greenberger-Horne-Zeilinger (GHZ) states, but with an inherent stability against noise and decoherence. We show that in contrast to GHZ states, these so-called concatenated GHZ states remain multipartite entangled even for macroscopic numbers of particles and can be used for quantum metrology in noisy environments. We also propose a scalable experimental realization of these states using existing ion-trap setups.

  4. Calculations of the ionization potentials of the halogens by the relativistic Hartree-Fock-Dirac method taking account of superposition of configurations

    SciTech Connect

    Tupitsyn, I.I.

    1988-03-01

    The ionization potentials of the halogen group have been calculated. The calculations were carried out using the relativistic Hartree-Fock method taking into account correlation effects. Comparison of theoretical results with experimental data for the elements F, Cl, Br, and I allows an estimation of the accuracy and reliability of the method. The theoretical values of the ionization potential of astatine obtained here may be of definite interest for the chemistry of astatine.

  5. Stereotactic Body Radiotherapy for Primary Lung Cancer at a Dose of 50 Gy Total in Five Fractions to the Periphery of the Planning Target Volume Calculated Using a Superposition Algorithm

    SciTech Connect

    Takeda, Atsuya; Sanuki, Naoko; Kunieda, Etsuo; Ohashi, Toshio; Oku, Yohei; Takeda, Toshiaki; Shigematsu, Naoyuki; Kubo, Atsushi

    2009-02-01

    Purpose: To retrospectively analyze the clinical outcomes of stereotactic body radiotherapy (SBRT) for patients with Stages 1A and 1B non-small-cell lung cancer. Methods and Materials: We reviewed the records of patients with non-small-cell lung cancer treated with curative intent between Dec 2001 and May 2007. All patients had histopathologically or cytologically confirmed disease, increased levels of tumor markers, and/or positive findings on fluorodeoxyglucose positron emission tomography. Staging studies identified their disease as Stage 1A or 1B. Performance status was 2 or less according to World Health Organization guidelines in all cases. The prescribed dose of 50 Gy total in five fractions, calculated by using a superposition algorithm, was defined for the periphery of the planning target volume. Results: One hundred twenty-one patients underwent SBRT during the study period, and 63 were eligible for this analysis. Thirty-eight patients had Stage 1A (T1N0M0) and 25 had Stage 1B (T2N0M0). Forty-nine patients were not appropriate candidates for surgery because of chronic pulmonary disease. Median follow-up of these 49 patients was 31 months (range, 10-72 months). The 3-year local control, disease-free, and overall survival rates in patients with Stages 1A and 1B were 93% and 96% (p = 0.86), 76% and 77% (p = 0.83), and 90% and 63% (p = 0.09), respectively. No acute toxicity was observed. Grade 2 or higher radiation pneumonitis was experienced by 3 patients, and 1 of them had fatal bacterial pneumonia. Conclusions: The SBRT at 50 Gy total in five fractions to the periphery of the planning target volume calculated by using a superposition algorithm is feasible. High local control rates were achieved for both T2 and T1 tumors.

  6. Convolution of Two Series

    ERIC Educational Resources Information Center

    Umar, A.; Yusau, B.; Ghandi, B. M.

    2007-01-01

    In this note, we introduce and discuss convolutions of two series. The idea is simple and can be introduced to higher secondary school classes, and it has the potential of providing a good background for the well-known convolution of functions.
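
    Concretely, the convolution of two series is the Cauchy product with coefficients c_n = a_0 b_n + a_1 b_(n-1) + ... + a_n b_0; a short sketch, checked against exp(x) * exp(x) = exp(2x):

        from math import factorial

        def cauchy_product(a, b):
            """Coefficients of the Cauchy product of two power series given by their
            coefficient lists: c[n] = sum_k a[k] * b[n-k]."""
            n = min(len(a), len(b))
            return [sum(a[k] * b[i - k] for k in range(i + 1)) for i in range(n)]

        # exp(x) has coefficients 1/n!; the product gives the coefficients of exp(2x).
        e = [1 / factorial(n) for n in range(8)]
        print(cauchy_product(e, e))
        print([2**n / factorial(n) for n in range(8)])   # same values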

  7. Convolution in Convolution for Network in Network.

    PubMed

    Pang, Yanwei; Sun, Manli; Jiang, Xiaoheng; Li, Xuelong

    2017-03-16

    Network in network (NiN) is an effective instance and an important extension of the deep convolutional neural network, consisting of alternating convolutional layers and pooling layers. Instead of using a linear filter for convolution, NiN utilizes a shallow multilayer perceptron (MLP), a nonlinear function, to replace the linear filter. Because of the power of the MLP and of 1 x 1 convolutions in the spatial domain, NiN has a stronger ability of feature representation and hence results in better recognition performance. However, the MLP itself consists of fully connected layers that give rise to a large number of parameters. In this paper, we propose to replace the dense shallow MLP with a sparse shallow MLP. One or more layers of the sparse shallow MLP are sparsely connected in the channel dimension or channel-spatial domain. The proposed method is implemented by applying unshared convolution across the channel dimension and applying shared convolution across the spatial dimension in some computational layers. The proposed method is called convolution in convolution (CiC). The experimental results on the CIFAR10 data set, augmented CIFAR10 data set, and CIFAR100 data set demonstrate the effectiveness of the proposed CiC method.

  8. Distal Convoluted Tubule

    PubMed Central

    Ellison, David H.

    2014-01-01

    The distal convoluted tubule is the nephron segment that lies immediately downstream of the macula densa. Although short in length, the distal convoluted tubule plays a critical role in sodium, potassium, and divalent cation homeostasis. Recent genetic and physiologic studies have greatly expanded our understanding of how the distal convoluted tubule regulates these processes at the molecular level. This article provides an update on the distal convoluted tubule, highlighting concepts and pathophysiology relevant to clinical practice. PMID:24855283

  9. Investigation of the Fe{sup 3+} centers in perovskite KMgF{sub 3} through a combination of ab initio (density functional theory) and semi-empirical (superposition model) calculations

    SciTech Connect

    Emül, Y.; Erbahar, D.; Açıkgöz, M.

    2015-08-14

    Analyses of the local crystal and electronic structure in the vicinity of Fe{sup 3+} centers in perovskite KMgF{sub 3} crystal have been carried out in a comprehensive manner. A combination of density functional theory (DFT) and a semi-empirical superposition model (SPM) is used for a complete analysis of all Fe{sup 3+} centers in this study for the first time. Some quantitative information has been derived from the DFT calculations on both the electronic structure and the local geometry around Fe{sup 3+} centers. All of the trigonal (K-vacancy case, K-Li substitution case, and normal trigonal Fe{sup 3+} center case), FeF{sub 5}O cluster, and tetragonal (Mg-vacancy and Mg-Li substitution cases) centers have been taken into account based on the previously suggested experimental and theoretical inferences. Combining the experimental data with the results of both the DFT and SPM calculations allows us to identify the most probable structural model for the Fe{sup 3+} centers in KMgF{sub 3}.

  10. Constructing Parton Convolution in Effective Field Theory

    SciTech Connect

    Chen, Jiunn-Wei; Ji, Xiangdong

    2001-10-08

    Parton convolution models have been used extensively in describing the sea quarks in the nucleon and explaining quark distributions in nuclei (the EMC effect). From the effective field theory point of view, we construct the parton convolution formalism that underlies all convolution models. We explain the significance of the scheme and scale dependence of auxiliary quantities such as the pion distributions in a nucleon. As an application, we calculate the complete leading nonanalytic chiral contribution to the isovector component of the nucleon sea.

  11. Quantum superpositions of crystalline structures

    SciTech Connect

    Baltrusch, Jens D.; Morigi, Giovanna; Cormick, Cecilia; De Chiara, Gabriele; Calarco, Tommaso

    2011-12-15

    A procedure is discussed for creating coherent superpositions of motional states of ion strings. The motional states lie on either side of the linear-zigzag structural transition, and their coherent superposition is achieved by means of spin-dependent forces, such that a coherent superposition of the electronic states of one ion evolves into an entangled state between the chain's internal and external degrees of freedom. It is shown that the creation of such an entangled state can be revealed by performing Ramsey interferometry with one ion of the chain.

  12. Superposition properties of interacting ion channels.

    PubMed Central

    Keleshian, A M; Yeo, G F; Edeson, R O; Madsen, B W

    1994-01-01

    Quantitative analysis of patch clamp data is widely based on stochastic models of single-channel kinetics. Membrane patches often contain more than one active channel of a given type, and it is usually assumed that these behave independently in order to interpret the record and infer individual channel properties. However, recent studies suggest there are significant channel interactions in some systems. We examine a model of dependence in a system of two identical channels, each modeled by a continuous-time Markov chain in which specified transition rates depend on the conductance state of the other channel, changing instantaneously when the other channel opens or closes. Each channel then has, e.g., a closed time density that is conditional on the other channel being open or closed, these being identical under independence. We relate the two densities by a convolution function that embodies information about, and serves to quantify, dependence in the closed class. Distributions of observable (superposition) sojourn times are given in terms of these conditional densities. The behavior of two-channel systems based on two- and three-state Markov models is examined by simulation. Optimized fitting of simulated data using reasonable parameter values and sample sizes indicates that both positive and negative cooperativity can be distinguished from independence. PMID:7524711

  13. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
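    To make the kernel-superposition idea concrete, here is a toy, stationary scatter-correction sketch (our simplification: the paper's kernels are asymmetric and adapt to local thickness, and every numeric value below is invented). The scatter estimate is a broad kernel convolved with the current primary estimate via FFT, and the primary is refined by iterative subtraction.

        import numpy as np

        def sks_correct(projection, kernel_sigma=20.0, scatter_fraction=0.3, n_iter=3):
            """Toy stationary scatter-kernel-superposition correction of one projection."""
            h, w = projection.shape
            y, x = np.mgrid[-h // 2:h // 2, -w // 2:w // 2]
            kernel = np.exp(-(x**2 + y**2) / (2.0 * kernel_sigma**2))
            kernel *= scatter_fraction / kernel.sum()          # broad, normalized scatter kernel
            K = np.fft.rfft2(np.fft.ifftshift(kernel))         # kernel recentred at the origin
            primary = projection.copy()
            for _ in range(n_iter):
                scatter = np.fft.irfft2(np.fft.rfft2(primary) * K, s=projection.shape)
                primary = projection - scatter                 # refine the primary estimate
            return primary, scatter

        # Synthetic flat projection with an object shadow in the middle.
        proj = np.ones((128, 128)); proj[40:90, 40:90] = 0.3
        corrected, scatter = sks_correct(proj)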

  14. SU-E-T-91: Accuracy of Dose Calculation Algorithms for Patients Undergoing Stereotactic Ablative Radiotherapy

    SciTech Connect

    Tajaldeen, A; Ramachandran, P; Geso, M

    2015-06-15

    Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent stereotactic ablative body radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume (the same-dose method) and (ii) the same monitor units in all algorithms (the same-monitor-units method), were used to study the performance of seven dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven dose calculation algorithms were Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, the Anisotropic Analytical Algorithm (AAA), Acuros XB, and pencil beam (PB). Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity, and dose fall-off index. In addition, the doses to critical structures such as the lungs, heart, oesophagus, and spinal cord were also studied. Statistical analysis was performed using Prism software. Results: The mean ± SD conformity index for the Superposition, Fast Superposition, Clarkson, and FFT Convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7, and 2.17±0.59, respectively, whereas for AAA, pencil beam, and Acuros XB it was 1.4±0.27, 1.66±0.27, and 1.35±0.24, respectively. Conclusion: Our study showed significant variations among the seven algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters. The Clarkson, FFT Convolution, and pencil beam algorithms showed large differences compared with the Superposition algorithm. Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice for SABR dose calculations.

  15. Efficient convolutional sparse coding

    DOEpatents

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M.sup.3N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
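    A sketch of the key step under stated assumptions (a single dictionary filter and circular boundary conditions; all variable names are ours): in the Fourier domain the convolution operator is diagonal, so the ADMM x-update reduces to an elementwise division, which is where the O(MN log N) cost comes from. The general multi-filter case additionally needs a Sherman-Morrison-type solve at each frequency.

        import numpy as np

        def x_update_single_filter(d, s, z, u, rho):
            """Solve (D^T D + rho I) x = D^T s + rho (z - u), where D is circular
            convolution with the filter d, by working elementwise in the Fourier domain."""
            N = len(s)
            Df = np.fft.fft(d, N)
            rhs = np.conj(Df) * np.fft.fft(s) + rho * np.fft.fft(z - u)
            return np.fft.ifft(rhs / (np.abs(Df)**2 + rho)).real

        rng = np.random.default_rng(1)
        d = rng.standard_normal(8)                  # dictionary filter
        s = rng.standard_normal(256)                # signal to be represented
        x = x_update_single_filter(d, s, np.zeros(256), np.zeros(256), rho=1.0)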

  16. Superposition as a Relativistic Filter

    NASA Astrophysics Data System (ADS)

    Ord, G. N.

    2017-07-01

    By associating a binary signal with the relativistic worldline of a particle, a binary form of the phase of non-relativistic wavefunctions is naturally produced by time dilation. An analog of superposition also appears as a Lorentz filtering process, removing paths that are relativistically inequivalent. In a model that includes a stochastic component, the free-particle Schrödinger equation emerges from a completely relativistic context in which its origin and function is known. The result establishes the fact that the phase of wavefunctions in Schrödinger's equation and the attendant superposition principle may both be considered remnants of time dilation. This strongly argues that quantum mechanics has its origins in special relativity.

  17. Linear superposition in nonlinear equations.

    PubMed

    Khare, Avinash; Sukhatme, Uday

    2002-06-17

    Several nonlinear systems such as the Korteweg-de Vries (KdV) and modified KdV equations and λφ⁴ theory possess periodic traveling wave solutions involving Jacobi elliptic functions. We show that suitable linear combinations of these known periodic solutions yield many additional solutions with different periods and velocities. This linear superposition procedure works by virtue of some remarkable new identities involving elliptic functions.

  18. Particle flow superpositional GLMB filter

    NASA Astrophysics Data System (ADS)

    Saucan, Augustin-Alexandru; Li, Yunpeng; Coates, Mark

    2017-05-01

    In this paper we propose a Superpositional Marginalized δ-GLMB (SMδ-GLMB) filter for multi-target tracking and we provide bootstrap and particle flow particle filter implementations. Particle filter implementations of the marginalized δ-GLMB filter are computationally demanding. As a first contribution we show that for the specific case of superpositional observation models, a reduced complexity update step can be achieved by employing a superpositional change of variables. The resulting SMδ-GLMB filter can be readily implemented using the unscented Kalman filter or particle filtering methods. As a second contribution, we employ particle flow to produce a measurement-driven importance distribution that serves as a proposal in the SMδ-GLMB particle filter. In high-dimensional state systems or for highly informative observations the generic particle filter often suffers from weight degeneracy or otherwise requires a prohibitively large number of particles. Particle flow avoids particle weight degeneracy by guiding particles to regions where the posterior is significant. Numerical simulations showcase the reduced complexity and improved performance of the bootstrap SMδ-GLMB filter with respect to the bootstrap Mδ-GLMB filter. The particle flow SMδ-GLMB filter further improves the accuracy of track estimates for highly informative measurements.

  19. Student ability to distinguish between superposition states and mixed states in quantum mechanics

    NASA Astrophysics Data System (ADS)

    Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.

    2015-12-01

    Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the experimental implications of a superposition state. In particular, they fail to recognize how a superposition state and a mixed state (sometimes called a "lack of knowledge" state) can produce different experimental results. We present data that suggest that superposition in quantum mechanics is a difficult concept for students enrolled in sophomore-, junior-, and graduate-level quantum mechanics courses. We illustrate how an interactive lecture tutorial can improve student understanding of quantum mechanical superposition. A longitudinal study suggests that the impact persists after an additional quarter of quantum mechanics instruction that does not specifically address these ideas.

  20. Student Ability to Distinguish between Superposition States and Mixed States in Quantum Mechanics

    ERIC Educational Resources Information Center

    Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.

    2015-01-01

    Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the…

  2. Understanding deep convolutional networks

    PubMed Central

    Mallat, Stéphane

    2016-01-01

    Deep convolutional networks provide state-of-the-art classification and regression results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and nonlinearities. A mathematical framework is introduced to analyse their properties. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries, and sparse separations. Applications are discussed. PMID:26953183

  3. Creating a Superposition of Unknown Quantum States

    NASA Astrophysics Data System (ADS)

    Oszmaniec, Michał; Grudka, Andrzej; Horodecki, Michał; Wójcik, Antoni

    2016-03-01

    The superposition principle is one of the landmarks of quantum mechanics. The importance of quantum superpositions provokes questions about the limitations that quantum mechanics itself imposes on the possibility of their generation. In this work, we systematically study the problem of the creation of superpositions of unknown quantum states. First, we prove a no-go theorem that forbids the existence of a universal probabilistic quantum protocol producing a superposition of two unknown quantum states. Second, we provide an explicit probabilistic protocol generating a superposition of two unknown states, each having a fixed overlap with the known referential pure state. The protocol can be applied to generate coherent superposition of results of independent runs of subroutines in a quantum computer. Moreover, in the context of quantum optics it can be used to efficiently generate highly nonclassical states or non-Gaussian states.

  4. Creating a Superposition of Unknown Quantum States.

    PubMed

    Oszmaniec, Michał; Grudka, Andrzej; Horodecki, Michał; Wójcik, Antoni

    2016-03-18

    The superposition principle is one of the landmarks of quantum mechanics. The importance of quantum superpositions provokes questions about the limitations that quantum mechanics itself imposes on the possibility of their generation. In this work, we systematically study the problem of the creation of superpositions of unknown quantum states. First, we prove a no-go theorem that forbids the existence of a universal probabilistic quantum protocol producing a superposition of two unknown quantum states. Second, we provide an explicit probabilistic protocol generating a superposition of two unknown states, each having a fixed overlap with the known referential pure state. The protocol can be applied to generate coherent superposition of results of independent runs of subroutines in a quantum computer. Moreover, in the context of quantum optics it can be used to efficiently generate highly nonclassical states or non-Gaussian states.

  5. Mesoscopic Superposition States in Relativistic Landau Levels

    SciTech Connect

    Bermudez, A.; Martin-Delgado, M. A.; Solano, E.

    2007-09-21

    We show that a linear superposition of mesoscopic states in relativistic Landau levels can be built when an external magnetic field couples to a relativistic spin-1/2 charged particle. Under suitable initial conditions, the associated Dirac equation unitarily produces superpositions of coherent states involving the particle orbital quanta in a well-defined mesoscopic regime. We demonstrate that these mesoscopic superpositions have a purely relativistic origin and disappear in the nonrelativistic limit.

  6. Communication: Two measures of isochronal superposition

    NASA Astrophysics Data System (ADS)

    Roed, Lisa Anita; Gundermann, Ditte; Dyre, Jeppe C.; Niss, Kristine

    2013-09-01

    A liquid obeys isochronal superposition if its dynamics is invariant along the isochrones in the thermodynamic phase diagram (the curves of constant relaxation time). This paper introduces two quantitative measures of isochronal superposition. The measures are used to test the following six liquids for isochronal superposition: 1,2,6-hexanetriol, glycerol, polyphenyl ether, diethyl phthalate, tetramethyl tetraphenyl trisiloxane, and dibutyl phthalate. The latter four van der Waals liquids obey isochronal superposition to a higher degree than the two hydrogen-bonded liquids. This is a prediction of the isomorph theory, and it confirms findings by other groups.

  7. Convolution kernel design and efficient algorithm for sampling density correction.

    PubMed

    Johnson, Kenneth O; Pipe, James G

    2009-02-01

    Sampling density compensation is an important step in non-Cartesian image reconstruction. One of the common techniques to determine weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, with the aim of minimizing the error in a fully reconstructed image. The resulting weights obtained using this new kernel are compared with various previous methods, showing a reduction in reconstruction error. A computationally efficient algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the algorithm are extended to 3D. Copyright 2009 Wiley-Liss, Inc.
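    The kind of convolution this refers to can be illustrated with a generic iterative density-compensation loop (a 1D toy with a Gaussian kernel of our choosing, not the kernel designed in the paper): the weights are repeatedly divided by their convolution with the kernel until the smoothed sampling density is approximately flat.

        import numpy as np

        def density_compensation(k_locs, sigma=0.5, n_iter=20):
            """Iterative sampling-density compensation weights for non-Cartesian samples:
            w <- w / (w convolved with a smoothing kernel C), evaluated at the samples."""
            d = k_locs[:, None] - k_locs[None, :]
            C = np.exp(-d**2 / (2.0 * sigma**2))     # kernel evaluated between sample pairs
            w = np.ones(len(k_locs))
            for _ in range(n_iter):
                w = w / (C @ w)
            return w

        # Samples packed more densely near k = 0 receive smaller weights.
        k = np.linspace(-1.0, 1.0, 64)**3 * 10.0
        print(density_compensation(k)[:5])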

  8. Convolution of degrees of coherence.

    PubMed

    Korotkova, Olga; Mei, Zhangrong

    2015-07-01

    The conditions under which convolution of two degrees of coherence represents a novel legitimate degree of coherence are established for wide-sense statistically stationary Schell-model beam-like optical fields. Several examples are given to illustrate how convolution can be used for generation of a far field being a modulated version of another one. Practically, the convolutions of the degrees of coherence can be achieved by programming the liquid crystal spatial light modulators.

  9. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
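    For readers unfamiliar with the encoding side, a minimal rate-1/2 convolutional encoder looks like the sketch below (a generic textbook example with the octal 7/5 generator pair, not taken from the report):

        def conv_encode(bits, g1=0b111, g2=0b101, k=3):
            """Rate-1/2 convolutional encoder with constraint length k.  Each input bit is
            shifted into a register; two parity bits are emitted per input bit."""
            state, out = 0, []
            for b in bits:
                state = ((state << 1) | b) & ((1 << k) - 1)
                out.append(bin(state & g1).count("1") % 2)
                out.append(bin(state & g2).count("1") % 2)
            return out

        print(conv_encode([1, 0, 1, 1, 0, 0]))   # 12 coded bits for 6 information bits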

  10. Convolution algorithm for normalization constant evaluation in queuing system with random requirements

    NASA Astrophysics Data System (ADS)

    Samouylov, K.; Sopin, E.; Vikhrova, O.; Shorgin, S.

    2017-07-01

    We suggest a convolution algorithm for calculating the normalization constant for the stationary probabilities of a multiserver queuing system with random resource requirements. Our algorithm significantly reduces the computing time of the stationary probabilities and of system characteristics such as blocking probabilities and the average number of occupied resources. The algorithm avoids the explicit calculation of k-fold convolutions and makes efficient use of memory.
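    The flavor of such convolution algorithms can be illustrated as follows (a generic sketch with invented per-class distributions, not the authors' algorithm): the distribution of the total occupied resource is built by successive pairwise convolutions, and the normalization constant is the probability mass inside the resource capacity; reusing the running partial convolution avoids recomputing k-fold convolutions from scratch.

        import numpy as np

        def total_resource_distribution(class_pmfs):
            """Distribution of the total occupied resource for independent classes,
            built by successive pairwise convolutions of their requirement distributions."""
            total = np.array([1.0])                  # empty system occupies 0 resources
            for pmf in class_pmfs:
                total = np.convolve(total, pmf)      # reuse the running partial convolution
            return total

        p1 = np.array([0.5, 0.3, 0.2])               # class 1 occupies 0..2 resource units
        p2 = np.array([0.4, 0.3, 0.2, 0.1])          # class 2 occupies 0..3 resource units
        p_total = total_resource_distribution([p1, p2])
        R = 4                                        # resource capacity of the system
        G = p_total[:R + 1].sum()                    # normalization constant over feasible states
        print(p_total[:R + 1] / G)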

  11. Determinate-state convolutional codes

    NASA Technical Reports Server (NTRS)

    Collins, O.; Hizlan, M.

    1991-01-01

    A determinate-state convolutional code is formed from a conventional convolutional code by pruning away some of the possible state transitions in the decoding trellis. The type of staged power transfer used in determinate-state convolutional codes proves to be an extremely efficient way of enhancing the performance of a concatenated coding system. The decoder complexity and the free distances of these new codes are analyzed, and extensive simulation results are provided on their performance at the low signal-to-noise ratios where a real communication system would operate. Concise, practical examples are provided.

  12. A quantum algorithm for Viterbi decoding of classical convolutional codes

    NASA Astrophysics Data System (ADS)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance, codes with large constraint lengths and short decode frames. Other applications of the classical Viterbi algorithm where the number of states is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (the number of possible transitions from any given state in the hidden Markov model), which is in general much less than the number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given the received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.

  13. Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity

    SciTech Connect

    Sales, J. S.; Silva, L. F. da; Almeida, N. G. de

    2011-03-15

    We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.

  14. Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity

    NASA Astrophysics Data System (ADS)

    Sales, J. S.; da Silva, L. F.; de Almeida, N. G.

    2011-03-01

    We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.

  15. Fugacity superposition: a new approach to dynamic multimedia fate modeling.

    PubMed

    Hertwich, E G

    2001-08-01

    The fugacities, concentrations, or inventories of pollutants in environmental compartments as determined by multimedia environmental fate models of the Mackay type can be superimposed on each other. This is true for both steady-state (level III) and dynamic (level IV) models. Any problem in multimedia fate models with linear, time-invariant transfer and transformation coefficients can be solved through a superposition of a set of n independent solutions to a set of coupled, homogeneous first-order differential equations, where n is the number of compartments in the model. For initial-condition problems in dynamic models, the initial inventories can be separated, e.g., by compartment. The solution is obtained by adding the single-compartment solutions. For time-varying emissions, a convolution integral is used to superimpose solutions. The advantage of this approach is that the differential equations have to be solved only once. No numeric integration is required. Alternatively, the dynamic model can be simplified to algebraic equations using the Laplace transform. For time-varying emissions, the Laplace transform of the model equations is simply multiplied with the Laplace transform of the emission profile. It is also shown that the time-integrated inventories of the initial-condition problems are the same as the inventories in the steady-state problem. This implies that important properties of pollutants such as potential dose, persistence, and characteristic travel distance can be derived from the steady state.
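    A small numerical sketch of the superposition idea for time-varying emissions (the compartment matrix and emission profile below are invented): inventories are obtained by convolving the emission history with the impulse responses exp(At), so the differential equations never have to be re-solved for a new emission scenario.

        import numpy as np
        from scipy.linalg import expm

        # Hypothetical two-compartment model dx/dt = A x + e(t).
        A = np.array([[-0.30,  0.05],
                      [ 0.10, -0.20]])
        dt = 0.25
        t = np.arange(0.0, 50.0, dt)
        e = np.zeros((len(t), 2))
        e[:40, 0] = 1.0                                   # emission pulse into compartment 1

        # Impulse responses exp(A t): the independent solutions of the homogeneous system.
        impulse = np.stack([expm(A * ti) for ti in t])

        # Superposition: discrete convolution of the emission history with exp(A t).
        x = np.zeros((len(t), 2))
        for i in range(len(t)):
            x[i] = sum(impulse[i - j] @ e[j] for j in range(i + 1)) * dt
        print(x[-1])                                      # inventories at the final time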

  16. Reconstruction of nonstationary sound fields based on the time domain plane wave superposition method.

    PubMed

    Zhang, Xiao-Zheng; Thomas, Jean-Hugh; Bi, Chuan-Xing; Pascal, Jean-Claude

    2012-10-01

    A time-domain plane wave superposition method is proposed to reconstruct nonstationary sound fields. In this method, the sound field is expressed as a superposition of time convolutions between the estimated time-wavenumber spectrum of the sound pressure on a virtual source plane and the time-domain propagation kernel at each wavenumber. By discretizing the time convolutions directly, the reconstruction can be carried out iteratively in the time domain, thus providing the advantage of continuously reconstructing time-dependent pressure signals. In the reconstruction process, Tikhonov regularization is introduced at each time step to obtain a relevant estimate of the time-wavenumber spectrum on the virtual source plane. Because the double infinite integral of the two-dimensional spatial Fourier transform is discretized directly in the wavenumber domain in the proposed method, it does not need to perform the two-dimensional spatial fast Fourier transform that is generally used in time domain holography and real-time near-field acoustic holography, and therefore it avoids some errors associated with the two-dimensional spatial fast Fourier transform in theory and makes it possible to use an irregular microphone array. The feasibility of the proposed method is demonstrated by numerical simulations and an experiment with two speakers.

  17. a Logical Account of Quantum Superpositions

    NASA Astrophysics Data System (ADS)

    Krause, Décio; Arenhart, Jonas R. Becker

    In this paper we consider the phenomenon of superpositions in quantum mechanics and suggest a way to deal with the idea in a logical setting from a syntactical point of view, that is, as subsumed in the language of the formalism, and not semantically. We restrict the discussion to the propositional level only. Then, after presenting the motivations and a possible world semantics, the formalism is outlined and we also consider within this scheme the claim that superpositions may involve contradictions, as in the case of the Schrödinger's cat, which (it is usually said) is both alive and dead. We argue that this claim is a misreading of the quantum case. Finally, we sketch a new form of quantum logic that involves three kinds of negations and present the relationships among them. The paper is a first approach to the subject, introducing some main guidelines to be developed by a `syntactical' logical approach to quantum superpositions.

  18. Experimental superposition of orders of quantum gates

    PubMed Central

    Procopio, Lorenzo M.; Moqanaki, Amir; Araújo, Mateus; Costa, Fabio; Alonso Calafell, Irati; Dowd, Emma G.; Hamel, Deny R.; Rozema, Lee A.; Brukner, Časlav; Walther, Philip

    2015-01-01

    Quantum computers achieve a speed-up by placing quantum bits (qubits) in superpositions of different states. However, it has recently been appreciated that quantum mechanics also allows one to ‘superimpose different operations'. Furthermore, it has been shown that using a qubit to coherently control the gate order allows one to accomplish a task—determining if two gates commute or anti-commute—with fewer gate uses than any known quantum algorithm. Here we experimentally demonstrate this advantage, in a photonic context, using a second qubit to control the order in which two gates are applied to a first qubit. We create the required superposition of gate orders by using additional degrees of freedom of the photons encoding our qubits. The new resource we exploit can be interpreted as a superposition of causal orders, and could allow quantum algorithms to be implemented with an efficiency unlikely to be achieved on a fixed-gate-order quantum computer. PMID:26250107

  19. Experimental superposition of orders of quantum gates.

    PubMed

    Procopio, Lorenzo M; Moqanaki, Amir; Araújo, Mateus; Costa, Fabio; Alonso Calafell, Irati; Dowd, Emma G; Hamel, Deny R; Rozema, Lee A; Brukner, Časlav; Walther, Philip

    2015-08-07

    Quantum computers achieve a speed-up by placing quantum bits (qubits) in superpositions of different states. However, it has recently been appreciated that quantum mechanics also allows one to 'superimpose different operations'. Furthermore, it has been shown that using a qubit to coherently control the gate order allows one to accomplish a task--determining if two gates commute or anti-commute--with fewer gate uses than any known quantum algorithm. Here we experimentally demonstrate this advantage, in a photonic context, using a second qubit to control the order in which two gates are applied to a first qubit. We create the required superposition of gate orders by using additional degrees of freedom of the photons encoding our qubits. The new resource we exploit can be interpreted as a superposition of causal orders, and could allow quantum algorithms to be implemented with an efficiency unlikely to be achieved on a fixed-gate-order quantum computer.

  20. Macroscopic Superpositions as Quantum Ground States

    NASA Astrophysics Data System (ADS)

    Dakić, Borivoje; Radonjić, Milan

    2017-09-01

    We study the question of what kind of a macroscopic superposition can(not) naturally exist as a ground state of some gapped local many-body Hamiltonian. We derive an upper bound on the energy gap of an arbitrary physical Hamiltonian provided that its ground state is a superposition of two well-distinguishable macroscopic "semiclassical" states. For a large class of macroscopic superposition states we show that the gap vanishes in the macroscopic limit. This in turn shows that preparation of such states by simple cooling to the ground state is not experimentally feasible and requires a different strategy. Our approach is very general and can be used to rule out a variety of quantum states, some of which do not even exhibit macroscopic quantum properties. Moreover, our methods and results can be used for addressing quantum marginal related problems.

  1. Large energy superpositions via Rydberg dressing

    NASA Astrophysics Data System (ADS)

    Khazali, Mohammadsadegh; Lau, Hon Wai; Humeniuk, Adam; Simon, Christoph

    2016-08-01

    We propose to create superposition states of over 100 strontium atoms in a ground state or metastable optical clock state using the Kerr-type interaction due to Rydberg state dressing in an optical lattice. The two components of the superposition can differ by on the order of 300 eV in energy, allowing tests of energy decoherence models with greatly improved sensitivity. We take into account the effects of higher-order nonlinearities, spatial inhomogeneity of the interaction, decay from the Rydberg state, collective many-body decoherence, atomic motion, molecular formation, and diminishing Rydberg level separation for increasing principal number.

  2. Macroscopic Quantum Superposition in Cavity Optomechanics

    NASA Astrophysics Data System (ADS)

    Liao, Jie-Qiao; Tian, Lin

    2016-04-01

    Quantum superposition in mechanical systems is not only key evidence for macroscopic quantum coherence, but can also be utilized in modern quantum technology. Here we propose an efficient approach for creating macroscopically distinct mechanical superposition states in a two-mode optomechanical system. Photon hopping between the two cavity modes is modulated sinusoidally. The modulated photon tunneling enables an ultrastrong radiation-pressure force acting on the mechanical resonator, and hence significantly increases the mechanical displacement induced by a single photon. We study systematically the generation of the Yurke-Stoler-like states in the presence of system dissipations. We also discuss the experimental implementation of this scheme.

  3. On the Use of Material-Dependent Damping in ANSYS for Mode Superposition Transient Analysis

    SciTech Connect

    Nie, J.; Wei, X.

    2011-07-17

    The mode superposition method is often used for dynamic analysis of complex structures, such as the seismic Category I structures in nuclear power plants, in place of the less efficient full method, which uses the full system matrices for calculation of the transient responses. In such applications, specification of material-dependent damping is usually desirable because complex structures can consist of multiple types of materials that may have different energy dissipation capabilities. A recent review of the ANSYS manual for several releases found that the use of material-dependent damping is not clearly explained for performing a mode superposition transient dynamic analysis. This paper includes several mode superposition transient dynamic analyses using different ways to specify damping in ANSYS, in order to determine how material-dependent damping can be specified conveniently in a mode superposition transient dynamic analysis.
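    One classical route by which material-dependent damping enters a mode superposition analysis is through composite modal damping ratios, in which each material's damping ratio is weighted by that material's share of the modal strain energy. The sketch below illustrates this weighting on an invented two-material, two-degree-of-freedom system; it is our illustration of the general approach, not a statement of how ANSYS implements it internally.

        import numpy as np
        from scipy.linalg import eigh

        m = np.diag([2.0, 1.0])                          # mass matrix (invented)
        k1 = np.array([[ 1000.0, -1000.0],
                       [-1000.0,  1000.0]])              # stiffness contribution, material 1
        k2 = np.array([[0.0, 0.0],
                       [0.0, 800.0]])                    # stiffness contribution, material 2
        k = k1 + k2
        zeta_material = [0.02, 0.05]                     # material damping ratios (invented)

        w2, phi = eigh(k, m)                             # generalized eigenproblem K phi = w^2 M phi
        zeta_modal = []
        for i in range(len(w2)):
            e_parts = [phi[:, i] @ kp @ phi[:, i] for kp in (k1, k2)]
            zeta_modal.append(sum(z * e for z, e in zip(zeta_material, e_parts)) / sum(e_parts))
        print(np.sqrt(w2), zeta_modal)                   # natural frequencies, composite damping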

  4. Dose Calculation Accuracy of the Monte Carlo Algorithm for CyberKnife Compared with Other Commercially Available Dose Calculation Algorithms

    SciTech Connect

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because Monte Carlo algorithms implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.

  5. Dose calculation accuracy of the Monte Carlo algorithm for CyberKnife compared with other commercially available dose calculation algorithms.

    PubMed

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because Monte Carlo algorithms implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.

  6. Transfer of arbitrary quantum emitter states to near-field photon superpositions in nanocavities.

    PubMed

    Thijssen, Arthur C T; Cryan, Martin J; Rarity, John G; Oulton, Ruth

    2012-09-24

    We present a method to analyze the suitability of particular photonic cavity designs for information exchange between arbitrary superposition states of a quantum emitter and the near-field photonic cavity mode. As an illustrative example, we consider whether quantum dot emitters embedded in "L3" and "H1" photonic crystal cavities are able to transfer a spin superposition state to a confined photonic superposition state for use in quantum information transfer. Using an established dyadic Green's function (DGF) analysis, we describe methods to calculate the coupling for arbitrary quantum emitter positions and orientations from the modified local density of states (LDOS) obtained with numerical finite-difference time-domain (FDTD) simulations. We find that while superposition states are not supported in L3 cavities, the double degeneracy of the H1 cavities supports superposition states of the two orthogonal modes that may be described as states on a Poincaré-like sphere. Methods are developed to comprehensively analyze the confined superposition state generated from an arbitrary emitter position and emitter dipole orientation.

  7. General logarithmic image processing convolution.

    PubMed

    Palomares, Jose M; González, Jesús; Ros, Eduardo; Prieto, Alberto

    2006-11-01

    The logarithmic image processing (LIP) model is a robust mathematical framework which, among other benefits, is invariant to illumination changes. This paper presents, for the first time, two general formulations of the 2-D convolution of separable kernels under the LIP paradigm. Although both formulations are mathematically equivalent, one of them has been designed to avoid the operations that are computationally expensive on current computers. This fast LIP convolution method therefore yields significant speedups and is better suited to real-time processing. Experimental results supporting these statements are shown in Section V.
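    One standard way to realize a LIP convolution is through the isomorphism phi(f) = -M ln(1 - f/M), under which LIP addition and LIP scalar multiplication become ordinary addition and multiplication; the sketch below takes this route and is our illustration, not necessarily either of the two formulations derived in the paper.

        import numpy as np
        from scipy.ndimage import convolve

        def lip_convolve(image, kernel, M=256.0):
            """Convolution under the LIP model via the isomorphism phi(f) = -M ln(1 - f/M):
            an ordinary convolution is applied in the phi domain and then mapped back."""
            f = np.clip(image.astype(float), 0.0, M - 1e-6)
            phi = -M * np.log1p(-f / M)                  # forward isomorphism
            g = convolve(phi, kernel, mode="nearest")    # ordinary convolution in the phi domain
            return M * (1.0 - np.exp(-g / M))            # inverse isomorphism

        img = np.tile(np.linspace(10.0, 200.0, 64), (64, 1))
        smoothed = lip_convolve(img, np.full((3, 3), 1.0 / 9.0))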

  8. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which improves the burst-erasure protection capability by applying the convolution property to the tTN code and reduces computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code provides better packet-loss protection with lower computational complexity than the tTN code.

  9. The Evolution and Development of Neural Superposition

    PubMed Central

    Agi, Egemen; Langen, Marion; Altschuler, Steven J.; Wu, Lani F.; Zimmermann, Timo

    2014-01-01

    Visual systems have a rich history as model systems for the discovery and understanding of basic principles underlying neuronal connectivity. The compound eyes of insects consist of up to thousands of small unit eyes that are connected by photoreceptor axons to set up a visual map in the brain. The photoreceptor axon terminals thereby represent neighboring points seen in the environment in neighboring synaptic units in the brain. Neural superposition is a special case of such a wiring principle, where photoreceptors from different unit eyes that receive the same input converge upon the same synaptic units in the brain. This wiring principle is remarkable, because each photoreceptor in a single unit eye receives different input and each individual axon, among thousands of others in the brain, must be sorted together with those few axons that have the same input. Key aspects of neural superposition have been described as early as 1907. Since then, neuroscientists and evolutionary and developmental biologists have been fascinated by how such a complicated wiring principle could evolve, how it is genetically encoded, and how it is developmentally realized. In this review article, we will discuss current ideas about the evolutionary origin and developmental program of neural superposition. Our goal is to identify in what way the special case of neural superposition can help us answer more general questions about the evolution and development of genetically “hard-wired” synaptic connectivity in the brain. PMID:24912630

  10. The principle of superposition in human prehension

    PubMed Central

    Zatsiorsky, Vladimir M.; Latash, Mark L.; Gao, Fan; Shim, Jae Kun

    2010-01-01

    The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: “Grasp the object stronger/weaker to prevent slipping” and “Maintain the rotational equilibrium of the object”. The effects of the two commands are summed up. PMID:20186284

  11. The principle of superposition in human prehension.

    PubMed

    Zatsiorsky, Vladimir M; Latash, Mark L; Gao, Fan; Shim, Jae Kun

    2004-03-01

    The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: "Grasp the object stronger/weaker to prevent slipping" and "Maintain the rotational equilibrium of the object". The effects of the two commands are summed up.

  12. SUPERPOSITION OF POLYTROPES IN THE INNER HELIOSHEATH

    SciTech Connect

    Livadiotis, G.

    2016-03-15

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density–temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log–log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.

  13. The evolution and development of neural superposition.

    PubMed

    Agi, Egemen; Langen, Marion; Altschuler, Steven J; Wu, Lani F; Zimmermann, Timo; Hiesinger, Peter Robin

    2014-01-01

    Visual systems have a rich history as model systems for the discovery and understanding of basic principles underlying neuronal connectivity. The compound eyes of insects consist of up to thousands of small unit eyes that are connected by photoreceptor axons to set up a visual map in the brain. The photoreceptor axon terminals thereby represent neighboring points seen in the environment in neighboring synaptic units in the brain. Neural superposition is a special case of such a wiring principle, where photoreceptors from different unit eyes that receive the same input converge upon the same synaptic units in the brain. This wiring principle is remarkable, because each photoreceptor in a single unit eye receives different input and each individual axon, among thousands of others in the brain, must be sorted together with those few axons that have the same input. Key aspects of neural superposition have been described as early as 1907. Since then, neuroscientists and evolutionary and developmental biologists have been fascinated by how such a complicated wiring principle could evolve, how it is genetically encoded, and how it is developmentally realized. In this review article, we will discuss current ideas about the evolutionary origin and developmental program of neural superposition. Our goal is to identify in what way the special case of neural superposition can help us answer more general questions about the evolution and development of genetically "hard-wired" synaptic connectivity in the brain.

  14. QCDNUM: Fast QCD evolution and convolution

    NASA Astrophysics Data System (ADS)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme. Program summary: Program title: QCDNUM, version 17.00. Catalogue identifier: AEHV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Public Licence. No. of lines in distributed program, including test data, etc.: 45 736. No. of bytes in distributed program, including test data, etc.: 911 569. Distribution format: tar.gz. Programming language: Fortran-77. Computer: All. Operating system: All. RAM: Typically 3 Mbytes. Classification: 11.5. Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections. Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
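    The basic operation behind such a convolution engine is the Mellin convolution of a coefficient function with a parton density, which can be sketched numerically as follows (the coefficient function and density below are toy functions of our choosing, not physical inputs):

        import numpy as np

        def mellin_convolve(C, q, x, n=2000):
            """Numerical Mellin convolution (C ⊗ q)(x) = ∫_x^1 dz/z C(z) q(x/z)."""
            z = np.linspace(x, 1.0, n)
            return np.trapz(C(z) * q(x / z) / z, z)

        C = lambda z: np.ones_like(z)                    # toy coefficient function
        q = lambda y: y**-0.5 * (1.0 - y)**3             # toy valence-like parton density
        print(mellin_convolve(C, q, x=0.01))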

  15. Two dimensional convolute integers for machine vision and image recognition

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two dimensional data sets. The operators are regression generated, integer valued, zero phase shifting, convoluting, frequency sensitive, two dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). These operators show frequency sensitive feature selection scale invariant properties. Such tasks as boundary/edge enhancement and noise or small size pixel disturbance removal can readily be accomplished. For feature selection tight band pass operators are essential. Results from test cases are given.

  17. Simplified Convolution Codes

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.

    1985-01-01

    A simple recursive algorithm efficiently calculates minimum-weight error vectors using Diophantine equations. The recursive algorithm uses the general solution of a polynomial linear Diophantine equation to determine the minimum-weight error polynomial vector in polynomial space.

  18. Inhibitor Discovery by Convolution ABPP.

    PubMed

    Chandrasekar, Balakumaran; Hong, Tram Ngoc; van der Hoorn, Renier A L

    2017-01-01

    Activity-based protein profiling (ABPP) has emerged as a powerful proteomic approach to study the active proteins in their native environment by using chemical probes that label active-site residues in proteins. Traditionally, ABPP is classified as either comparative or competitive ABPP. In this protocol, we describe a simple method called convolution ABPP, which combines the benefits of both competitive and comparative ABPP. Convolution ABPP allows one to detect whether a reduced signal observed during comparative ABPP could be due to the presence of inhibitors. In convolution ABPP, the proteomes are analyzed by comparing labeling intensities in two mixed proteomes that were labeled either before or after mixing. A reduction of labeling in the mix-and-label sample when compared to the label-and-mix sample indicates the presence of an inhibitor excess in one of the proteomes. This method is broadly applicable for detecting inhibitors in one proteome against any other proteome containing protein activities of interest. As a proof of concept, we applied convolution ABPP to analyze secreted proteomes from Pseudomonas syringae-infected Nicotiana benthamiana leaves to reveal the presence of a beta-galactosidase inhibitor.

  19. Macroscopic Quantum Superposition in Cavity Optomechanics

    NASA Astrophysics Data System (ADS)

    Liao, Jie-Qiao; Tian, Lin

    Quantum superposition in mechanical systems is not only a key evidence of macroscopic quantum coherence, but can also be utilized in modern quantum technology. Here we propose an efficient approach for creating macroscopically distinct mechanical superposition states in a two-mode optomechanical system. Photon hopping between the two cavity-modes is modulated sinusoidally. The modulated photon tunneling enables an ultrastrong radiation-pressure force acting on the mechanical resonator, and hence significantly increases the mechanical displacement induced by a single photon. We present systematic studies on the generation of the Yurke-Stoler-like states in the presence of system dissipations. The state generation method is general and it can be implemented with either optomechanical or electromechanical systems. The authors are supported by the National Science Foundation under Award No. NSF-DMR-0956064 and the DARPA ORCHID program through AFOSR.

  20. Quantum inertia stops superposition: Scan Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Gato-Rivera, Beatriz

    2017-08-01

    Scan Quantum Mechanics is a novel interpretation of some aspects of quantum mechanics in which the superposition of states is only an approximate effective concept. Quantum systems scan all possible states in the superposition and switch randomly and very rapidly among them. A crucial postulated property is quantum inertia, which increases whenever a constituent is added or the system is perturbed by interactions of any kind. Once the quantum inertia Iq reaches a critical value Icr for an observable, the switching among its different eigenvalues stops and the corresponding superposition comes to an end, leaving behind a system with a well-defined value of that observable. Consequently, increasing the mass, temperature, gravitational strength, etc. of a quantum system increases its quantum inertia until the superposition of states disappears for all observables and the system transmutes into a classical one. Moreover, the process could be reversible. Entanglement can only occur between quantum systems because an exact synchronization between the switchings of the systems involved must be established in the first place, and classical systems have no switchings to start with. Future experiments might determine the critical inertia Icr corresponding to different observables, which translates into a critical mass Mcr for fixed environmental conditions, as well as critical temperatures, critical electric and magnetic fields, etc. In addition, this proposal implies a new radiation mechanism from astrophysical objects with strong gravitational fields, giving rise to non-thermal synchrotron emission that could contribute to neutron star formation. Superconductivity, superfluidity, Bose-Einstein condensates, and any other physical phenomena at very low temperatures must be reanalyzed in the light of this interpretation, as must mesoscopic systems in general.

  1. A convolution model for obtaining the response of an ionization chamber in static non standard fields

    SciTech Connect

    Gonzalez-Castano, D. M.; Gonzalez, L. Brualla; Gago-Arias, M. A.; Pardo-Montero, J.; Gomez, F.; Luna-Vega, V.; Sanchez, M.; Lobato, R.

    2012-01-15

    Purpose: This work presents an alternative methodology for obtaining correction factors for ionization chamber (IC) dosimetry of small fields and composite fields such as IMRT. The method is based on the convolution/superposition (C/S) of an IC response function (RF) with the dose distribution in a plane that includes the chamber position. It is an alternative to the full Monte Carlo (MC) approach that has previously been used by many authors for the same purpose. Methods: The readout of an IC at a point inside a phantom irradiated by a certain beam can be obtained as the convolution of the spatial dose distribution produced by the beam with the IC two-dimensional RF. The proposed methodology was applied successfully to predict the response of a PTW 30013 IC when measuring different nonreference fields, namely: output factors of 6 MV small fields, beam profiles of cobalt-60 narrow fields, and 6 MV radiosurgery segments. The two-dimensional RF of the PTW 30013 IC was obtained by MC simulation of the absorbed dose to the cavity air when the IC was scanned by a parallel pencil beam of 0.6 x 0.6 mm^2 cross section at low depth in a water phantom. For each of the cases studied, the results of the direct IC measurement were compared with the corresponding results obtained by the C/S method. Results: For all of the cases studied, the agreement between the direct IC measurement and the calculated IC response was excellent (better than 1.5%). Conclusions: This method could be implemented in a TPS in order to calculate dosimetry correction factors when an experimental IMRT treatment verification with an in-phantom ionization chamber is performed. The mis-response of the IC due to nonreference conditions could be quickly corrected by this method rather than employing MC-derived correction factors. The method can be considered an alternative to the plan-class associated correction factors proposed recently as part of an IAEA working group on nonstandard field dosimetry.
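
    A minimal sketch of the C/S idea above: the chamber signal for a nonreference field is estimated by convolving the planar dose distribution with a 2-D response function and sampling the result at the chamber position. The Gaussian response function, grid spacing, and field size below are illustrative assumptions, not the MC-derived RF of the PTW 30013 chamber.

      import numpy as np
      from scipy.signal import fftconvolve

      dx = 0.05                                   # cm, assumed grid spacing
      x = np.arange(-2.0, 2.0 + dx, dx)
      X, Y = np.meshgrid(x, x)

      # Placeholder 2-D response function (RF); the paper derives the real RF
      # by Monte Carlo pencil-beam scanning of the chamber.
      rf = np.exp(-(X**2 + Y**2) / (2 * 0.3**2))
      rf /= rf.sum()                              # uniform fields are then reproduced exactly

      def chamber_signal(dose_plane, rf):
          # Predicted IC signal map: dose distribution convolved with the RF.
          return fftconvolve(dose_plane, rf, mode="same")

      # Example: a small field modelled as a 1 x 1 cm^2 uniform dose patch
      dose = np.zeros_like(X)
      dose[(np.abs(X) < 0.5) & (np.abs(Y) < 0.5)] = 1.0
      signal = chamber_signal(dose, rf)
      reading = signal[signal.shape[0] // 2, signal.shape[1] // 2]
      correction = dose[dose.shape[0] // 2, dose.shape[1] // 2] / reading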

  2. Toward quantum superposition of living organisms

    NASA Astrophysics Data System (ADS)

    Romero-Isart, Oriol; Juan, Mathieu L.; Quidant, Romain; Cirac, J. Ignacio

    2010-03-01

    The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deléglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6 Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schrödinger's cat 'gedanken' paradigm (Schrödinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.

  3. X-ray optics simulation using Gaussian superposition technique

    SciTech Connect

    Idir, M.; Cywiak, M.; Morales, A. and Modi, M.H.

    2011-09-15

    We present an efficient method to perform x-ray optics simulation with high or partially coherent x-ray sources using the Gaussian superposition technique. In a previous paper, we have demonstrated that full characterization of optical systems, diffractive and geometric, is possible by using the Fresnel Gaussian Shape Invariant (FGSI) previously reported in the literature. The complex amplitude distribution in the object plane is represented by a linear superposition of complex Gaussian wavelets and then propagated through the optical system by means of the referred Gaussian invariant. This allows ray tracing through the optical system and, at the same time, calculation with high precision of the complex wave-amplitude distribution at any plane of observation. The technique can be applied in a wide spectral range where the Fresnel diffraction integral applies, including visible light, x-rays, acoustic waves, etc. We describe the technique and include some computer simulations as illustrative examples for x-ray optical components. We also show that this method can be used to study partial or total coherence illumination problems.
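
    The core computational step, decomposing a field into Gaussian wavelets whose Fresnel propagation is known in closed form, can be sketched as follows for a 1-D on-axis case. This is a generic paraxial illustration under assumed parameters, not the FGSI formalism of the paper: each Gaussian of variance sigma^2 propagates analytically to one of complex variance sigma^2 + i z / k, so no numerical diffraction integral is needed.

      import numpy as np

      def propagate_gaussian_sum(amps, centers, sigma, x, z, wavelength):
          # Fresnel-propagate a 1-D field written as a superposition of identical
          # Gaussian wavelets a_j * exp(-(x - x_j)^2 / (2 sigma^2)).
          k = 2 * np.pi / wavelength
          s2 = sigma**2 + 1j * z / k            # complex "variance" after distance z
          pref = np.sqrt(sigma**2 / s2)         # amplitude factor for each wavelet
          field = np.zeros_like(x, dtype=complex)
          for a_j, x_j in zip(amps, centers):
              field += a_j * pref * np.exp(-(x - x_j)**2 / (2 * s2))
          return field

      # Example: a slit-like aperture approximated by 21 overlapping wavelets
      x = np.linspace(-200e-6, 200e-6, 2001)
      centers = np.linspace(-50e-6, 50e-6, 21)
      out = propagate_gaussian_sum(np.ones(21), centers, 5e-6, x, 0.5, 1e-10)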

  4. X-ray optics simulation using Gaussian superposition technique.

    PubMed

    Idir, Mourad; Cywiak, Moisés; Morales, Arquímedes; Modi, Mohammed H

    2011-09-26

    We present an efficient method to perform x-ray optics simulation with high or partially coherent x-ray sources using the Gaussian superposition technique. In a previous paper, we have demonstrated that full characterization of optical systems, diffractive and geometric, is possible by using the Fresnel Gaussian Shape Invariant (FGSI) previously reported in the literature. The complex amplitude distribution in the object plane is represented by a linear superposition of complex Gaussian wavelets and then propagated through the optical system by means of the referred Gaussian invariant. This allows ray tracing through the optical system and, at the same time, calculation with high precision of the complex wave-amplitude distribution at any plane of observation. The technique can be applied in a wide spectral range where the Fresnel diffraction integral applies, including visible light, x-rays, acoustic waves, etc. We describe the technique and include some computer simulations as illustrative examples for x-ray optical components. We also show that this method can be used to study partial or total coherence illumination problems.

  5. Laser superposition in multi-pass amplification process

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Liu, Lan-Qin; Wang, Wen-Yi; Huang, Wan-Qing; Geng, Yuan-Chao

    2015-02-01

    A physical model was established to describe pulse superposition in the multi-pass amplification process, which occurs when the pulse is reflected from the cavity mirror and the front and the end of the pulse encounter each other. Theoretical analysis indicates that pulse superposition consumes more of the inversion population than is consumed without superposition. A standing-wave field is formed when the front and the end of the pulse overlap coherently, and the inversion population density undergoes spatial hole-burning by this standing-wave field. The pulse gain and the pulse itself are affected by the superposition. Based on this physical model, three conditions were compared: no superposition, coherent superposition, and incoherent superposition. This study provides guidance for high-power solid-state laser design.

  6. Maximum predictive power and the superposition principle

    NASA Technical Reports Server (NTRS)

    Summhammer, Johann

    1994-01-01

    In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept the accuracy of a prediction must only depend on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that for a probability which is a function of two other probabilities maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.

  7. On Kolmogorov's superpositions and Boolean functions

    SciTech Connect

    Beiu, V.

    1998-12-31

    The paper overviews results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on an explicit numerical (i.e., constructive) algorithm for Kolmogorov's superpositions, the authors show that, for obtaining minimum-size neural networks implementing any Boolean function, the activation function of the neurons is the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size in the worst case, it follows that size-optimal solutions for implementing arbitrary Boolean functions require analog circuitry. The paper ends with conclusions and several comments on the required precision.

  8. Design of artificial spherical superposition compound eye

    NASA Astrophysics Data System (ADS)

    Cao, Zhaolou; Zhai, Chunjie; Wang, Keyi

    2015-12-01

    In this research, design of artificial spherical superposition compound eye is presented. The imaging system consists of three layers of lens arrays. In each channel, two lenses are designed to control the angular magnification and a field lens is added to improve the image quality and extend the field of view. Aspherical surfaces are introduced to improve the image quality. Ray tracing results demonstrate that the light from the same object point is focused at the same imaging point through different channels. Therefore the system has much higher energy efficiency than conventional spherical apposition compound eye.

  9. About simple nonlinear and linear superpositions of special exact solutions of Veselov-Novikov equation

    SciTech Connect

    Dubrovsky, V. G.; Topovsky, A. V.

    2013-03-15

    New exact solutions of the Veselov-Novikov (VN) equation, nonstationary and stationary, in the form of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, ..., N, are constructed via the Zakharov-Manakov ∂̄-dressing method. Simple nonlinear superpositions are represented, up to a constant, by the sums of solutions u^(n) and are calculated by ∂̄-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. It is remarkable that in the zero-energy limit the simple nonlinear superpositions convert to linear ones in the form of sums of the special solutions u^(n). It is shown that the sums u = u^(k_1) + ... + u^(k_m), with 1 <= k_1 < k_2 < ... < k_m <= N, over arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction, these exact solutions also represent new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  10. Use of the modal superposition technique for piping system blowdown analyses

    SciTech Connect

    Ware, A.G.; Macek, R.W.

    1983-01-01

    A standard method of solving for the seismic response of piping systems is the modal superposition technique. Only a limited number of structural modes are considered (typically those up to 33 Hz in the U.S.), since the effect on the calculated response due to higher modes is generally small, and the method can result in considerable computer cost savings over the direct integration method. The modal superposition technique has also been applied to piping response problems in which the forcing functions are due to fluid excitation. Application of the technique to this case is somewhat more difficult, because a well defined cutoff frequency for determining structural modes to be included has not been established. This paper outlines a method for higher mode corrections, and suggests methods to determine suitable cutoff frequencies for piping system blowdown analyses. A numerical example illustrates how uncorrected modal superposition results can produce erroneous stress results.
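
    A minimal sketch of truncated modal superposition with an explicit cutoff frequency, for a harmonically forced structure. The mode shapes, damping ratios, and cutoff are assumed inputs; a blowdown analysis would additionally use the fluid forcing functions and the higher-mode correction discussed above.

      import numpy as np

      def modal_response(phi, omega_n, zeta, force, freq_hz, cutoff_hz):
          # phi: (n_dof, n_modes) mass-normalized mode shapes
          # omega_n: (n_modes,) natural circular frequencies [rad/s]
          # zeta: (n_modes,) modal damping ratios
          # force: (n_dof,) amplitude of the harmonic load
          # Returns the complex amplitude of the steady-state response,
          # superposing only the modes below the cutoff frequency.
          w = 2 * np.pi * freq_hz
          keep = omega_n <= 2 * np.pi * cutoff_hz       # mode truncation
          f_modal = phi.T @ force                       # modal forces
          q = np.zeros(len(omega_n), dtype=complex)
          denom = omega_n[keep]**2 - w**2 + 2j * zeta[keep] * omega_n[keep] * w
          q[keep] = f_modal[keep] / denom               # modal coordinates
          return phi @ q                                # superposition of retained modes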

  11. Use of the modal superposition technique for piping system blowdown analyses. [PWR; BWR

    SciTech Connect

    Ware, A.G.; Macek, R.W.

    1983-01-01

    A standard method of solving for the seismic response of piping systems is the modal superposition technique. Only a limited number of structural modes are considered (typically those up to 33 Hz in the US), since the effect on the calculated response due to higher modes is generally small, and the method can result in considerable computer cost savings over the direct integration method. The modal superposition technique has also been applied to piping response problems in which the forcing functions are due to fluid excitation. Application of the technique to this case is somewhat more difficult, because a well defined cutoff frequency for determining structural modes to be included has not been established. This paper outlines a method for higher mode corrections, and suggests methods to determine suitable cutoff frequencies for piping system blowdown analyses. A numerical example illustrates how uncorrected modal superposition results can produce erroneous stress results.

  12. Transient Response of Shells of Revolution by Direct Integration and Modal Superposition Methods

    NASA Technical Reports Server (NTRS)

    Stephens, W. B.; Adelman, H. M.

    1974-01-01

    The results of an analytical effort to obtain and evaluate transient response data for a cylindrical and a conical shell using two different approaches, direct integration and modal superposition, are described. The inclusion of nonlinear terms is more important than the inclusion of secondary linear effects (transverse shear deformation and rotary inertia), although there are thin-shell structures where these secondary effects are important. The advantages of the direct integration approach are that geometric nonlinear and secondary effects are easy to include and that high-frequency response may be calculated; in comparison with the modal superposition technique, the computer storage requirements are also smaller. The advantages of the modal superposition approach are that the solution is independent of the previous time history and that, once the modal data are obtained, the response for repeated cases may be computed efficiently. Also, any admissible set of initial conditions can be applied.

  13. Simulating images captured by superposition lens cameras

    NASA Astrophysics Data System (ADS)

    Thangarajan, Ashok Samraj; Kakarala, Ramakrishna

    2011-03-01

    As the demand for reduction in the thickness of cameras rises, so too does the interest in thinner lens designs. One such radical approach toward developing a thin lens is obtained from nature's superposition principle as used in the eyes of many insects. But generally the images obtained from those lenses are fuzzy, and require reconstruction algorithms to complete the imaging process. A hurdle to developing such algorithms is that the existing literature does not provide realistic test images, aside from using commercial ray-tracing software which is costly. A solution for that problem is presented in this paper. Here a Gabor Super Lens (GSL), which is based on the superposition principle, is simulated using the public-domain ray-tracing software POV-Ray. The image obtained is of a grating surface as viewed through an actual GSL, which can be used to test reconstruction algorithms. The large computational time in rendering such images requires further optimization, and methods to do so are discussed.

  14. A linear algebraic nonlinear superposition formula

    NASA Astrophysics Data System (ADS)

    Gordoa, Pilar R.; Conde, Juan M.

    2002-04-01

    The Darboux transformation provides an iterative approach to the generation of exact solutions for an integrable system. This process can be simplified using the Bäcklund transformation and Bianchi's theorem of permutability; in this way we construct a nonlinear superposition formula, that is, an equation relating a new solution to three previous solutions. In general this equation will be a differential equation; for some examples, such as the Korteweg-de Vries equation, it is a linear algebraic equation. This last is what happens also in the case of the system discussed in this Letter. The linear algebraic nonlinear superposition formula obtained here is a new result. As an example, we use it to construct the two soliton solution, as well as special cases of this last which give rise to solutions exhibiting combinations of fission and fusion. Solutions exhibiting repeated processes of fission and fusion are new phenomena within the area of soliton equations. We also consider obtaining solutions using a symmetry approach; in this way we obtain rational solutions and also the one soliton solution.

  15. Simplified Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.

    1986-01-01

    Some complicated intermediate steps shortened or eliminated. Decoding of convolutional error-correcting digital codes simplified by new error-trellis syndrome technique. In new technique, syndrome vector not computed. Instead, advantage taken of newly derived mathematical identities that simplify decision tree, folding it back on itself into form called "error trellis." This trellis is graph of all path solutions of syndrome equations. Each path through trellis corresponds to specific set of decisions as to received digits. Existing decoding algorithms combined with new mathematical identities reduce number of combinations of errors considered and enable computation of correction vector directly from data and check bits as received.

  16. Subfemtosecond steering of hydrocarbon deprotonation through superposition of vibrational modes.

    PubMed

    Alnaser, A S; Kübel, M; Siemering, R; Bergues, B; Kling, Nora G; Betsch, K J; Deng, Y; Schmidt, J; Alahmed, Z A; Azzeer, A M; Ullrich, J; Ben-Itzhak, I; Moshammer, R; Kleineberg, U; Krausz, F; de Vivie-Riedle, R; Kling, M F

    2014-05-08

    Subfemtosecond control of the breaking and making of chemical bonds in polyatomic molecules is poised to open new pathways for the laser-driven synthesis of chemical products. The break-up of the C-H bond in hydrocarbons is a ubiquitous process during laser-induced dissociation. While the yield of the deprotonation of hydrocarbons has been successfully manipulated in recent studies, full control of the reaction would also require directional control (that is, which C-H bond is broken). Here, we demonstrate steering of deprotonation from symmetric acetylene molecules on subfemtosecond timescales before the break-up of the molecular dication. On the basis of quantum mechanical calculations, the experimental results are interpreted in terms of a novel subfemtosecond control mechanism involving non-resonant excitation and superposition of vibrational degrees of freedom. This mechanism permits control over the directionality of chemical reactions via vibrational excitation on timescales defined by the subcycle evolution of the laser waveform.

  17. slate: A method for the superposition of flexible ligands

    NASA Astrophysics Data System (ADS)

    Mills, J. E. J.; de Esch, I. J. P.; Perkins, T. D. J.; Dean, P. M.

    2001-01-01

    A novel program for the superposition of flexible molecules, slate, is presented. It uses simulated annealing to minimise the difference between the distance matrices calculated from the hydrogen-bonding and aromatic-ring properties of two ligands. A method for generating a molecular stack using multiple pairwise matches is illustrated. These stacks are used by the program doh to predict the relative positions of receptor atoms that could form hydrogen bonds to two or more ligands in the dataset. The methodology has been applied to ligands binding to dihydrofolate reductase, thermolysin, H3 histamine receptors, α2 adrenoceptors and 5-HT1D receptors. When there are sufficient numbers and diversity of molecules in the dataset, the prediction of receptor-atom positions is applicable to compound design.

  18. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
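
    As a back-of-the-envelope version of the complexity measure used above (trellis edges per encoded bit), a conventional trellis for an (n, k) convolutional code with total encoder memory m has 2^m states, 2^k branches leaving each state, and n encoded bits per section. The sketch below gives the conventional count only; the minimal trellises constructed in the article, e.g. for punctured codes, can be smaller.

      def conventional_trellis_edges_per_bit(k, n, m):
          # 2**m states, 2**k branches per state, n encoded bits per section.
          return (2**m * 2**k) / n

      # Example: a rate-1/2, memory-6 code -> 2**7 / 2 = 64 edges per encoded bit
      print(conventional_trellis_edges_per_bit(k=1, n=2, m=6))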

  19. Convolution-deconvolution in DIGES

    SciTech Connect

    Philippacopoulos, A.J.; Simos, N.

    1995-05-01

    Convolution and deconvolution operations are by all means a very important aspect of SSI analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying rock is treated by DIGES in a fashion similar to that of available codes, e.g., CARES and SHAKE. For certain configurations, however, there is no need to perform such analyses, since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention, since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, the corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra, and (d) cross-spectral densities.
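
    A minimal sketch of 1-D frequency-domain deconvolution of the kind described: divide the spectrum of a surface record by a layer transfer function H(f) to recover the base motion (convolution is the same operation with multiplication). The transfer function and regularization threshold are placeholders, not the closed-form DIGES solutions.

      import numpy as np

      def deconvolve(surface_acc, dt, transfer_fn, eps=1e-3):
          # transfer_fn(f) -> complex H(f); placeholder for a closed-form
          # halfspace or layered-halfspace transfer function.
          n = len(surface_acc)
          freqs = np.fft.rfftfreq(n, dt)
          spectrum = np.fft.rfft(surface_acc)
          H = transfer_fn(freqs)
          H = np.where(np.abs(H) < eps, eps, H)   # avoid division by near-zero
          return np.fft.irfft(spectrum / H, n)

      # Convolution (base motion -> surface motion) is the forward analogue:
      # surface = np.fft.irfft(np.fft.rfft(base) * H, n)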

  20. Modified cubic convolution resampling for Landsat

    NASA Technical Reports Server (NTRS)

    Prakash, A.; Mckee, B.

    1985-01-01

    An overview is given of the Landsat Thematic Mapper resampling technique, including a modification of the well-known cubic convolution interpolator used to provide geometric correction for TM data. A post-launch study has shown that the modified cubic convolution interpolator can selectively enhance or suppress frequency bands in the output image. This selectivity is demonstrated on TM Band 3 imagery.
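
    For reference, the standard (unmodified) cubic convolution interpolation kernel of Keys, with the usual parameter a = -0.5, is sketched below; the Landsat TM modification referred to above alters the kernel's frequency response, and its exact form is not reproduced here.

      import numpy as np

      def keys_kernel(s, a=-0.5):
          # Standard cubic convolution kernel (Keys, 1981).
          s = np.abs(np.asarray(s, dtype=float))
          out = np.zeros_like(s)
          m1 = s <= 1
          m2 = (s > 1) & (s < 2)
          out[m1] = (a + 2) * s[m1]**3 - (a + 3) * s[m1]**2 + 1
          out[m2] = a * s[m2]**3 - 5 * a * s[m2]**2 + 8 * a * s[m2] - 4 * a
          return out

      def resample_1d(samples, x):
          # Interpolate uniformly spaced samples at fractional position x
          # using the four nearest neighbours.
          samples = np.asarray(samples, dtype=float)
          i = int(np.floor(x))
          neighbours = np.arange(i - 1, i + 3)
          idx = np.clip(neighbours, 0, len(samples) - 1)
          return float(np.dot(samples[idx], keys_kernel(x - neighbours)))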

  1. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  2. On the superposition principle in interference experiments

    PubMed Central

    Sinha, Aninda; H. Vijay, Aravind; Sinha, Urbasi

    2015-01-01

    The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations as well as the non-relativistic Schrödinger equation. PMID:25973948
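
    For orientation, one commonly quoted form of the Sorkin parameter for a three-slit experiment (slits A, B, C) is given below in LaTeX; conventions for the normalization delta vary in the literature, so this is offered as a reference form rather than the paper's exact definition. Here P_S is the detection probability with only the slits in S open, and epsilon vanishes when interference builds up purely pairwise, i.e., when the superposition principle applies exactly.

      \varepsilon = P_{ABC} - P_{AB} - P_{BC} - P_{AC} + P_{A} + P_{B} + P_{C},
      \qquad
      \kappa = \frac{\varepsilon}{\delta}, \quad
      \delta = |I_{AB}| + |I_{BC}| + |I_{CA}|, \quad
      I_{XY} = P_{XY} - P_{X} - P_{Y}.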

  3. Authentication Protocol using Quantum Superposition States

    SciTech Connect

    Kanamori, Yoshito; Yoo, Seong-Moo; Gregory, Don A.; Sheldon, Frederick T

    2009-01-01

    When it became known that quantum computers could break the RSA (named for its creators: Rivest, Shamir, and Adleman) encryption algorithm in polynomial time, quantum cryptography began to be actively studied. Other classical cryptographic algorithms are only secure as long as malicious users do not have sufficient computational power to break them within a practical amount of time. Recently, many quantum authentication protocols that share quantum entangled particles between communicators have been proposed, providing unconditional security. An issue caused by sharing quantum entangled particles is that it may not be simple to apply these protocols to authenticate a specific user in a group of many users. An authentication protocol using quantum superposition states instead of quantum entangled particles is proposed. The random number shared between a sender and a receiver can be used for classical encryption after the authentication has succeeded. The proposed protocol can be implemented with the current technologies introduced in this paper.

  4. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  5. Search for optimal distance spectrum convolutional codes

    NASA Technical Reports Server (NTRS)

    Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.

    1993-01-01

    In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of their deep space satellites and probes (e.g. Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes. Thus, to find good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.

  6. Superposition and alignment of labeled point clouds.

    PubMed

    Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke

    2011-01-01

    Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional Euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled point clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.

  7. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    PubMed

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is a normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference, and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we apply to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library.
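
    A minimal NumPy sketch of the mechanism described above: a loss-augmented inference layer producing a structured hinge loss and the (sub)gradient that is backpropagated into the convolutional layers. The enumerable output space and the names used are assumptions for illustration, not the deformable-part-model formulation of the paper.

      import numpy as np

      def structured_hinge(scores, task_loss, y_true):
          # scores[y]: network score for structured output y
          # task_loss[y]: task loss Delta(y, y_true), with Delta(y_true, y_true) = 0
          augmented = scores + task_loss        # loss-augmented scores
          y_hat = int(np.argmax(augmented))     # loss-augmented inference
          loss = max(0.0, augmented[y_hat] - scores[y_true])
          grad = np.zeros_like(scores)          # (sub)gradient w.r.t. the scores
          if loss > 0:
              grad[y_hat] += 1.0                # push down the violating output
              grad[y_true] -= 1.0               # push up the ground truth
          return loss, grad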

  8. Two-dimensional cubic convolution.

    PubMed

    Reichenbach, Stephen E; Geng, Frank

    2003-01-01

    The paper develops two-dimensional (2D), nonseparable, piecewise cubic convolution (PCC) for image interpolation. Traditionally, PCC has been implemented based on a one-dimensional (1D) derivation with a separable generalization to two dimensions. However, typical scenes and imaging systems are not separable, so the traditional approach is suboptimal. We develop a closed-form derivation for a two-parameter, 2D PCC kernel with support [-2,2] x [-2,2] that is constrained for continuity, smoothness, symmetry, and flat-field response. Our analyses, using several image models, including Markov random fields, demonstrate that the 2D PCC yields small improvements in interpolation fidelity over the traditional, separable approach. The constraints on the derivation can be relaxed to provide greater flexibility and performance.

  9. Multichannel Polarization-Controllable Superpositions of Orbital Angular Momentum States.

    PubMed

    Yue, Fuyong; Wen, Dandan; Zhang, Chunmei; Gerardot, Brian D; Wang, Wei; Zhang, Shuang; Chen, Xianzhong

    2017-04-01

    A facile metasurface approach is shown to realize polarization-controllable multichannel superpositions of orbital angular momentum (OAM) states with various topological charges. By manipulating the polarization state of the incident light, four kinds of superpositions of OAM states are realized using a single metasurface consisting of space-variant arrays of gold nanoantennas.

  10. Nonclassical Properties of Q-Deformed Superposition Light Field State

    NASA Technical Reports Server (NTRS)

    Ren, Min; Shenggui, Wang; Ma, Aiqun; Jiang, Zhuohong

    1996-01-01

    In this paper, the squeezing effect, the bunching effect, and the anti-bunching effect of a superposition light field state involving the q-deformed vacuum state and the q-Glauber coherent state are studied, and the dependence of these effects on the controllable q-parameter of the q-deformed superposition light field state is obtained.

  11. Decoherence of quantum superpositions through coupling to engineered reservoirs

    PubMed

    Myatt; King; Turchette; Sackett; Kielpinski; Itano; Monroe; Wineland

    2000-01-20

    The theory of quantum mechanics applies to closed systems. In such ideal situations, a single atom can, for example, exist simultaneously in a superposition of two different spatial locations. In contrast, real systems always interact with their environment, with the consequence that macroscopic quantum superpositions (as illustrated by the 'Schrodinger's cat' thought-experiment) are not observed. Moreover, macroscopic superpositions decay so quickly that even the dynamics of decoherence cannot be observed. However, mesoscopic systems offer the possibility of observing the decoherence of such quantum superpositions. Here we present measurements of the decoherence of superposed motional states of a single trapped atom. Decoherence is induced by coupling the atom to engineered reservoirs, in which the coupling and state of the environment are controllable. We perform three experiments, finding that the decoherence rate scales with the square of a quantity describing the amplitude of the superposition state.

  12. Macroscopic superpositions and gravimetry with quantum magnetomechanics

    NASA Astrophysics Data System (ADS)

    Johnsson, Mattias T.; Brennen, Gavin K.; Twamley, Jason

    2016-11-01

    Precision measurements of gravity can provide tests of fundamental physics and are of broad practical interest for metrology. We propose a scheme for absolute gravimetry using a quantum magnetomechanical system consisting of a magnetically trapped superconducting resonator whose motion is controlled and measured by a nearby RF-SQUID or flux qubit. By driving the massive mechanical resonator into a macroscopic superposition of two different heights, we predict that our interferometry protocol could, subject to systematic errors, achieve a gravimetric sensitivity of Δg/g ~ 2.2 × 10^-10 Hz^-1/2, with a spatial resolution of a few nanometres. This sensitivity and spatial resolution exceed the precision of current state-of-the-art atom-interferometric and corner-cube gravimeters by more than an order of magnitude, and unlike classical superconducting interferometers the scheme produces an absolute rather than relative measurement of gravity. In addition, our scheme takes measurements at ~10 kHz, a region where the ambient vibrational noise spectrum is heavily suppressed compared with the ~10 Hz region relevant for current cold atom gravimeters.

  13. An annular superposition integral for axisymmetric radiators

    PubMed Central

    Kelly, James F.; McGough, Robert J.

    2007-01-01

    A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a “smooth piston” function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity. PMID:17348500

  14. Macroscopic superpositions and gravimetry with quantum magnetomechanics

    PubMed Central

    Johnsson, Mattias T.; Brennen, Gavin K.; Twamley, Jason

    2016-01-01

    Precision measurements of gravity can provide tests of fundamental physics and are of broad practical interest for metrology. We propose a scheme for absolute gravimetry using a quantum magnetomechanical system consisting of a magnetically trapped superconducting resonator whose motion is controlled and measured by a nearby RF-SQUID or flux qubit. By driving the massive mechanical resonator into a macroscopic superposition of two different heights, we predict that our interferometry protocol could, subject to systematic errors, achieve a gravimetric sensitivity of Δg/g ~ 2.2 × 10^−10 Hz^−1/2, with a spatial resolution of a few nanometres. This sensitivity and spatial resolution exceed the precision of current state-of-the-art atom-interferometric and corner-cube gravimeters by more than an order of magnitude, and unlike classical superconducting interferometers the scheme produces an absolute rather than relative measurement of gravity. In addition, our scheme takes measurements at ~10 kHz, a region where the ambient vibrational noise spectrum is heavily suppressed compared with the ~10 Hz region relevant for current cold atom gravimeters. PMID:27869142

  15. Controlling coherent state superpositions with superconducting circuits

    NASA Astrophysics Data System (ADS)

    Vlastakis, Brian Michael

    Quantum computation requires a large yet controllable Hilbert space. While many implementations use discrete quantum variables such as the energy states of a two-level system to encode quantum information, continuous variables could allow access to a larger computational space while minimizing the amount of required hardware. With a toolset of conditional qubit-photon logic, we encode quantum information into the amplitude and phase of coherent state superpositions in a resonator, also known as Schrödinger cat states. We achieve this using a superconducting transmon qubit with a strong off-resonant coupling to a waveguide cavity. This dispersive interaction is much greater than decoherence rates and higher-order nonlinearities and therefore allows for simultaneous control of over one hundred photons. Furthermore, we combine this experiment with fast, high-fidelity qubit state readout to perform composite qubit-cavity state tomography and detect entanglement between a physical qubit and a cat-state encoded qubit. These results have promising applications for redundant encoding in a cavity state and ultimately quantum error correction with superconducting circuits.

  16. NRZ Data Asymmetry Corrector and Convolutional Encoder

    NASA Technical Reports Server (NTRS)

    Pfiffner, H. J.

    1983-01-01

    Circuit compensates for timing, amplitude and symmetry perturbations. Data asymmetry corrector and convolutional encoder regenerate data and clock signals in spite of signal variations such as data or clock asymmetry, phase errors, and amplitude variations, then encode data for transmission.

  17. Parallel architectures for computing cyclic convolutions

    NASA Technical Reports Server (NTRS)

    Yeh, C.-S.; Reed, I. S.; Truong, T. K.

    1983-01-01

    In the paper two parallel architectural structures are developed to compute one-dimensional cyclic convolutions. The first structure is based on the Chinese remainder theorem and Kung's pipelined array. The second structure is a direct mapping from the mathematical definition of a cyclic convolution to a computational architecture. To compute a d-point cyclic convolution the first structure needs d/2 inner product cells, while the second structure and Kung's linear array require d cells. However, to compute a cyclic convolution, the second structure requires less time than both the first structure and Kung's linear array. Another application of the second structure is to multiply a Toeplitz matrix by a vector. A table is listed to compare these two structures and Kung's linear array. Both structures are simple and regular and are therefore suitable for VLSI implementation.
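
    The "direct mapping from the mathematical definition" implemented by the second architecture corresponds, in software form, to the sketch below; it also shows the circulant (Toeplitz-type) matrix-vector product mentioned as a further application.

      import numpy as np

      def cyclic_convolution(x, h):
          # d-point cyclic convolution: y[k] = sum_n x[n] * h[(k - n) mod d]
          d = len(x)
          return np.array([sum(x[n] * h[(k - n) % d] for n in range(d))
                           for k in range(d)])

      def circulant_matvec(h, x):
          # Same result expressed as a circulant matrix times a vector.
          d = len(h)
          C = np.array([[h[(k - n) % d] for n in range(d)] for k in range(d)])
          return C @ x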

  19. Utilization of low-redundancy convolutional codes

    NASA Technical Reports Server (NTRS)

    Cain, J. B.

    1973-01-01

    This paper suggests guidelines for the utilization of low-redundancy convolutional codes with emphasis on providing a quick look capability (no decoding) and a moderate amount of coding gain. The performance and implementation complexity of threshold, Viterbi, and sequential decoding when used with low-redundancy, systematic, convolutional codes is discussed. An extensive list of optimum, short constraint length codes is found for use with Viterbi decoding, and several good, long constraint length codes are found for use with sequential decoding.

  20. A note on cubic convolution interpolation.

    PubMed

    Meijering, Erik; Unser, Michael

    2003-01-01

    We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.

  1. Quantic Superpositions and the Geometry of Complex Hilbert Spaces

    NASA Astrophysics Data System (ADS)

    Lehmann, Daniel

    2008-05-01

    The concept of a superposition is a revolutionary novelty introduced by Quantum Mechanics. If a system may be in any one of two pure states x and y, we must consider that it may also be in any one of many superpositions of x and y. An in-depth analysis of superpositions is proposed, in which states are represented by one-dimensional subspaces, not by unit vectors as in Dirac’s notation. Superpositions must be considered when one cannot distinguish between possible paths, i.e., histories, leading to the current state of the system. In such a case the resulting state is some compound of the states that result from each of the possible paths. States can be compounded, i.e., superposed in such a way only if they are not orthogonal. Since different classical states are orthogonal, the claim implies no non-trivial superpositions can be observed in classical systems. The parameter that defines such compounds is a proportion defining the mix of the different states entering the compound. Two quantities, p and θ, both geometrical in nature, relate one-dimensional subspaces in complex Hilbert spaces: the first one is a measure of proximity relating two rays, the second one is an angle relating three rays. The properties of superpositions with respect to those two quantities are studied. The algebraic properties of the operation of superposition are very different from those that govern linear combination of vectors.

  2. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  3. Frequency domain convolution for SCANSAR

    NASA Astrophysics Data System (ADS)

    Cantraine, Guy; Dendal, Didier

    1994-12-01

    Starting from basic signal expressions, the rigorous formulation of frequency domain convolution is demonstrated, in general and impulse terms, including antenna patterns and squint angle. The major differences with conventional algorithms are discussed and theoretical concepts clarified. In a second part, the philosophy of advanced SAR algorithms is compared with that of a SCANSAR observation (several subswaths). It is proved that a general impulse response can always be written as the product of three factors, i.e., a phasor, an antenna coefficient, and a migration expression, and that the details of antenna effects can be ignored in the usual SAR system, but not the range migration (the situation is reversed in a SCANSAR reconstruction scheme). Next, some possible inverse filter kernels (the matched filter, the true inverse filter, ...) for general SAR or SCANSAR mode reconstructions are compared. By adopting a noise-corrupted model of the data, we obtain the corresponding Wiener filter, the major interest of which is to avoid all risk of divergence. Afterwards, the notion of a class of filters is introduced and summarized by a parametric formulation. Lastly, the homogeneity of the reconstruction, with a noncyclic fast Fourier transform deconvolution, is studied by comparing peak responses according to the burst location. The more homogeneous sensitivity of the Wiener filter, with a steeper fall when the target begins to go outside the antenna pattern, is confirmed. A linear optimal merging of adjacent looks (in azimuth) minimizing the rms noise is also presented, as well as considerations about squint ambiguity.
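
    A minimal sketch of the Wiener reconstruction kernel referred to above, showing why it avoids the divergence risk of the true inverse filter: for large SNR it approaches 1/H, while for small SNR it becomes proportional to the matched filter conj(H). The scalar SNR and the per-line processing are simplifying assumptions.

      import numpy as np

      def wiener_kernel(H, snr):
          # conj(H) / (|H|^2 + 1/SNR); never diverges where H is small.
          return np.conj(H) / (np.abs(H)**2 + 1.0 / snr)

      def reconstruct_line(raw_line, H, snr):
          # Frequency-domain azimuth reconstruction of one range line (sketch).
          return np.fft.ifft(np.fft.fft(raw_line) * wiener_kernel(H, snr))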

  4. Nonclassical properties and quantum resources of hierarchical photonic superposition states

    SciTech Connect

    Volkoff, T. J.

    2015-11-15

    We motivate and introduce a class of "hierarchical" quantum superposition states of N coupled quantum oscillators. Unlike other well-known multimode photonic Schrödinger-cat states such as entangled coherent states, the hierarchical superposition states are characterized as two-branch superpositions of tensor products of single-mode Schrödinger-cat states. In addition to analyzing the photon statistics and quasiprobability distributions of prominent examples of these nonclassical states, we consider their usefulness for high-precision quantum metrology of nonlinear optical Hamiltonians and quantify their mode entanglement. We propose two methods for generating hierarchical superpositions in N = 2 coupled microwave cavities, exploiting currently existing quantum optical technology for generating entanglement between spatially separated electromagnetic field modes.

  5. Nonclassical properties and quantum resources of hierarchical photonic superposition states

    NASA Astrophysics Data System (ADS)

    Volkoff, T. J.

    2015-11-01

    We motivate and introduce a class of "hierarchical" quantum superposition states of N coupled quantum oscillators. Unlike other well-known multimode photonic Schrödinger-cat states such as entangled coherent states, the hierarchical superposition states are characterized as two-branch superpositions of tensor products of single-mode Schrödinger-cat states. In addition to analyzing the photon statistics and quasiprobability distributions of prominent examples of these nonclassical states, we consider their usefulness for high-precision quantum metrology of nonlinear optical Hamiltonians and quantify their mode entanglement. We propose two methods for generating hierarchical superpositions in N = 2 coupled microwave cavities, exploiting currently existing quantum optical technology for generating entanglement between spatially separated electromagnetic field modes.

  6. Quantum State Engineering Via Coherent-State Superpositions

    NASA Technical Reports Server (NTRS)

    Janszky, Jozsef; Adam, P.; Szabo, S.; Domokos, P.

    1996-01-01

    The quantum interference between the two parts of the optical Schrödinger-cat state makes it possible to construct a wide class of quantum states via discrete superpositions of coherent states. Even a small number of coherent states can approximate a given quantum state to high accuracy when the distance between the coherent states is optimized; e.g., a nearly perfect Fock state can be constructed by a discrete superposition of n + 1 coherent states lying in the vicinity of the vacuum state.

  7. A fast convolution-based methodology to simulate 2-D/3-D cardiac ultrasound images.

    PubMed

    Gao, Hang; Choi, Hon Fai; Claus, Piet; Boonen, Steven; Jaecques, Siegfried; Van Lenthe, G Harry; Van der Perre, Georges; Lauriks, Walter; D'hooge, Jan

    2009-02-01

    This paper describes a fast convolution-based methodology for simulating ultrasound images in a 2-D/3-D sector format as typically used in cardiac ultrasound. The conventional convolution model is based on the assumption of a space-invariant point spread function (PSF) and typically results in linear images. These characteristics are not representative of cardiac data sets. The spatial impulse response method (IRM) has excellent accuracy in the linear domain; however, calculation time can become an issue when scatterer numbers become significant and when 3-D volumetric data sets need to be computed. As a solution to these problems, the current manuscript proposes a new convolution-based methodology, COLE, in which the data sets are produced by reducing the conventional 2-D/3-D convolution model to multiple 1-D convolutions (one for each image line). As an example, simulated 2-D/3-D phantom images are presented along with their gray scale histogram statistics. In addition, the computation time is recorded and contrasted with a commonly used implementation of IRM (Field II). It is shown that COLE can produce anatomically plausible images with local Rayleigh statistics but at improved calculation time (1200 times faster than the reference method).
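
    A minimal sketch of the per-line 1-D convolution at the heart of the approach: the scatterer amplitudes projected onto one image line are convolved with a 1-D pulse. The Gaussian-modulated cosine pulse and its parameters are assumptions; COLE additionally applies lateral/elevational PSF weighting when projecting scatterers onto each line.

      import numpy as np

      def simulate_rf_line(scatter_amp, fs=50e6, f0=3.5e6, bw=0.6):
          # scatter_amp: axial scatterer amplitudes sampled at fs along one line.
          sigma_t = 1.0 / (np.pi * bw * f0)           # assumed pulse duration
          t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / fs)
          pulse = np.exp(-t**2 / (2 * sigma_t**2)) * np.cos(2 * np.pi * f0 * t)
          return np.convolve(scatter_amp, pulse, mode="same")

      # One RF line per beam direction; envelope detection and scan conversion
      # to the 2-D/3-D sector format follow.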

  8. Investigation of dosimetric differences between the TMR 10 and convolution algorithm for Gamma Knife stereotactic radiosurgery.

    PubMed

    Rojas-Villabona, Alvaro; Kitchen, Neil; Paddick, Ian

    2016-11-01

    Since its inception, doses applied using Gamma Knife Radiosurgery (GKR) have been calculated using a simple TMR algorithm, which assumes the patient's head is of even density, the same as water. This results in a significant approximation of the dose delivered by the Gamma Knife. We investigated how GKR dose calculations varied when using a new convolution algorithm clinically available for GKR planning that takes into account density variations in the head compared with the established calculation algorithm. Fifty-five patients undergoing GKR and harboring 85 lesions were voluntarily and prospectively enrolled into the study. Their clinical treatment plans were created and delivered using TMR 10, but were then recalculated using the density correction algorithm. Dosimetric differences between the planning algorithms were noted. Beam on time (BOT), which is directly proportional to dose, was the main value investigated. Changes of mean and maximum dose to organs at risk (OAR) were also assessed. Phantom studies were performed to investigate the effect of frame and pin materials on dose calculation using the convolution algorithm. Convolution yielded a mean increase in BOT of 7.4% (3.6%-11.6%). However, approximately 1.5% of this amount was due to the head contour being derived from the CT scans, as opposed to measurements using the Skull Scaling Instrument with TMR. Dose to the cochlea calculated with the convolution algorithm was approximately 7% lower than with the TMR 10 algorithm. No significant difference in relative dose distribution was noted and CT artifact typically caused by the stereotactic frame, glue embolization material or different fixation pin materials did not systematically affect convolution isodoses. Nonetheless, substantial error was introduced to the convolution calculation in one target located exactly in the area of major CT artifact caused by a fixation pin. Inhomogeneity correction using the convolution algorithm results in a considerable

  9. Investigation of dosimetric differences between the TMR 10 and convolution algorithm for Gamma Knife stereotactic radiosurgery.

    PubMed

    Rojas-Villabona, Alvaro; Kitchen, Neil; Paddick, Ian

    2016-11-08

    Since its inception, doses applied using Gamma Knife Radiosurgery (GKR) have been calculated using a simple TMR algorithm, which assumes the patient's head is of even density, the same as water. This results in a significant approximation of the dose delivered by the Gamma Knife. We investigated how GKR dose calculations varied when using a new convolution algorithm clinically available for GKR planning that takes into account density variations in the head compared with the established calculation algorithm. Fifty-five patients undergoing GKR and harboring 85 lesions were voluntarily and prospectively enrolled into the study. Their clinical treatment plans were created and delivered using TMR 10, but were then recalculated using the density correction algorithm. Dosimetric differences between the planning algorithms were noted. Beam on time (BOT), which is directly proportional to dose, was the main value investigated. Changes of mean and maximum dose to organs at risk (OAR) were also assessed. Phantom studies were performed to investigate the effect of frame and pin materials on dose calculation using the convolution algorithm. Convolution yielded a mean increase in BOT of 7.4% (3.6%-11.6%). However, approximately 1.5% of this amount was due to the head contour being derived from the CT scans, as opposed to measurements using the Skull Scaling Instrument with TMR. Dose to the cochlea calculated with the convolution algorithm was approximately 7% lower than with the TMR 10 algorithm. No significant difference in relative dose distribution was noted and CT artifact typically caused by the stereotactic frame, glue embolization material or different fixation pin materials did not systematically affect convolution isodoses. Nonetheless, substantial error was introduced to the convolution calculation in one target located exactly in the area of major CT artifact caused by a fixation pin. Inhomogeneity correction using the convolution algorithm results in a

  10. A Superposition Technique for Deriving Photon Scattering Statistics in Plane-Parallel Cloudy Atmospheres

    NASA Technical Reports Server (NTRS)

    Platnick, S.

    1999-01-01

    Photon transport in a multiple scattering medium is critically dependent on scattering statistics, in particular the average number of scatterings. A superposition technique is derived to accurately determine the average number of scatterings encountered by reflected and transmitted photons within arbitrary layers in plane-parallel, vertically inhomogeneous clouds. As expected, the resulting scattering number profiles are highly dependent on cloud particle absorption and solar/viewing geometry. The technique uses efficient adding and doubling radiative transfer procedures, avoiding traditional time-intensive Monte Carlo methods. Derived superposition formulae are applied to a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Cloud remote sensing techniques that use solar reflectance or transmittance measurements generally assume a homogeneous plane-parallel cloud structure. The scales over which this assumption is relevant, in both the vertical and horizontal, can be obtained from the superposition calculations. Though the emphasis is on photon transport in clouds, the derived technique is applicable to any scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers in the atmosphere.

  11. Interpolation by two-dimensional cubic convolution

    NASA Astrophysics Data System (ADS)

    Shi, Jiazheng; Reichenbach, Stephen E.

    2003-08-01

    This paper presents results of image interpolation with an improved method for two-dimensional cubic convolution. Convolution with a piecewise cubic is one of the most popular methods for image reconstruction, but the traditional approach uses a separable two-dimensional convolution kernel that is based on a one-dimensional derivation. The traditional, separable method is sub-optimal for the usual case of non-separable images. The improved method in this paper implements the most general non-separable, two-dimensional, piecewise-cubic interpolator with constraints for symmetry, continuity, and smoothness. The improved method of two-dimensional cubic convolution has three parameters that can be tuned to yield maximal fidelity for specific scene ensembles characterized by their autocorrelation or power spectrum. This paper illustrates examples for several scene models (a circular disk of parametric size, a square pulse with parametric rotation, and a Markov random field with parametric spatial detail) and actual images, presenting the optimal parameters and the resulting fidelity for each model. In these examples, improved two-dimensional cubic convolution is superior to several other popular small-kernel interpolation methods.

  12. Exponential Communication Complexity Advantage from Quantum Superposition of the Direction of Communication.

    PubMed

    Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav

    2016-09-02

    In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.

  13. Uncertainty estimation by convolution using spatial statistics.

    PubMed

    Sanchez-Brea, Luis Miguel; Bernabeu, Eusebio

    2006-10-01

    Kriging has proven to be a useful tool in image processing since it behaves, under regular sampling, as a convolution. Convolution kernels obtained with kriging allow noise filtering and include the effects of the random fluctuations of the experimental data and the resolution of the measuring devices. The uncertainty at each location of the image can also be determined using kriging. However, this procedure is slow since, currently, only matrix methods are available. In this work, we compare the way kriging performs the uncertainty estimation with the standard statistical technique for magnitudes without spatial dependence. As a result, we propose a much faster technique, based on the variogram, to determine the uncertainty using a convolutional procedure. We check the validity of this approach by applying it to one-dimensional images obtained in diffractometry and two-dimensional images obtained by shadow moiré.

  14. Astronomical Image Subtraction by Cross-Convolution

    NASA Astrophysics Data System (ADS)

    Yuan, Fang; Akerlof, Carl W.

    2008-04-01

    In recent years, there has been a proliferation of wide-field sky surveys to search for a variety of transient objects. Using relatively short focal lengths, the optics of these systems produce undersampled stellar images often marred by a variety of aberrations. As participants in such activities, we have developed a new algorithm for image subtraction that no longer requires high-quality reference images for comparison. The computational efficiency is comparable with similar procedures currently in use. The general technique is cross-convolution: two convolution kernels are generated to make a test image and a reference image separately transform to match as closely as possible. In analogy to the optimization technique for generating smoothing splines, the inclusion of an rms width penalty term constrains the diffusion of stellar images. In addition, by evaluating the convolution kernels on uniformly spaced subimages across the total area, these routines can accommodate point-spread functions that vary considerably across the focal plane.

  15. Molecular graph convolutions: moving beyond fingerprints

    PubMed Central

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-01-01

    Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
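
    A minimal sketch of a single graph-convolution step may help fix ideas; it only combines each atom's feature vector with the sum of its bonded neighbours' features and is a simplification, not the paper's architecture (which also uses pair and distance features). All names and sizes below are illustrative.

        import numpy as np

        def graph_conv_layer(H, A, W_self, W_neigh):
            """One simplified graph-convolution step: combine each atom's features
            with the sum of its bonded neighbours' features, then apply ReLU."""
            neigh_sum = A @ H                     # aggregate neighbour features via the adjacency matrix
            return np.maximum(0.0, H @ W_self + neigh_sum @ W_neigh)

        # toy molecule: 4 atoms with 5 features each; A is the symmetric bond adjacency matrix
        rng = np.random.default_rng(1)
        H = rng.normal(size=(4, 5))
        A = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 0], [0, 1, 0, 0]], float)
        H1 = graph_conv_layer(H, A, rng.normal(size=(5, 8)), rng.normal(size=(5, 8)))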

  16. Cyclic Cocycles on Twisted Convolution Algebras

    NASA Astrophysics Data System (ADS)

    Angel, Eitan

    2013-01-01

    We give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. For proper étale groupoids, Tu and Xu (Adv Math 207(2):455-483, 2006) provide a map between the periodic cyclic cohomology of a gerbe-twisted convolution algebra and twisted cohomology groups which is similar to the construction of Mathai and Stevenson (Adv Math 200(2):303-335, 2006). When the groupoid is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial techniques to construct a simplicial curvature 3-form representing the class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial curvature 3-form to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.

  17. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  18. Molecular graph convolutions: moving beyond fingerprints

    NASA Astrophysics Data System (ADS)

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  19. Image reconstruction by parametric cubic convolution

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Schowengerdt, R. A.

    1983-01-01

    Cubic convolution, which has been discussed by Rifman and McKinnon (1974), was originally developed for the reconstruction of Landsat digital images. In the present investigation, the reconstruction properties of the one-parameter family of cubic convolution interpolation functions are considered and the image degradation associated with reasonable choices of this parameter is analyzed. With the aid of an analysis in the frequency domain it is demonstrated that in an image-independent sense there is an optimal value for this parameter. The optimal value is not the standard value commonly referenced in the literature. It is also demonstrated that in an image-dependent sense, cubic convolution can be adapted to any class of images characterized by a common energy spectrum.
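
    For reference, the one-parameter family in question is usually written as the piecewise-cubic kernel sketched below; the values a = -0.5 and a = -1.0 used as defaults are the ones most often quoted in the literature, and the paper's image-independent optimum is not reproduced here.

        import numpy as np

        def cubic_kernel(s, a=-0.5):
            """One-parameter cubic convolution kernel; a = -0.5 and a = -1.0 are the
            commonly quoted choices, while the paper derives a different optimum."""
            s = np.abs(s)
            out = np.zeros_like(s, dtype=float)
            m1 = s <= 1
            m2 = (s > 1) & (s <= 2)
            out[m1] = (a + 2) * s[m1] ** 3 - (a + 3) * s[m1] ** 2 + 1
            out[m2] = a * (s[m2] ** 3 - 5 * s[m2] ** 2 + 8 * s[m2] - 4)
            return out

        def interp1d_cubic(samples, x, a=-0.5):
            """Reconstruct f(x) from unit-spaced samples by cubic convolution."""
            x = np.atleast_1d(x).astype(float)
            k = np.floor(x).astype(int)
            vals = np.zeros_like(x)
            for j in range(-1, 3):                         # the four nearest samples
                idx = np.clip(k + j, 0, len(samples) - 1)
                vals += samples[idx] * cubic_kernel(x - (k + j), a)
            return vals

        samples = np.sin(0.4 * np.arange(20))
        print(interp1d_cubic(samples, [3.25, 7.5], a=-0.5))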

  20. Quantum superposition at the half-metre scale.

    PubMed

    Kovachy, T; Asenbaum, P; Overstreet, C; Donnelly, C A; Dickerson, S M; Sugarbaker, A; Hogan, J M; Kasevich, M A

    2015-12-24

    The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger's cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov-Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity.

  1. Quantum superposition at the half-metre scale

    NASA Astrophysics Data System (ADS)

    Kovachy, T.; Asenbaum, P.; Overstreet, C.; Donnelly, C. A.; Dickerson, S. M.; Sugarbaker, A.; Hogan, J. M.; Kasevich, M. A.

    2015-12-01

    The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger’s cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov-Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity.

  2. Multihop optical network with convolutional coding

    NASA Astrophysics Data System (ADS)

    Chien, Sufong; Takahashi, Kenzo; Prasad Majumder, Satya

    2002-01-01

    We evaluate the bit-error-rate (BER) performance of a multihop optical ShuffleNet with and without convolutional coding. Computed results show that there is considerable improvement in network performance resulting from coding in terms of an increased number of traversable hops for a given transmitter power at a given BER. For a rate-1/2 convolutional code with constraint length K = 9 at BER = 10^-9, the hop gains are found to be 20 hops for hot-potato routing and 7 hops for single-buffer routing at a transmitter power of 0 dBm. We can further increase the hop gain by increasing transmitter power.
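
    A minimal rate-1/2 convolutional encoder in shift-register form is sketched below. The generator polynomials shown (the common (133, 171) octal pair with constraint length K = 7) are illustrative stand-ins; the study uses K = 9, whose polynomials are not reproduced here.

        def conv_encode(bits, g1=0o133, g2=0o171, K=7):
            """Minimal rate-1/2 convolutional encoder (shift-register form).
            The (133, 171) octal, K = 7 polynomials are a textbook choice used
            here only for illustration."""
            state = 0
            out = []
            for b in bits:
                state = ((state << 1) | b) & ((1 << K) - 1)    # shift in the new bit
                out.append(bin(state & g1).count("1") % 2)      # first parity bit
                out.append(bin(state & g2).count("1") % 2)      # second parity bit
            return out

        print(conv_encode([1, 0, 1, 1, 0, 0, 1]))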

  3. Fast convolution algorithms for SAR processing

    NASA Astrophysics Data System (ADS)

    Dall, Jorgen

    Most high resolution SAR processors apply the Fast Fourier Transform (FFT) to implement convolution by a matched filter impulse response. However, a lower computational complexity is attainable with other algorithms which accordingly have the potential of offering faster and/or simpler processors. Thirteen different fast transform and convolution algorithms are presented, and their characteristics are compared with the fundamental requirements imposed on the algorithms by various SAR processing schemes. The most promising algorithm is based on a Fermat Number Transform (FNT). SAR-580 and SEASAT SAR images have been successfully processed with the FNT, and in this connection the range curvature correction, noise properties and processing speed are discussed.
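
    The FFT-based matched-filter convolution that such processors accelerate can be sketched as follows; the chirp replica and all parameter values are illustrative, and the Fermat Number Transform variant (exact integer arithmetic in place of the FFT) is not shown.

        import numpy as np

        def fft_matched_filter(signal, replica):
            """Range compression by fast convolution: multiply spectra, inverse-transform.
            This is the FFT baseline that transform-domain SAR processors rely on."""
            n = len(signal) + len(replica) - 1
            nfft = 1 << (n - 1).bit_length()               # next power of two
            S = np.fft.fft(signal, nfft)
            H = np.fft.fft(np.conj(replica[::-1]), nfft)   # matched filter = time-reversed conjugate
            return np.fft.ifft(S * H)[:n]

        # illustrative linear-FM (chirp) replica buried in a longer echo
        t = np.linspace(0.0, 1e-4, 512)
        chirp = np.exp(1j * np.pi * 4e8 * t**2)
        echo = np.concatenate([np.zeros(200), chirp, np.zeros(300)])
        compressed = fft_matched_filter(echo, chirp)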

  4. FPT Algorithm for Two-Dimensional Cyclic Convolutions

    NASA Technical Reports Server (NTRS)

    Truong, Trieu-Kie; Shao, Howard M.; Pei, D. Y.; Reed, Irving S.

    1987-01-01

    Fast-polynomial-transform (FPT) algorithm computes two-dimensional cyclic convolution of two-dimensional arrays of complex numbers. New algorithm uses cyclic polynomial convolutions of same length. Algorithm regular, modular, and expandable.

  5. Mechanisms of circumferential gyral convolution in primate brains.

    PubMed

    Zhang, Tuo; Razavi, Mir Jalil; Chen, Hanbo; Li, Yujie; Li, Xiao; Li, Longchuan; Guo, Lei; Hu, Xiaoping; Liu, Tianming; Wang, Xianqiao

    2017-06-01

    Mammalian cerebral cortices are characterized by elaborate convolutions. Radial convolutions exhibit homology across primate species and generally are easily identified in individuals of the same species. In contrast, circumferential convolutions vary across species as well as individuals of the same species. However, systematic study of circumferential convolution patterns is lacking. To address this issue, we utilized structural MRI (sMRI) and diffusion MRI (dMRI) data from primate brains. We quantified cortical thickness and circumferential convolutions on gyral banks in relation to axonal pathways and density along the gray matter/white matter boundaries. Based on these observations, we performed a series of computational simulations. Results demonstrated that the interplay of heterogeneous cortex growth and mechanical forces along axons plays a vital role in the regulation of circumferential convolutions. In contrast, gyral geometry controls the complexity of circumferential convolutions. These findings offer insight into the mystery of circumferential convolutions in primate brains.

  6. Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution

    SciTech Connect

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K.

    1999-01-01

    The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry of I-131 in soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations, which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
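
    The core operation, convolving a cumulated-activity map with a dose point kernel via the 3-D DFT, can be sketched as below. The kernel used here is a simple placeholder, not I-131 dose-point-kernel data, and the grid sizes are illustrative.

        import numpy as np

        def dose_by_3d_fft(activity, kernel):
            """Convolve a cumulated-activity map with a dose point kernel using the 3-D DFT.
            Both arrays are zero-padded to avoid circular wrap-around."""
            shape = [a + k - 1 for a, k in zip(activity.shape, kernel.shape)]
            D = np.fft.ifftn(np.fft.fftn(activity, shape) * np.fft.fftn(kernel, shape)).real
            start = [k // 2 for k in kernel.shape]             # crop back to the activity grid
            slices = tuple(slice(s, s + a) for s, a in zip(start, activity.shape))
            return D[slices]

        # toy example: uniform sphere of activity and an isotropic placeholder kernel
        g = np.indices((32, 32, 32)) - 15.5
        activity = (np.sum(g**2, axis=0) < 8**2).astype(float)
        k = np.indices((9, 9, 9)) - 4.0
        kernel = 1.0 / (1.0 + np.sum(k**2, axis=0))            # placeholder, not I-131 data
        dose = dose_by_3d_fft(activity, kernel)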

  7. Fast computation algorithm for the Rayleigh-Sommerfeld diffraction formula using a type of scaled convolution.

    PubMed

    Nascov, Victor; Logofătu, Petre Cătălin

    2009-08-01

    We describe a fast computational algorithm able to evaluate the Rayleigh-Sommerfeld diffraction formula, based on a special formulation of the convolution theorem and the fast Fourier transform. What is new in our approach compared to other algorithms is the use of a more general type of convolution with a scale parameter, which allows for independent sampling intervals in the input and output computation windows. Comparison between the calculations made using our algorithm and direct numeric integration show a very good agreement, while the computation speed is increased by orders of magnitude.
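
    For orientation, a fixed-sampling Rayleigh-Sommerfeld propagation by the ordinary convolution theorem is sketched below, using the commonly quoted far-field kernel h = z exp(ikr) / (i lambda r^2); the paper's scaled convolution, which decouples the input and output sampling intervals, is not reproduced, and all parameter values are illustrative.

        import numpy as np

        def rs_propagate(u0, wavelength, z, dx):
            """Rayleigh-Sommerfeld propagation by fast convolution with equal input and
            output sampling (circular wrap-around is ignored in this sketch; zero
            padding would normally be added)."""
            n = u0.shape[0]
            k = 2 * np.pi / wavelength
            x = (np.arange(n) - n // 2) * dx
            X, Y = np.meshgrid(x, x, indexing="ij")
            r = np.sqrt(X**2 + Y**2 + z**2)
            h = z / (1j * wavelength) * np.exp(1j * k * r) / r**2   # far-field RS kernel
            H = np.fft.fft2(np.fft.ifftshift(h)) * dx * dx
            return np.fft.ifft2(np.fft.fft2(u0) * H)

        # illustrative example: square aperture, 633 nm light, 50 mm propagation
        n, dx = 512, 10e-6
        u0 = np.zeros((n, n), complex)
        u0[n // 2 - 25:n // 2 + 25, n // 2 - 25:n // 2 + 25] = 1.0
        u_z = rs_propagate(u0, 633e-9, 50e-3, dx)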

  8. Digital image correlation based on a fast convolution strategy

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Zhan, Qin; Xiong, Chunyang; Huang, Jianyong

    2017-10-01

    In recent years, the efficiency of digital image correlation (DIC) methods has attracted increasing attention because of its increasing importance for many engineering applications. Based on the classical affine optical flow (AOF) algorithm and the well-established inverse compositional Gauss-Newton algorithm, which is essentially a natural extension of the AOF algorithm under a nonlinear iterative framework, this paper develops a set of fast convolution-based DIC algorithms for high-efficiency subpixel image registration. Using a well-developed fast convolution technique, the set of algorithms establishes a series of global data tables (GDTs) over the digital images, which allows the reduction of the computational complexity of DIC significantly. Using the pre-calculated GDTs, the subpixel registration calculations can be implemented efficiently in a look-up-table fashion. Both numerical simulation and experimental verification indicate that the set of algorithms significantly enhances the computational efficiency of DIC, especially in the case of a dense data sampling for the digital images. Because the GDTs need to be computed only once, the algorithms are also suitable for efficiently coping with image sequences that record the time-varying dynamics of specimen deformations.

  9. [Application of numerical convolution in in vivo/in vitro correlation research].

    PubMed

    Yue, Peng

    2009-01-01

    This paper introduces the concept and principle of in vivo/in vitro correlation (IVIVC) and convolution/deconvolution methods, and elucidates in detail a convolution strategy for calculating the in vivo absorption performance of a pharmaceutical from its pharmacokinetic data in Excel, then applies the results to IVIVC research. First, the pharmacokinetic data were fitted with mathematical software to fill in missing points. Second, the parameters of the optimal input function were determined by a trial-and-error method according to the convolution principle in Excel, under the hypothesis that all input functions follow Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. In the examples, the application of the method is demonstrated in detail, and its simplicity and effectiveness are shown by comparison with the compartment-model and deconvolution methods. It proves to be a powerful tool for IVIVC research.
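
    The convolution step itself is easy to reproduce outside Excel; a minimal sketch (with illustrative Weibull and one-compartment parameters that are assumptions, not values from the paper) convolves the in vivo input rate with a unit impulse response to predict the plasma profile.

        import numpy as np

        def plasma_profile(t, weibull_scale, weibull_shape, F_dose, ke, V):
            """Predicted plasma concentration = (in vivo input rate) * (unit impulse response),
            evaluated by numerical convolution. All parameter values here are illustrative."""
            dt = t[1] - t[0]
            # cumulative fraction absorbed follows a Weibull function; its derivative is the input rate
            absorbed = 1.0 - np.exp(-(t / weibull_scale) ** weibull_shape)
            input_rate = np.gradient(F_dose * absorbed, dt)
            uir = np.exp(-ke * t) / V              # unit impulse response, one-compartment model
            return np.convolve(input_rate, uir)[: len(t)] * dt

        t = np.arange(0.0, 24.0, 0.05)             # hours
        conc = plasma_profile(t, weibull_scale=2.0, weibull_shape=1.5, F_dose=100.0, ke=0.2, V=30.0)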

  10. Detail displaying difference of the digital holographic reconstructed image between the convolution algorithm and Fresnel algorithm.

    PubMed

    Zhong, Liyun; Li, Hongyan; Tao, Tao; Zhang, Zhun; Lu, Xiaoxu

    2011-11-07

    To reach the limiting resolution of a digital holographic system and improve the displaying quality of the reconstructed image, the subdivision convolution algorithm and the subdivision Fresnel algorithm are presented, respectively. The obtained results show that the lateral size of the reconstructed image obtained by two kinds of subdivision algorithms is the same in the central region of the reconstructed image-plane; moreover, the size of the central region is in proportional to the recording distance. Importantly, in the central region of the reconstructed image-plane, the reconstruction can be performed by the subdivision Fresnel algorithm instead of the subdivision convolution algorithm effectively, and, based on these subdivision approaches, both the displaying quality and the resolution of the reconstructed image can be improved significantly. Furthermore, in the reconstruction of the digital hologram with the large numerical aperture, the computer's memory consumed and the calculating time resulting from the subdivision Fresnel algorithm is significantly less than those from the subdivision convolution algorithm.

  11. Prehension synergies: principle of superposition and hierarchical organization in circular object prehension.

    PubMed

    Shim, Jae Kun; Park, Jaebum

    2007-07-01

    This study tests the following hypotheses in multi-digit circular object prehension: the principle of superposition (i.e., a complex action can be decomposed into independently controlled sub-actions) and the hierarchical organization (i.e., individual fingers at the lower level are coordinated to generate a desired task-specific outcome of the virtual finger at the higher level). Subjects performed 25 trials while statically holding a circular handle instrumented with five six-component force/moment sensors under seven external torque conditions. We performed a principal component (PC) analysis on forces and moments of the thumb and virtual finger (VF: an imagined finger producing the same mechanical effects of all finger forces and moments combined) to test the applicability of the principle of superposition in a circular object prehension. The synergy indices, measuring synergic actions of the individual finger (IF) moments for the stabilization of the VF moment, were calculated to test the hierarchical organization. Mixed-effect ANOVAs were used to test the dependent variable differences for different external torque conditions and different fingers at the VF and IF levels. The PC analysis showed that the elemental variables were decoupled into two groups: one group related to grasping stability control (normal force control) and the other group associated with rotational equilibrium control (tangential force control), which supports the principle of superposition. The synergy indices were always positive, suggesting error compensations between IF moments for the VF moment stabilization, which confirms the hierarchical organization of multi-digit prehension.

  12. Computer experiment on superposition of strengthening effects of different particles

    SciTech Connect

    Zhu, A.W.; Csontos, A.; Starke, E.A. Jr.

    1999-04-23

    Particle-hardening materials, particularly high strength aluminum alloys, usually contain two or more types of second-phase particles. While the strengthening effect of mono-dispersed particles has been studied extensively and hence well formulated, a rational and consolidated evaluation of superposed hardening effects of different particle mixtures is still an open problem both experimentally and theoretically. A computer simulation technique is utilized to examine the details of the problem. The technique developed is based on the circle-rolling approach of Morris et al. The strengthening stress τ_p due to the mixture of different particles is determined by examination of a dislocation-slip process through the particles on one slip plane and along one slip direction under the action of an applied shear stress τ. Two kinds of particle mixtures are investigated. One consists of hard or unshearable point-like particles and soft or shearable point-like ones. The other is a mixture of two types of unshearable plate-like particles. The simulation results indicate that the superposition law can be well described by the equation τ^α = n_A^(α/2) τ_A^α + n_B^(α/2) τ_B^α, where n_A and n_B are the density fractions of A- and B-particles, τ_A and τ_B the strengthening stresses due to pure A- and B-particles, and the exponent α varies between 1.0 and 2.0. Application to the spherical precipitates predicts that a bi-modal particle size distribution can give rise to about an 8% increment in strengthening stress with regard to a single size distribution that is normally produced by conventional aging. Calculated values using the simulation method compare favorably with those determined experimentally.
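
    A small numerical illustration of the superposition law quoted above, with purely illustrative stress values and an exponent between the linear and Pythagorean limits:

        def mixed_strength(tau_A, tau_B, n_A, n_B, alpha):
            """Superposition law from the abstract:
            tau**alpha = n_A**(alpha/2) * tau_A**alpha + n_B**(alpha/2) * tau_B**alpha."""
            return (n_A ** (alpha / 2) * tau_A ** alpha + n_B ** (alpha / 2) * tau_B ** alpha) ** (1 / alpha)

        # illustrative numbers: equal density fractions, alpha between 1.0 and 2.0
        print(mixed_strength(tau_A=100.0, tau_B=60.0, n_A=0.5, n_B=0.5, alpha=1.5))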

  13. Towards dropout training for convolutional neural networks.

    PubMed

    Wu, Haibing; Gu, Xiaodong

    2015-11-01

    Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage.
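
    A sketch of test-time probabilistic weighted pooling for a single pooling region follows; the multinomial weights are taken from the usual derivation (the output equals a given activation when every larger unit is dropped and that unit is retained), so this is an illustration of the idea rather than the paper's exact implementation.

        import numpy as np

        def prob_weighted_pool(region, p_retain):
            """Test-time probabilistic weighted pooling for one pooling region.
            With max-pooling dropout, the output equals the i-th smallest activation
            when every larger unit is dropped and that unit survives, so the expected
            output is a multinomial-weighted sum of the sorted activations (sketch)."""
            a = np.sort(np.asarray(region, dtype=float))        # ascending order
            n = len(a)
            q = 1.0 - p_retain
            probs = p_retain * q ** (n - 1 - np.arange(n))      # P(output = a[i]); all-dropped case yields 0
            return np.sum(probs * a)

        print(prob_weighted_pool([0.2, 1.3, 0.7, 0.9], p_retain=0.5))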

  14. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to those codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.

  15. Number-Theoretic Functions via Convolution Rings.

    ERIC Educational Resources Information Center

    Berberian, S. K.

    1992-01-01

    Demonstrates the number-theoretic identity that the Dirichlet convolution of the number-of-divisors function with Euler's totient (the count of positive integers k less than or equal to and relatively prime to n) equals the sum-of-divisors function, using theory developed about multiplicative functions, the units of a convolution ring, and the Möbius function. (MDH)
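
    The identity is easy to check numerically once the ring product is spelled out as a Dirichlet convolution; the short sketch below verifies it for the first couple of hundred integers.

        from math import gcd

        def divisors(n):
            return [d for d in range(1, n + 1) if n % d == 0]

        def tau(n):      # number of divisors of n
            return len(divisors(n))

        def phi(n):      # Euler's totient: count of 1 <= k <= n with gcd(k, n) = 1
            return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

        def sigma(n):    # sum of the divisors of n
            return sum(divisors(n))

        # the ring product is Dirichlet convolution: (f * g)(n) = sum over d | n of f(d) g(n / d)
        for n in range(1, 200):
            assert sum(tau(d) * phi(n // d) for d in divisors(n)) == sigma(n)
        print("tau * phi = sigma verified for n = 1..199")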

  16. Convolutions and Their Applications in Information Science.

    ERIC Educational Resources Information Center

    Rousseau, Ronald

    1998-01-01

    Presents definitions of convolutions, mathematical operations between sequences or between functions, and gives examples of their use in information science. In particular they can be used to explain the decline in the use of older literature (obsolescence) or the influence of publication delays on the aging of scientific literature. (Author/LRW)
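
    As a toy illustration of the obsolescence application (all counts below are invented), the citations received in each calendar year can be modelled as the convolution of the annual publication counts with an aging curve:

        import numpy as np

        # papers published per year and an aging curve giving the expected citations
        # a paper receives k years after publication (both series are illustrative)
        published = np.array([50, 55, 60, 70, 80, 90, 100])
        aging = np.array([0.2, 1.0, 1.5, 1.2, 0.8, 0.5, 0.3, 0.2])

        # citations received in each calendar year are the convolution of the two sequences
        citations_per_year = np.convolve(published, aging)
        print(citations_per_year)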

  17. VLSI Unit for Two-Dimensional Convolutions

    NASA Technical Reports Server (NTRS)

    Liu, K. Y.

    1983-01-01

    Universal logic structure allows same VLSI chip to be used for variety of computational functions required for two-dimensional convolutions. Fast polynomial transform technique is extended into tree computational structure composed of two units: fast polynomial transform (FPT) unit and Chinese remainder theorem (CRT) computational unit.

  18. Robust mesoscopic superposition of strongly correlated ultracold atoms

    SciTech Connect

    Hallwood, David W.; Ernst, Thomas; Brand, Joachim

    2010-12-15

    We propose a scheme to create coherent superpositions of annular flow of strongly interacting bosonic atoms in a one-dimensional ring trap. The nonrotating ground state is coupled to a vortex state with mesoscopic angular momentum by means of a narrow potential barrier and an applied phase that originates from either rotation or a synthetic magnetic field. We show that superposition states in the Tonks-Girardeau regime are robust against single-particle loss due to the effects of strong correlations. The coupling between the mesoscopically distinct states scales much more favorably with particle number than in schemes relying on weak interactions, thus making particle numbers of hundreds or thousands feasible. Coherent oscillations induced by time variation of parameters may serve as a 'smoking gun' signature for detecting superposition states.

  19. Non-coaxial superposition of vector vortex beams.

    PubMed

    Aadhi, A; Vaity, Pravin; Chithrabhanu, P; Reddy, Salla Gangi; Prabakar, Shashi; Singh, R P

    2016-02-10

    Vector vortex beams are classified into four types depending upon spatial variation in their polarization vector. We have generated all four of these types of vector vortex beams by using a modified polarization Sagnac interferometer with a vortex lens. Further, we have studied the non-coaxial superposition of two vector vortex beams. It is observed that the superposition of two vector vortex beams with same polarization singularity leads to a beam with another kind of polarization singularity in their interaction region. The results may be of importance in ultrahigh security of the polarization-encrypted data that utilizes vector vortex beams and multiple optical trapping with non-coaxial superposition of vector vortex beams. We verified our experimental results with theory.

  20. Dissipative Optomechanical Preparation of Macroscopic Quantum Superposition States

    NASA Astrophysics Data System (ADS)

    Abdi, M.; Degenfeld-Schonburg, P.; Sameti, M.; Navarrete-Benlloch, C.; Hartmann, M. J.

    2016-06-01

    The transition from quantum to classical physics remains an intensely debated question even though it has been investigated for more than a century. Further clarifications could be obtained by preparing macroscopic objects in spatial quantum superpositions and proposals for generating such states for nanomechanical devices either in a transient or a probabilistic fashion have been put forward. Here, we introduce a method to deterministically obtain spatial superpositions of arbitrary lifetime via dissipative state preparation. In our approach, we engineer a double-well potential for the motion of the mechanical element and drive it towards the ground state, which shows the desired spatial superposition, via optomechanical sideband cooling. We propose a specific implementation based on a superconducting circuit coupled to the mechanical motion of a lithium-decorated monolayer graphene sheet, introduce a method to verify the mechanical state by coupling it to a superconducting qubit, and discuss its prospects for testing collapse models for the quantum to classical transition.

  1. Optimal control of quantum superpositions in a bosonic Josephson junction

    NASA Astrophysics Data System (ADS)

    Lapert, M.; Ferrini, G.; Sugny, D.

    2012-02-01

    We show how to optimally control the creation of quantum superpositions in a bosonic Josephson junction within the two-site Bose-Hubbard-model framework. Both geometric and purely numerical optimal-control approaches are used, the former providing a generalization of the proposal of Micheli [Phys. Rev. A 67, 013607 (2003)]. While this method is shown not to lead to significant improvements in terms of time of formation and fidelity of the superposition, a numerical optimal-control approach appears more promising, as it allows creation of an almost perfect superposition, within a time short compared to other existing protocols. We analyze the robustness of the optimal solution against atom-number variations. Finally, we discuss the extent to which these optimal solutions could be implemented with state-of-the-art technology.

  2. Dimensional limits for arthropod eyes with superposition optics.

    PubMed

    Meyer-Rochow, Victor Benno; Gál, József

    2004-01-01

    An essential feature of the superposition type of compound eye is the presence of a wide zone, which is transparent and devoid of pigment and interposed between the distal array of dioptric elements and the proximally placed photoreceptive layer. Parallel rays, collected by many lenses, must (through reflection or refraction) cross this transparent clear-zone in such a way that they become focused on one receptor. Superposition depends mostly on diameter and curvature of the cornea, size and shape of the crystalline cone, lens cylinder properties of cornea and cone, dimensions of the receptor cells, and width of the clear-zone. We examined the role of the latter by geometrical, geometric-optical, and anatomical measurements and concluded that a minimal size exists, below which effective superposition can no longer occur. For an eye of a given size, it is not possible to increase the width of the clear-zone cz = d_cz/R_1 and decrease R_2 (i.e., the radius of curvature of the distal retinal surface) and/or c = d_c/R_1 without reaching a limit. In these expressions, cz is the width of the clear-zone d_cz relative to the radius R_1 of the eye, and c is the length of the cornea-cone unit relative to R_1. Our results provide one explanation as to why apposition eyes exist in very small scarabaeid beetles, when generally the taxon Scarabaeoidea is characterized by the presence of superposition eyes. The results may also provide the answer to the puzzle of why juveniles or the young of species, in which the adults possess superposition (=clear-zone) eyes, frequently bear eyes that do not contain a clear zone, but resemble apposition eyes. The eyes of the young and immature specimens may simply be too small to permit superposition to occur.

  3. Superposition of helical beams by using a Michelson interferometer.

    PubMed

    Gao, Chunqing; Qi, Xiaoqing; Liu, Yidong; Weber, Horst

    2010-01-04

    The orbital angular momentum (OAM) of a helical beam is of great interest in high-density optical communication due to its infinite number of eigenstates. In this paper, an experimental setup is realized for information encoding and decoding on OAM eigenstates. A hologram designed by the iterative method is used to generate the helical beams, and a Michelson interferometer with two Porro prisms is used for the superposition of two helical beams. The experimental results of the collinear superposition of helical beams and the detection of their OAM eigenstates are presented.

  4. Convolution theorems: partitioning the space of integral transforms

    NASA Astrophysics Data System (ADS)

    Lindsey, Alan R.; Suter, Bruce W.

    1999-03-01

    Investigating a number of different integral transforms uncovers distinct patterns in the type of translation convolution theorems afforded by each. It is shown that transforms based on separable kernels (i.e., Fourier, Laplace, and their relatives) have a form of the convolution theorem providing for a transform-domain product of the convolved functions. However, transforms based on kernels not separable in the function and transform variables mandate a convolution theorem of a different type; namely, in the transform domain the convolution becomes another convolution: one function convolved with the transform of the other.
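
    In symbols, the familiar separable-kernel case and a schematic restatement of the second type read as follows (the second line paraphrases the abstract's claim rather than deriving it):

        F{f * g}(u) = F{f}(u) · F{g}(u),   where (f * g)(t) = ∫ f(τ) g(t − τ) dτ,

    and, for a transform T whose kernel does not separate in the function and transform variables,

        T{f * g} = T{f} * g,   i.e., a convolution of one function with the transform of the other.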

  5. Effectiveness of Convolutional Code in Multipath Underwater Acoustic Channel

    NASA Astrophysics Data System (ADS)

    Park, Jihyun; Seo, Chulwon; Park, Kyu-Chil; Yoon, Jong Rak

    2013-07-01

    Forward error correction (FEC) is achieved by increasing the redundancy of the transmitted information. Convolutional coding with Viterbi decoding is a typical FEC technique for channels corrupted by additive white Gaussian noise, but its effectiveness is questioned in multipath frequency-selective fading channels. In this paper, how a convolutional code performs in an underwater multipath channel is examined. Bit error rates (BER) with and without a rate-1/2 convolutional code are analyzed as a function of channel bandwidth, which serves as the frequency-selectivity parameter. It is found that the convolutional code performs well in a non-selective channel and is also effective in a selective channel.

  6. A Mathematical Motivation for Complex-Valued Convolutional Networks.

    PubMed

    Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur

    2016-05-01

    A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
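
    The windowed-spectrum correspondence is easy to demonstrate numerically: convolving a real signal with a Hann-windowed complex exponential and taking the modulus reproduces the magnitude of the corresponding short-time Fourier coefficient. The filter length, frequency bin, and signal below are all illustrative choices, standing in for the data-driven filters of a trained network.

        import numpy as np

        # filter = window times a complex exponential at frequency bin k0
        n_filt, k0 = 64, 5
        w = np.hanning(n_filt)
        filt = w * np.exp(2j * np.pi * k0 * np.arange(n_filt) / n_filt)

        # test signal: real-valued random vector
        rng = np.random.default_rng(0)
        x = rng.normal(size=1024)

        # steps (1)-(2) of the composition: convolution with the complex filter, then modulus
        conv_abs = np.abs(np.convolve(x, filt[::-1], mode="valid"))

        # reference: magnitude of windowed DFT coefficient k0 over sliding frames
        frames = np.lib.stride_tricks.sliding_window_view(x, n_filt)
        stft_mag = np.abs(frames @ (w * np.exp(-2j * np.pi * k0 * np.arange(n_filt) / n_filt)))

        print(np.allclose(conv_abs, stft_mag))   # True: the two magnitudes coincide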

  7. Real-time feedback control of a mesoscopic superposition

    SciTech Connect

    Jacobs, Kurt; Finn, Justin; Vinjanampathy, Sai

    2011-04-15

    We show that continuous real-time feedback can be used to track, control, and protect a mesoscopic superposition of two spatially separated wave packets. The feedback protocol is enabled by an approximate state estimator and requires two continuous measurements, performed simultaneously. For nanomechanical and superconducting resonators, both measurements can be implemented by coupling the resonators to superconducting qubits.

  8. Measuring orbital angular momentum superpositions of light by mode transformation.

    PubMed

    Berkhout, Gregorius C G; Lavery, Martin P J; Padgett, Miles J; Beijersbergen, Marco W

    2011-05-15

    We recently reported on a method for measuring orbital angular momentum (OAM) states of light based on the transformation of helically phased beams to tilted plane waves [Phys. Rev. Lett.105, 153601 (2010)]. Here we consider the performance of such a system for superpositions of OAM states by measuring the modal content of noninteger OAM states and beams produced by a Heaviside phase plate.

  9. Generation of macroscopic superposition states with small nonlinearity

    SciTech Connect

    Jeong, H.; Ralph, T.C.; Kim, M. S.; Ham, B.S.

    2004-12-01

    We suggest a scheme to generate a macroscopic superposition state ('Schroedinger cat state') of a free-propagating optical field using a beam splitter, homodyne measurement, and a very small Kerr nonlinear effect. Our scheme makes it possible to reduce considerably the required nonlinear effect to generate an optical cat state using simple and efficient optical elements.

  10. A convolutional neural network neutrino event classifier

    NASA Astrophysics Data System (ADS)

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  11. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.

  12. A Construction of MDS Quantum Convolutional Codes

    NASA Astrophysics Data System (ADS)

    Zhang, Guanghui; Chen, Bocong; Li, Liangchen

    2015-09-01

    In this paper, two new families of MDS quantum convolutional codes are constructed. The first one can be regarded as a generalization of [36, Theorem 6.5], in the sense that we do not assume that q ≡ 1 (mod 4). More specifically, we obtain two classes of MDS quantum convolutional codes with parameters: (i) [(q²+1, q²-4i+3, 1; 2, 2i+2)]_q, where q ≥ 5 is an odd prime power and 2 ≤ i ≤ (q-1)/2; (ii) [...], where q is an odd prime power of the form q = 10m+3 or 10m+7 (m ≥ 2), and 2 ≤ i ≤ 2m-1.

  13. Deep Learning with Hierarchical Convolutional Factor Analysis

    PubMed Central

    Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence

    2013-01-01

    Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342

  14. Performance of convolutionally coded unbalanced QPSK systems

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Yuen, J. H.

    1980-01-01

    An evaluation is presented of the performance of three representative convolutionally coded unbalanced quadri-phase-shift-keying (UQPSK) systems in the presence of noisy carrier reference and crosstalk. The use of a coded UQPSK system for transmitting two telemetry data streams with different rates and different powers has been proposed for the Venus Orbiting Imaging Radar mission. Analytical expressions for bit error rates in the presence of a noisy carrier phase reference are derived for three representative cases: (1) I and Q channels are coded independently; (2) I channel is coded, Q channel is uncoded; and (3) I and Q channels are coded by a common rate-1/2 code. For rate-1/2 convolutional codes, QPSK modulation can be used to reduce the bandwidth requirement.

  15. Digital Correlation By Optical Convolution/Correlation

    NASA Astrophysics Data System (ADS)

    Trimble, Joel; Casasent, David; Psaltis, Demetri; Caimi, Frank; Carlotto, Mark; Neft, Deborah

    1980-12-01

    Attention is given to various methods by which the accuracy achievable and the dynamic range requirements of an optical computer can be enhanced. A new time position coding acousto-optic technique for optical residue arithmetic processing is presented and experimental demonstration is included. Major attention is given to the implementation of a correlator operating on digital or decimal encoded signals. Using a convolution description of multiplication, we realize such a correlator by optical convolution in one dimension and optical correlation in the other dimension of an optical system. A coherent matched spatial filter system operating on digital encoded signals, a noncoherent processor operating on complex-valued digital-encoded data, and a real-time multi-channel acousto-optic system for such operations are described and experimental verifications are included.

  16. A convolutional neural network neutrino event classifier

    DOE PAGES

    Aurisano, A.; Radovic, A.; Rocco, D.; ...

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  17. Convoluted accommodation structures in folded rocks

    NASA Astrophysics Data System (ADS)

    Dodwell, T. J.; Hunt, G. W.

    2012-10-01

    A simplified variational model for the formation of convoluted accommodation structures, as seen in the hinge zones of larger-scale geological folds, is presented. The model encapsulates some important and intriguing nonlinear features, notably: infinite critical loads, formation of plastic hinges, and buckling on different length-scales. An inextensible elastic beam is forced by uniform overburden pressure and axial load into a V-shaped geometry dictated by formation of a plastic hinge. Using variational methods developed by Dodwell et al., upon which this paper leans heavily, energy minimisation leads to representation as a fourth-order nonlinear differential equation with free boundary conditions. Equilibrium solutions are found using numerical shooting techniques. Under the Maxwell stability criterion, it is recognised that global energy minimisers can exist with convoluted physical shapes. For such solutions, parallels can be drawn with some of the accommodation structures seen in exposed escarpments of real geological folds.

  18. A convolutional neural network neutrino event classifier

    SciTech Connect

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  20. Quantum convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng

    2014-12-01

    In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.

  1. Long decoding runs for Galileo's convolutional codes

    NASA Technical Reports Server (NTRS)

    Lahmeyer, C. R.; Cheung, K.-M.

    1988-01-01

    Decoding results are described for long decoding runs of Galileo's convolutional codes. A 1 kbit/s hardware Viterbi decoder is used for the (15, 1/4) convolutional code, and a software Viterbi decoder is used for the (7, 1/2) convolutional code. The output data of these long runs are stored in data files using a data-compression format that typically reduces file size by a factor of about 100. These data files can be used to replicate the long, time-consuming runs exactly and are useful to anyone who wants to analyze the burst statistics of the Viterbi decoders. The 1 kbit/s hardware Viterbi decoder was developed in order to demonstrate the correctness of certain algorithmic concepts for decoding Galileo's experimental (15, 1/4) code, and for long-constraint-length codes in general. The hardware decoder can be used both to search for good codes and to measure accurately the performance of known codes.

  2. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons.

    PubMed

    Sanchez-Parcerisa, D; Cortés-Giraldo, M A; Dolney, D; Kondrla, M; Fager, M; Carabe, A

    2016-02-21

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm(-1)) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  3. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons

    NASA Astrophysics Data System (ADS)

    Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.

    2016-02-01

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm-1) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  4. Enhanced interference-pattern visibility using multislit optical superposition method for imaging-type two-dimensional Fourier spectroscopy.

    PubMed

    Qi, Wei; Suzuki, Yo; Sato, Shun; Fujiwara, Masaru; Kawashima, Natsumi; Suzuki, Satoru; Abeygunawardhana, Pradeep; Wada, Kenji; Nishiyama, Akira; Ishimaru, Ichiro

    2015-07-10

    A solution is found for the problem of phase cancellation between adjacent bright points in wavefront-division phase-shift interferometry. To this end, a design is proposed that optimizes the visibility of the interference pattern from multiple slits. The method is explained in terms of Fraunhofer diffraction and convolution imaging. Optical simulations verify the technique. The final design can be calculated using a simple equation.

  5. Modifying real convolutional codes for protecting digital filtering systems

    NASA Technical Reports Server (NTRS)

    Redinbo, G. R.; Zagar, Bernhard

    1993-01-01

    A novel method is proposed for protecting digital filters from temporary and permanent failures that are not easily detected by conventional fault-tolerant computer design principles, on the basis of the error-detecting properties of real convolutional codes. Erroneous behavior is detected by externally comparing the calculated and regenerated parity samples. Great simplifications are obtainable by modifying the code structure to yield simplified parity channels with finite impulse response structures. A matrix equation involving the original parity values of the code and the polynomial of the digital filter's transfer function is formed, and row manipulations separate this equation into a set of homogeneous equations constraining the modifying scaling coefficients and another set which defines the code parity values' implementation.

  6. A convolution model of rock bed thermal storage units

    NASA Astrophysics Data System (ADS)

    Sowell, E. F.; Curry, R. L.

    1980-01-01

    A method is presented whereby a packed-bed thermal storage unit is dynamically modeled for bi-directional flow and arbitrary input flow stream temperature variations. The method is based on the principle of calculating the output temperature as the sum of earlier input temperatures, each multiplied by a predetermined 'response factor', i.e., discrete convolution. A computer implementation of the scheme, in the form of a subroutine for a widely used solar simulation program (TRNSYS) is described and numerical results compared with other models. Also, a method for efficient computation of the required response factors is described; this solution is for a triangular input pulse, previously unreported, although the solution method is also applicable for other input functions. This solution requires a single integration of a known function which is easily carried out numerically to the required precision.
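
    The response-factor formulation described above is a discrete convolution: the outlet temperature is a weighted sum of past inlet temperatures. A minimal sketch in Python, assuming the response factors have already been computed (the names and example factor values are illustrative, not TRNSYS values):

        import numpy as np

        def outlet_temperature(inlet_history, response_factors):
            """Discrete convolution model of a packed-bed thermal store.

            inlet_history    : inlet temperatures, most recent last
            response_factors : precomputed weights r[0], r[1], ... applied to the
                               current and earlier inlet temperatures
            Returns the current outlet temperature.
            """
            n = min(len(inlet_history), len(response_factors))
            recent = np.asarray(inlet_history[-n:])[::-1]      # most recent first
            return float(np.dot(recent, response_factors[:n]))

        # Illustrative use: weights that sum to one, decaying with age
        r = np.array([0.05, 0.15, 0.30, 0.30, 0.20])
        inlet = [20, 20, 25, 40, 60, 60]                       # degrees C over successive steps
        print(outlet_temperature(inlet, r))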

  7. relline: Relativistic line profiles calculation

    NASA Astrophysics Data System (ADS)

    Dauser, Thomas

    2015-05-01

    relline calculates relativistic line profiles; it is compatible with the common X-ray data analysis software XSPEC (ascl:9910.005) and ISIS (ascl:1302.002). The two basic forms are an additive line model (RELLINE) and a convolution model to calculate relativistic smearing (RELCONV).

  8. Fast 3D dosimetric verifications based on an electronic portal imaging device using a GPU calculation engine.

    PubMed

    Zhu, Jinhan; Chen, Lixin; Chen, Along; Luo, Guangwen; Deng, Xiaowu; Liu, Xiaowei

    2015-04-11

    To use a graphic processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.
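
    For context, the gamma evaluation mentioned above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1D sketch of the standard gamma index in Python (not the GPU implementation of the abstract; array names and criterion values are illustrative):

        import numpy as np

        def gamma_index_1d(x, dose_ref, dose_eval, dose_crit=0.03, dta_crit=0.3):
            """Brute-force 1D gamma index.

            x          : positions in cm (same grid for both distributions)
            dose_ref   : reference (planned) dose, normalized to its maximum
            dose_eval  : evaluated (reconstructed) dose, same normalization
            dose_crit  : dose-difference criterion (e.g. 3%)
            dta_crit   : distance-to-agreement criterion in cm (e.g. 3 mm)
            """
            gamma = np.empty_like(dose_ref)
            for i, (xi, dref) in enumerate(zip(x, dose_ref)):
                dist2 = ((x - xi) / dta_crit) ** 2
                dd2 = ((dose_eval - dref) / dose_crit) ** 2
                gamma[i] = np.sqrt(np.min(dist2 + dd2))
            return gamma  # pass rate = fraction of points with gamma <= 1

        # Example: two nearly identical Gaussian profiles
        x = np.linspace(-5.0, 5.0, 201)
        ref = np.exp(-x**2 / 4)
        ev = np.exp(-(x - 0.05)**2 / 4)
        print((gamma_index_1d(x, ref, ev) <= 1.0).mean())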

  9. Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method

    SciTech Connect

    Li, Haisen S.; Chetty, Indrin J.; Solberg, Timothy D.

    2008-05-15

    The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method (''average-based convolution''), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) was used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared against the planned dose evaluated by using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (>30 fractions) regimen, the discrepancy in total dose due to interplay effect was negligible.

  10. Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method.

    PubMed

    Li, Haisen S; Chetty, Indrin J; Solberg, Timothy D

    2008-05-01

    The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method ("average-based convolution"), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) was used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared against the planned dose evaluated by using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (> 30 fractions) regimen, the discrepancy in total dose due to interplay effect was negligible.
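
    In one dimension, the segment-based method above amounts to convolving each segment's static dose profile with that segment's motion PDF and summing, rather than convolving the total static dose with a single fraction-averaged PDF. A minimal sketch in Python under that reading (array names and the example PDFs are illustrative):

        import numpy as np

        def blurred_dose(static_doses, motion_pdfs):
            """Sum of per-segment static dose profiles, each convolved with that
            segment's motion probability density function (segment-based convolution)."""
            total = None
            for dose, pdf in zip(static_doses, motion_pdfs):
                pdf = np.asarray(pdf) / np.sum(pdf)              # normalize the PDF
                blurred = np.convolve(dose, pdf, mode="same")    # shift-and-weight the dose
                total = blurred if total is None else total + blurred
            return total

        # Two segments with different motion during delivery
        x = np.arange(-30, 31)                                   # positions in mm
        seg = [np.where(np.abs(x) < 10, 1.0, 0.0)] * 2           # identical static profiles
        pdfs = [np.exp(-(x - 2)**2 / 8), np.exp(-(x + 2)**2 / 8)]  # segment-specific motion
        print(blurred_dose(seg, pdfs).max())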

  11. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  12. The Developmental Rules of Neural Superposition in Drosophila.

    PubMed

    Langen, Marion; Agi, Egemen; Altschuler, Dylan J; Wu, Lani F; Altschuler, Steven J; Hiesinger, Peter Robin

    2015-07-02

    Complicated neuronal circuits can be genetically encoded, but the underlying developmental algorithms remain largely unknown. Here, we describe a developmental algorithm for the specification of synaptic partner cells through axonal sorting in the Drosophila visual map. Our approach combines intravital imaging of growth cone dynamics in developing brains of intact pupae and data-driven computational modeling. These analyses suggest that three simple rules are sufficient to generate the seemingly complex neural superposition wiring of the fly visual map without an elaborate molecular matchmaking code. Our computational model explains robust and precise wiring in a crowded brain region despite extensive growth cone overlaps and provides a framework for matching molecular mechanisms with the rules they execute. Finally, ordered geometric axon terminal arrangements that are not required for neural superposition are a side product of the developmental algorithm, thus elucidating neural circuit connectivity that remained unexplained based on adult structure and function alone.

  13. Generic preparation and entanglement detection of equal superposition states

    NASA Astrophysics Data System (ADS)

    Yu, Qi; Zhang, YanBao; Li, Jun; Wang, HengYan; Peng, XinHua; Du, JiangFeng

    2017-07-01

    Quantum superposition is a fundamental principle of quantum mechanics, so it is not surprising that equal superposition states (ESS) serve as powerful resources for quantum information processing. In this work, we propose a quantum circuit that creates an arbitrary dimensional ESS. The circuit construction is efficient as the number of required elementary gates scales polynomially with the number of required qubits. For experimental realization of the method, we use techniques of nuclear magnetic resonance (NMR). We have succeeded in preparing a 9-dimensional ESS on a 4-qubit NMR quantum register. The full tomography indicates that the fidelity of our prepared state with respect to the ideal 9-dimensional ESS is over 96%. We also prove the prepared state is pseudo-entangled by directly measuring an entanglement witness operator. Our result can be useful for the implementation of those quantum algorithms that require an ESS as an input state.

  14. Nonclassicality tests and entanglement witnesses for macroscopic mechanical superposition states

    NASA Astrophysics Data System (ADS)

    Gittsovich, Oleg; Moroder, Tobias; Asadian, Ali; Gühne, Otfried; Rabl, Peter

    2015-02-01

    We describe a set of measurement protocols for performing nonclassicality tests and the verification of entangled superposition states of macroscopic continuous variable systems, such as nanomechanical resonators. Following earlier works, we first consider a setup where a two-level system is used to indirectly probe the motion of the mechanical system via Ramsey measurements and discuss the application of this method for detecting nonclassical mechanical states. We then show that the generalization of this technique to multiple resonator modes allows the conditioned preparation and the detection of entangled mechanical superposition states. The proposed measurement protocols can be implemented in various qubit-resonator systems that are currently under experimental investigation and find applications in future tests of quantum mechanics at a macroscopic scale.

  15. A fast complex integer convolution using a hybrid transform

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; K Truong, T.

    1978-01-01

    It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q-squared) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.
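
    The hybrid Winograd/Galois-field algorithm itself is beyond a short sketch, but the quantity it computes, the cyclic convolution of two complex sequences, is easy to state as a reference implementation in Python (our naming; any fast algorithm must reproduce these values):

        import numpy as np

        def cyclic_convolution(a, b):
            """Direct O(n^2) cyclic convolution of two equal-length complex sequences."""
            a, b = np.asarray(a, dtype=complex), np.asarray(b, dtype=complex)
            n = len(a)
            return np.array([sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)])

        a = np.array([1 + 2j, 0, 3j, 4])
        b = np.array([2, 1j, 0, 1])
        # Cross-check against the FFT route: IFFT(FFT(a) * FFT(b))
        assert np.allclose(cyclic_convolution(a, b), np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))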

  16. Convolutional Architecture Exploration for Action Recognition and Image Classification

    DTIC Science & Technology

    2015-01-01

    Convolutional Architecture Exploration for Action Recognition and Image Classification. JT Turner, David Aha, Leslie Smith, and Kalyan Moy Gupta. ... Intelligence; Naval Research Laboratory (Code 5514); Washington, DC 20375. Abstract: Convolutional Architecture for Fast Feature Encoding (CAFFE) [11] is a soft... This is especially true with convolutional neural networks, which depend upon the architecture to detect edges and objects in the same way the human

  18. Sensing Super-position: Visual Instrument Sensor Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2006-01-01

    The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates the output of visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of a dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an
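
    The auditory mapping described above distributes an image in time, with one image axis mapped to time, the other to frequency, and brightness to amplitude. A minimal sketch of that idea in Python (a crude sonification for illustration, not the VISOR implementation; all names and parameter values are ours):

        import numpy as np

        def image_to_audio(image, duration=2.0, sample_rate=8000, f_lo=200.0, f_hi=4000.0):
            """Map image columns to time slices and rows to sine frequencies,
            with pixel brightness controlling each sine's amplitude."""
            image = np.asarray(image, dtype=float)
            image = image / (image.max() or 1.0)                 # normalize brightness
            n_rows, n_cols = image.shape
            samples_per_col = int(duration * sample_rate / n_cols)
            t = np.arange(samples_per_col) / sample_rate
            freqs = np.linspace(f_hi, f_lo, n_rows)              # top row -> high pitch
            audio = []
            for col in range(n_cols):                            # left-to-right scan in time
                tones = image[:, col, None] * np.sin(2 * np.pi * freqs[:, None] * t)
                audio.append(tones.sum(axis=0))
            return np.concatenate(audio)

        signal = image_to_audio(np.random.rand(32, 64))
        print(signal.shape)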

  19. Measurement-Induced Macroscopic Superposition States in Cavity Optomechanics

    NASA Astrophysics Data System (ADS)

    Hoff, Ulrich B.; Kollath-Bönig, Johann; Neergaard-Nielsen, Jonas S.; Andersen, Ulrik L.

    2016-09-01

    A novel protocol for generating quantum superpositions of macroscopically distinct states of a bulk mechanical oscillator is proposed, compatible with existing optomechanical devices operating in the bad-cavity limit. By combining a pulsed optomechanical quantum nondemolition (QND) interaction with nonclassical optical resources and measurement-induced feedback, the need for strong single-photon coupling is avoided. We outline a three-pulse sequence of QND interactions encompassing squeezing-enhanced cooling by measurement, state preparation, and tomography.

  20. Weak measurement of quantum superposition states in graphene

    NASA Astrophysics Data System (ADS)

    Trushin, Maxim; Bülte, Johannes; Belzig, Wolfgang

    2017-09-01

    We employ a weak measurement approach to demonstrate the very existence of the photoexcited interband superposition states in intrinsic graphene. We propose an optical two-beam setup where such measurements are possible and derive an explicit formula for the differential optical absorption that contains a signature of such states. We provide an interpretation of our results in terms of a non-Markovian weak measurement formalism applied to the pseudospin degree of freedom coupled with an electromagnetic wave.

  1. Macroscopic superposition of ultracold atoms with orbital degrees of freedom

    SciTech Connect

    Garcia-March, M. A.; Carr, L. D.; Dounas-Frazer, D. R.

    2011-04-15

    We introduce higher dimensions into the problem of Bose-Einstein condensates in a double-well potential, taking into account orbital angular momentum. We completely characterize the eigenstates of this system, delineating new regimes via both analytical high-order perturbation theory and numerical exact diagonalization. Among these regimes are mixed Josephson- and Fock-like behavior, crossings in both excited and ground states, and shadows of macroscopic superposition states.

  2. Interplay of gravitation and linear superposition of different mass eigenstates

    SciTech Connect

    Ahluwalia, D.V. |; Burgard, C.

    1998-04-01

    The interplay of gravitation and the quantum-mechanical principle of linear superposition induces a new set of neutrino oscillation phases. These ensure that the flavor-oscillation clocks, inherent in the phenomenon of neutrino oscillations, redshift precisely as required by Einstein's theory of gravitation. The physical observability of these phases in the context of the solar neutrino anomaly, type-II supernova, and certain atomic systems is briefly discussed. © 1998 The American Physical Society.

  4. Robust probabilistic superposition and comparison of protein structures

    PubMed Central

    2010-01-01

    Background: Protein structure comparison is a central issue in structural bioinformatics. The standard dissimilarity measure for protein structures is the root mean square deviation (RMSD) of representative atom positions such as α-carbons. To evaluate the RMSD the structures under comparison must be superimposed optimally so as to minimize the RMSD. How to evaluate optimal fits becomes a matter of debate, if the structures contain regions which differ largely - a situation encountered in NMR ensembles and proteins undergoing large-scale conformational transitions. Results: We present a probabilistic method for robust superposition and comparison of protein structures. Our method aims to identify the largest structurally invariant core. To do so, we model non-rigid displacements in protein structures with outlier-tolerant probability distributions. These distributions exhibit heavier tails than the Gaussian distribution underlying standard RMSD minimization and thus accommodate highly divergent structural regions. The drawback is that under a heavy-tailed model analytical expressions for the optimal superposition no longer exist. To circumvent this problem we work with a scale mixture representation, which implies a weighted RMSD. We develop two iterative procedures, an Expectation Maximization algorithm and a Gibbs sampler, to estimate the local weights, the optimal superposition, and the parameters of the heavy-tailed distribution. Applications demonstrate that heavy-tailed models capture differences between structures undergoing substantial conformational changes and can be used to assess the precision of NMR structures. By comparing Bayes factors we can automatically choose the most adequate model. Therefore our method is parameter-free. Conclusions: Heavy-tailed distributions are well-suited to describe large-scale conformational differences in protein structures. A scale mixture representation facilitates the fitting of these distributions and enables outlier
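
    The scale-mixture formulation above reduces each iteration to a weighted RMSD fit. A minimal sketch in Python of one such weighted rigid-body superposition step, using the weighted Kabsch algorithm (the uniform weights in the example are illustrative, not the paper's EM/Gibbs estimates):

        import numpy as np

        def weighted_superposition(mobile, target, weights):
            """Rotate/translate `mobile` onto `target`, minimizing the weighted RMSD.
            mobile, target : (N, 3) coordinate arrays of matched atoms
            weights        : (N,) non-negative weights (down-weight divergent regions)
            """
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()
            mu_m = (w[:, None] * mobile).sum(axis=0)             # weighted centroids
            mu_t = (w[:, None] * target).sum(axis=0)
            X, Y = mobile - mu_m, target - mu_t
            C = (w[:, None] * X).T @ Y                           # weighted covariance
            U, _, Vt = np.linalg.svd(C)
            d = np.sign(np.linalg.det(Vt.T @ U.T))               # avoid reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            fitted = (R @ X.T).T + mu_t
            wrmsd = np.sqrt((w * ((fitted - target) ** 2).sum(axis=1)).sum())
            return fitted, wrmsd

        coords = np.random.rand(50, 3)
        moved = coords @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]) + 2.0   # rotated + shifted copy
        fitted, err = weighted_superposition(moved, coords, np.ones(50))
        print(round(err, 6))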

  5. Single-Atom Gating of Quantum State Superpositions

    SciTech Connect

    Moon, Christopher

    2010-04-28

    The ultimate miniaturization of electronic devices will likely require local and coherent control of single electronic wavefunctions. Wavefunctions exist within both physical real space and an abstract state space with a simple geometric interpretation: this state space - or Hilbert space - is spanned by mutually orthogonal state vectors corresponding to the quantized degrees of freedom of the real-space system. Measurement of superpositions is akin to accessing the direction of a vector in Hilbert space, determining an angle of rotation equivalent to quantum phase. Here we show that an individual atom inside a designed quantum corral [1] can control this angle, producing arbitrary coherent superpositions of spatial quantum states. Using scanning tunnelling microscopy and nanostructures assembled atom-by-atom we demonstrate how single spins and quantum mirages can be harnessed to image the superposition of two electronic states. We also present a straightforward method to determine the atom path enacting phase rotations between any desired state vectors. A single atom thus becomes a real-space handle for an abstract Hilbert space, providing a simple technique for coherent quantum state manipulation at the spatial limit of condensed matter.

  6. Dosimetric comparison of absolute and relative dose distributions between tissue maximum ratio and convolution algorithms for acoustic neurinoma plans in Gamma Knife radiosurgery.

    PubMed

    Nakazawa, Hisato; Komori, Masataka; Shibamoto, Yuta; Tsugawa, Takahiko; Mori, Yoshimasa; Kobayashi, Tatsuya

    2014-08-01

    The treatment planning for Gamma Knife (GK) stereotactic radiosurgery (SRS) performs dose calculations based on the tissue maximum ratio (TMR) algorithm, which has disadvantages in predicting dose in heterogeneous tissue. The latest version of the planning software is equipped with a convolution dose algorithm as an optional extra, and the new algorithm is able to compensate for head inhomogeneity. However, the effect of this improved calculation method requires detailed validation in clinical cases. In this study, we compared absolute and relative dose distributions of treatment plans for acoustic neurinoma between the TMR and convolution calculations. Twenty-nine clinically used plans created with the TMR algorithm were recalculated with the convolution method. Differences between TMR and convolution were evaluated in terms of absolute dose (beam-on time), dosimetric parameters including target coverage, selectivity, conformity index, gradient index, and radical homogeneity index, and the dose-volume relationship. The discrepancy in estimated absolute dose to the target ranged from 1 to 7% between TMR and convolution. In addition, the differences in dosimetric parameters between the two methods reached statistical significance. However, the change in the relative dose distribution was difficult to discern by visual assessment on a monitor. The convolution algorithm, with its heterogeneity-corrected calculation, is therefore necessary to reduce the dosimetric uncertainty of each case in GK SRS.

  7. Human Parsing with Contextualized Convolutional Neural Network.

    PubMed

    Liang, Xiaodan; Xu, Chunyan; Shen, Xiaohui; Yang, Jianchao; Tang, Jinhui; Lin, Liang; Yan, Shuicheng

    2016-03-02

    In this work, we address the human parsing task with a novel Contextualized Convolutional Neural Network (Co-CNN) architecture, which integrates the cross-layer context, global image-level context, semantic edge context, within-super-pixel context and cross-super-pixel neighborhood context into a unified network. Given an input human image, Co-CNN produces the pixel-wise categorization in an end-to-end way. First, the cross-layer context is captured by our basic local-to-global-to-local structure, which hierarchically combines the global semantic information and the local fine details across different convolutional layers. Second, the global image-level label prediction is used as an auxiliary objective in the intermediate layer of the Co-CNN, and its outputs are further used for guiding the feature learning in subsequent convolutional layers to leverage the global image-level context. Third, semantic edge context is further incorporated into Co-CNN, where the high-level semantic boundaries are leveraged to guide pixel-wise labeling. Finally, to further utilize the local super-pixel contexts, the within-super-pixel smoothing and cross-super-pixel neighbourhood voting are formulated as natural sub-components of the Co-CNN to achieve the local label consistency in both the training and testing process. Comprehensive evaluations on two public datasets demonstrate the significant superiority of our Co-CNN over other state-of-the-art methods for human parsing. In particular, the F-1 score on the large dataset [1] reaches 81.72% with Co-CNN, significantly higher than 62.81% and 64.38% by the state-of-the-art algorithms MCNN [2] and ATR [1], respectively. By utilizing our newly collected large dataset for training, our Co-CNN can achieve 85.36% in F-1 score.

  8. Applications of convolution voltammetry in electroanalytical chemistry.

    PubMed

    Bentley, Cameron L; Bond, Alan M; Hollenkamp, Anthony F; Mahon, Peter J; Zhang, Jie

    2014-02-18

    The robustness of convolution voltammetry for determining accurate values of the diffusivity (D), bulk concentration (C(b)), and stoichiometric number of electrons (n) has been demonstrated by applying the technique to a series of electrode reactions in molecular solvents and room temperature ionic liquids (RTILs). In acetonitrile, the relatively minor contribution of nonfaradaic current facilitates analysis with macrodisk electrodes, thus moderate scan rates can be used without the need to perform background subtraction to quantify the diffusivity of iodide [D = 1.75 (±0.02) × 10(-5) cm(2) s(-1)] in this solvent. In the RTIL 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide, background subtraction is necessary at a macrodisk electrode but can be avoided at a microdisk electrode, thereby simplifying the analytical procedure and allowing the diffusivity of iodide [D = 2.70 (±0.03) × 10(-7) cm(2) s(-1)] to be quantified. Use of a convolutive procedure which simultaneously allows D and nC(b) values to be determined is also demonstrated. Three conditions under which a technique of this kind may be applied are explored and are related to electroactive species which display slow dissolution kinetics, undergo a single multielectron transfer step, or contain multiple noninteracting redox centers using ferrocene in an RTIL, 1,4-dinitro-2,3,5,6-tetramethylbenzene, and an alkynylruthenium trimer, respectively, as examples. The results highlight the advantages of convolution voltammetry over steady-state techniques such as rotating disk electrode voltammetry and microdisk electrode voltammetry, as it is not restricted by the mode of diffusion (planar or radial), hence removing limitations on solvent viscosity, electrode geometry, and voltammetric scan rate.
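
    Convolution voltammetry rests on the semi-integral transform of the current, M(t) = (1/sqrt(pi)) * integral from 0 to t of I(u)/sqrt(t - u) du, whose limiting value is proportional to n F A C_b sqrt(D). A minimal numerical sketch of that transform in Python (a simple discretization for illustration, not the authors' analysis code):

        import numpy as np

        def semi_integral(current, dt):
            """Convolution (semi-integration) of a sampled current trace,
                M(t) = (1/sqrt(pi)) * integral_0^t I(u) / sqrt(t - u) du,
            treating the current as piecewise constant on each sampling interval,
            which integrates the 1/sqrt singularity exactly."""
            current = np.asarray(current, dtype=float)
            n = len(current)
            j = np.arange(n)
            w = 2.0 * (np.sqrt((j + 1) * dt) - np.sqrt(j * dt))   # interval weights
            M = np.zeros(n)
            for i in range(1, n):
                M[i] = np.dot(current[i - 1::-1], w[:i]) / np.sqrt(np.pi)
            return M

        # Sanity check: for a constant current I(t) = 1, M(t) = 2*sqrt(t/pi)
        t = np.linspace(0.0, 10.0, 2001)
        M = semi_integral(np.ones_like(t), t[1] - t[0])
        print(M[-1], 2.0 * np.sqrt(t[-1] / np.pi))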

  9. Zebrafish tracking using convolutional neural networks

    PubMed Central

    XU, Zhiping; Cheng, Xi En

    2017-01-01

    Maintaining identity over long periods after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy for our classification task, achieving surprisingly accurate trajectories for zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable. PMID:28211462

  10. Zebrafish tracking using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Xu, Zhiping; Cheng, Xi En

    2017-02-01

    Maintaining identity over long periods after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy for our classification task, achieving surprisingly accurate trajectories for zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable.

  11. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.
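
    As background for the combined coding/modulation schemes above, a rate-1/2 convolutional encoder simply convolves the input bit stream with two generator polynomials over GF(2). A minimal sketch in Python using the common constraint-length-3 generators (7, 5) in octal (chosen here for illustration, not taken from the paper):

        def conv_encode_r12(bits, g1=0b111, g2=0b101, constraint_len=3):
            """Rate-1/2 feedforward convolutional encoder.
            Each input bit is shifted into a register and two output bits are formed
            as the parity (XOR) of the taps selected by generators g1 and g2."""
            state = 0
            out = []
            for b in bits + [0] * (constraint_len - 1):          # flush with a zero tail
                state = ((state << 1) | b) & ((1 << constraint_len) - 1)
                out.append(bin(state & g1).count("1") % 2)       # parity of g1 taps
                out.append(bin(state & g2).count("1") % 2)       # parity of g2 taps
            return out

        print(conv_encode_r12([1, 0, 1, 1]))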

  12. Fast Convolution Algorithms and Associated VHSIC Architectures.

    DTIC Science & Technology

    1983-05-23

    Keywords: finite field, Mersenne prime, Fermat number, primitive element, number-theoretic transform, cyclic convolution, polynomial... elements of order 2 P+p and 2k n in the finite field GF(q²), where q = 2^p − 1 is a Mersenne prime, p is a prime number, and n is a divisor of 2pl... Abstract: A high-radix FFT algorithm for computing transforms over GF(q²), where q is a Mersenne prime, is developed to implement fast circular

  13. Bacterial colony counting by Convolutional Neural Networks.

    PubMed

    Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto

    2015-01-01

    Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, but nevertheless fundamental task in microbiology. Computer-vision-based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNN) for counting the number of colonies contained in confluent agglomerates and which scored an overall accuracy of 92.8% on a large challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches and is a promising approach for many related applications.

  15. SU-E-T-355: Efficient Scatter Correction for Direct Ray-Tracing Based Dose Calculation

    SciTech Connect

    Chen, M; Jiang, S; Lu, W

    2015-06-15

    Purpose: To propose a scatter correction method with linear computational complexity for direct-ray-tracing (DRT) based dose calculation. Due to its speed and simplicity, DRT is widely used as a dose engine in treatment planning systems (TPS) and monitor unit (MU) verification software, where heterogeneity correction is applied by radiological distance scaling. However, such correction only accounts for attenuation, not scatter differences, making the DRT algorithm less accurate than model-based algorithms for small field sizes in heterogeneous media. Methods: Inspired by the convolution formula derived from an exponential kernel, as is typically done in the collapsed-cone-convolution-superposition (CCCS) method, we redesigned the ray-tracing component as the sum of TERMA scaled by a local deposition factor, which is linear with respect to density, and the dose of the previous voxel scaled by a remote deposition factor: D(i) = a ρ(i) T(i) + (b + c(ρ(i) − 1)) D(i−1), where T(i) = exp(−α r(i) + β r(i)²) and r(i) = Σ_{j=1..i} ρ(j). The two factors, together with TERMA, can be expressed in terms of five parameters, which are subsequently optimized by curve fitting using digital phantoms for each field size and each beam energy. Results: The proposed algorithm was implemented for the Fluence-Convolution-Broad-Beam (FCBB) dose engine and evaluated using digital slab phantoms and clinical CT data. Compared with the gold standard calculation, dose deviations were improved from 20% to 2% in the low-density regions of the slab phantoms for the 1-cm field size, and were within 2% for over 95% of the volume, with the largest discrepancy at the interface, for the clinical lung case. Conclusion: We developed a simple recursive formula for scatter correction for the DRT-based dose calculation with much improved accuracy, especially for small field sizes, while keeping the calculation at linear complexity. The proposed calculator is fast yet accurate, which is crucial for dose updating in IMRT
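
    A minimal sketch in Python of the recursive update quoted above, computed along a single ray (the parameter values are placeholders; in the abstract they are fitted per field size and beam energy):

        import numpy as np

        def ray_dose(density, a, b, c, alpha, beta):
            """Recursive dose estimate along one ray, following the abstract:
                D(i) = a*rho(i)*T(i) + (b + c*(rho(i) - 1))*D(i-1),
            with T(i) = exp(-alpha*r(i) + beta*r(i)**2) and r(i) the cumulative
            radiological depth sum_{j<=i} rho(j)."""
            r = np.cumsum(density)
            T = np.exp(-alpha * r + beta * r**2)         # TERMA-like term along the ray
            dose = np.zeros_like(T)
            prev = 0.0
            for i, (rho, t) in enumerate(zip(density, T)):
                prev = a * rho * t + (b + c * (rho - 1.0)) * prev
                dose[i] = prev
            return dose

        # Water slab with a low-density (lung-like) insert; placeholder parameter values
        rho = np.concatenate([np.ones(30), 0.3 * np.ones(30), np.ones(40)])
        print(ray_dose(rho, a=0.7, b=0.3, c=0.1, alpha=0.05, beta=1e-4).round(3)[:5])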

  16. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    PubMed

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2017-04-27

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
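
    Atrous convolution as described above inserts 'holes' between filter taps, enlarging the field of view without adding parameters. A minimal 1D numpy sketch of the idea (illustrative only; in practice this is a framework primitive such as a dilated convolution layer):

        import numpy as np

        def atrous_conv1d(signal, weights, rate):
            """1D convolution with dilation `rate`: tap k reads signal[i + k*rate].
            With rate=1 this reduces to an ordinary (valid-mode) filter."""
            taps = len(weights)
            span = (taps - 1) * rate + 1                 # effective field of view
            out_len = len(signal) - span + 1
            return np.array([
                sum(weights[k] * signal[i + k * rate] for k in range(taps))
                for i in range(out_len)
            ])

        x = np.arange(20, dtype=float)
        w = np.array([1.0, -2.0, 1.0])                   # a second-difference filter
        print(atrous_conv1d(x, w, rate=1))               # field of view 3
        print(atrous_conv1d(x, w, rate=3))               # same 3 taps, field of view 7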

  17. A Simple Method on Generating any Bi-Photon Superposition State with Linear Optics

    NASA Astrophysics Data System (ADS)

    Zhang, Ting-Ting; Wei, Jie; Wang, Qin

    2017-04-01

    We present a simple method on the generation of any bi-photon superposition state using only linear optics. In this scheme, the input states, a two-mode squeezed state and a bi-photon state, meet on a beam-splitter and the output states are post-selected with two threshold single-photon detectors. We carry out corresponding numerical simulations by accounting for practical experimental conditions, calculating both the Wigner function and the state fidelity of those generated bi-photon superposition states. Our simulation results demonstrate that not only distinct nonclassical characteristics but also very high state fidelities can be achieved even under imperfect experimental conditions. Supported by the National Natural Science Foundation of China under Grant Nos. 61475197, 61590932, 11274178, the Natural Science Foundation of the Jiangsu Higher Education Institutions under Grant No. 15KJA120002, the Outstanding Youth Project of Jiangsu Province under Grant No. BK20150039, and the Priority Academic Program Development of Jiangsu Higher Education Institutions under Grant No. YX002001

  18. Convolutional fountain distribution over fading wireless channels

    NASA Astrophysics Data System (ADS)

    Usman, Mohammed

    2012-08-01

    Mobile broadband has opened the possibility of a rich variety of services to end users. Broadcast/multicast of multimedia data is one such service which can be used to deliver multimedia to multiple users economically. However, the radio channel poses serious challenges due to its time-varying properties, resulting in each user experiencing different channel characteristics, independent of other users. Conventional methods of achieving reliability in communication, such as automatic repeat request and forward error correction do not scale well in a broadcast/multicast scenario over radio channels. Fountain codes, being rateless and information additive, overcome these problems. Although the design of fountain codes makes it possible to generate an infinite sequence of encoded symbols, the erroneous nature of radio channels mandates the need for protecting the fountain-encoded symbols, so that the transmission is feasible. In this article, the performance of fountain codes in combination with convolutional codes, when used over radio channels, is presented. An investigation of various parameters, such as goodput, delay and buffer size requirements, pertaining to the performance of fountain codes in a multimedia broadcast/multicast environment is presented. Finally, a strategy for the use of 'convolutional fountain' over radio channels is also presented.

  19. Convolution formulations for non-negative intensity.

    PubMed

    Williams, Earl G

    2013-08-01

    Previously unknown spatial convolution formulas for a variant of the active normal intensity in planar coordinates have been derived that use measured pressure or normal velocity near-field holograms to construct a positive-only (outward) intensity distribution in the plane, quantifying the areas of the vibrating structure that produce radiation to the far-field. This is an extension of the outgoing-only (unipolar) intensity technique recently developed for arbitrary geometries by Steffen Marburg. The method is applied independently to pressure and velocity data measured in a plane close to the surface of a point-driven, unbaffled rectangular plate in the laboratory. It is demonstrated that the sound producing regions of the structure are clearly revealed using the derived formulas and that the spatial resolution is limited to a half-wavelength. A second set of formulas called the hybrid-intensity formulas are also derived which yield a bipolar intensity using a different spatial convolution operator, again using either the measured pressure or velocity. It is demonstrated from the experiment results that the velocity formula yields the classical active intensity and the pressure formula an interesting hybrid intensity that may be useful for source localization. Computations are fast and carried out in real space without Fourier transforms into wavenumber space.

  20. NUCLEI SEGMENTATION VIA SPARSITY CONSTRAINED CONVOLUTIONAL REGRESSION

    PubMed Central

    Zhou, Yin; Chang, Hang; Barner, Kenneth E.; Parvin, Bahram

    2017-01-01

    Automated profiling of nuclear architecture, in histology sections, can potentially help predict the clinical outcomes. However, the task is challenging as a result of nuclear pleomorphism and cellular states (e.g., cell fate, cell cycle), which are compounded by the batch effect (e.g., variations in fixation and staining). Present methods, for nuclear segmentation, are based on human-designed features that may not effectively capture intrinsic nuclear architecture. In this paper, we propose a novel approach, called sparsity constrained convolutional regression (SCCR), for nuclei segmentation. Specifically, given raw image patches and the corresponding annotated binary masks, our algorithm jointly learns a bank of convolutional filters and a sparse linear regressor, where the former is used for feature extraction, and the latter aims to produce a likelihood for each pixel being nuclear region or background. During classification, the pixel label is simply determined by a thresholding operation applied on the likelihood map. The method has been evaluated using the benchmark dataset collected from The Cancer Genome Atlas (TCGA). Experimental results demonstrate that our method outperforms traditional nuclei segmentation algorithms and is able to achieve competitive performance compared to the state-of-the-art algorithm built upon human-designed features with biological prior knowledge. PMID:28101301

  1. Convolution Inequalities for the Boltzmann Collision Operator

    NASA Astrophysics Data System (ADS)

    Alonso, Ricardo J.; Carneiro, Emanuel; Gamba, Irene M.

    2010-09-01

    We study integrability properties of a general version of the Boltzmann collision operator for hard and soft potentials in n dimensions. A reformulation of the collisional integrals allows us to write the weak form of the collision operator as a weighted convolution, where the weight is given by an operator invariant under rotations. Using a symmetrization technique in L^p we prove a Young's inequality for hard potentials, which is sharp for Maxwell molecules in the L^2 case. Further, we find a new Hardy-Littlewood-Sobolev type of inequality for Boltzmann collision integrals with soft potentials. The same method extends to radially symmetric, non-increasing potentials that lie in some weak L^s or L^s space. The method we use resembles a Brascamp, Lieb and Luttinger approach for multilinear weighted convolution inequalities and follows a weak formulation setting. Consequently, it is closely connected to the classical analysis of Young and Hardy-Littlewood-Sobolev inequalities. In all cases, the inequality constants are explicitly given by formulas depending on integrability conditions of the angular cross section (in the spirit of Grad cut-off). As an additional application of the technique we also obtain estimates with exponential weights for hard potentials in both conservative and dissipative interactions.
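
    For reference, the classical Young convolution inequality that the abstract's result sharpens for the collision operator reads, in standard form:

        \|f * g\|_{L^r(\mathbb{R}^n)} \le \|f\|_{L^p(\mathbb{R}^n)} \, \|g\|_{L^q(\mathbb{R}^n)},
        \qquad 1 + \frac{1}{r} = \frac{1}{p} + \frac{1}{q}, \quad 1 \le p, q, r \le \infty .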

  2. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities

    SciTech Connect

    Carrasco, P.; Jornet, N.; Duch, M. A.; Panettieri, V.; Weber, L.; Eudaldo, T.; Ginjaume, M.; Ribas, M.

    2007-08-15

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10x10, 5x5, and 2x2 cm²) were studied in two phantom configurations and a bone equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2x2 cm² field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity in 3%-5%. The rebuildup in the interface bone-water and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values

  3. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities.

    PubMed

    Carrasco, P; Jornet, N; Duch, M A; Panettieri, V; Weber, L; Eudaldo, T; Ginjaume, M; Ribas, M

    2007-08-01

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10 x 10, 5 x 5, and 2 x 2 cm2) were studied in two phantom configurations and a bone equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2 x 2 cm2 field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity in 3%-5%. The rebuildup in the interface bone-water and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values

  4. New quantum MDS-convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Li, Fengwei; Yue, Qin

    2015-12-01

    In this paper, we utilize a family of Hermitian dual-containing constacyclic codes to construct classical and quantum MDS convolutional codes. Our classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound.

  5. Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Hunter, Craig A.

    1999-01-01

    An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature, and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios (NPRs) and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring depends on convolution location, Mach number, boattail angle, and NPR. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.

  6. SU-E-T-277: Dose Calculation Comparisons Between Monaco, Pinnacle and Eclipse Treatment Planning Systems

    SciTech Connect

    Bosse, C; Kirby, N; Narayanasamy, G; Papanikolaou, N; Stathakis, S

    2015-06-15

    Purpose: The Monaco treatment planning system (TPS) version 5.0 uses a Monte Carlo-based dose calculation engine. The aim of this study is to verify and compare the Monaco-based dose calculations with both Pinnacle{sup 3} collapsed cone convolution superposition (CCCS) and Eclipse analytical anisotropic algorithm (AAA) calculations. Methods: For this study, previously treated SBRT lung, head and neck, and abdomen patients were chosen to compare dose calculations between Pinnacle, Monaco, and Eclipse. Plans were chosen from those that had been treated using the Elekta VersaHD or a NovalisTX linac. The plans included 3D conventional and IMRT beams using 6 MV and 6 MV flattening-filter-free (FFF) photon beams. The original plans calculated with CCCS or AAA, along with the ones recalculated using MC, were exported from the three TPSs into Velocity software for intercomparison. Results: To compare the dose calculations, mean lung dose (MLD), lung V5 and V20 values, and PTV heterogeneity indexes (HI) and conformity indexes (CI) were all calculated and recorded from the dose volume histograms (DVHs). For each patient, the CI values were identical but there were differences in all other parameters. The HI was 5% and 4% higher for the AAA and CCCS plans, respectively, compared to the MC ones. The DVH graphs showed large differences between the CCCS and AAA calculations and the Monaco ones for 3D FFF, VMAT, and IMRT plans. Better DVH agreement was observed for 3D conventional plans. Conclusion: Better agreement was observed between CCCS and MC calculations than between AAA and MC calculations. Those differences became more pronounced as the field size decreased and in the presence of inhomogeneities.

  7. Quantum jumps, superpositions, and the continuous evolution of quantum states

    NASA Astrophysics Data System (ADS)

    Dick, Rainer

    2017-02-01

    The apparent dichotomy between quantum jumps on the one hand, and continuous time evolution according to wave equations on the other hand, provided a challenge to Bohr's proposal of quantum jumps in atoms. Furthermore, Schrödinger's time-dependent equation also seemed to require a modification of the explanation for the origin of line spectra due to the apparent possibility of superpositions of energy eigenstates for different energy levels. Indeed, Schrödinger himself proposed a quantum beat mechanism for the generation of discrete line spectra from superpositions of eigenstates with different energies. However, these issues between old quantum theory and Schrödinger's wave mechanics were correctly resolved only after the development and full implementation of photon quantization. The second quantized scattering matrix formalism reconciles quantum jumps with continuous time evolution through the identification of quantum jumps with transitions between different sectors of Fock space. The continuous evolution of quantum states is then recognized as a sum over continually evolving jump amplitudes between different sectors in Fock space. In today's terminology, this suggests that linear combinations of scattering matrix elements are epistemic sums over ontic states. Insights from the resolution of the dichotomy between quantum jumps and continuous time evolution therefore hold important lessons for modern research both on interpretations of quantum mechanics and on the foundations of quantum computing. They demonstrate that discussions of interpretations of quantum theory necessarily need to take into account field quantization. They also demonstrate the limitations of the role of wave equations in quantum theory, and caution us that superpositions of quantum states for the formation of qubits may be more limited than usually expected.

  8. Comparison of selected dose calculation algorithms in radiotherapy treatment planning for tissues with inhomogeneities

    NASA Astrophysics Data System (ADS)

    Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.

    2016-03-01

    Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning since the human body is highly inhomogeneous with the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms currently implemented for treatment planning, including pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the measured dose using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region. This was followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.

  9. Labelled Unit Superposition Calculi for Instantiation-Based Reasoning

    NASA Astrophysics Data System (ADS)

    Korovin, Konstantin; Sticksel, Christoph

    The Inst-Gen-Eq method is an instantiation-based calculus which is complete for first-order clause logic modulo equality. Its distinctive feature is that it combines first-order reasoning with efficient ground satisfiability checking which is delegated in a modular way to any state-of-the-art ground SMT solver. The first-order reasoning modulo equality employs a superposition-style calculus which generates the instances needed by the ground solver to refine a model of a ground abstraction or to witness unsatisfiability.

  10. Scaling of macroscopic superpositions close to a quantum phase transition

    NASA Astrophysics Data System (ADS)

    Abad, Tahereh; Karimipour, Vahid

    2016-05-01

    It is well known that in a quantum phase transition (QPT), entanglement remains short ranged [Osterloh et al., Nature (London) 416, 608 (2002), 10.1038/416608a]. We ask if there is a quantum property pertaining to the whole system which diverges near this point. Using the recently proposed measures of quantum macroscopicity, we show that near a quantum critical point, it is the effective size of the macroscopic superposition between the two symmetry-breaking states which grows to the scale of the system size, and its derivative with respect to the coupling shows both singular behavior and scaling properties.

  11. Quantum superposition of massive objects and collapse models

    SciTech Connect

    Romero-Isart, Oriol

    2011-11-15

    We analyze the requirements to test some of the most paradigmatic collapse models with a protocol that prepares quantum superpositions of massive objects. This consists of coherently expanding the wave function of a ground-state-cooled mechanical resonator, performing a squared position measurement that acts as a double slit, and observing interference after further evolution. The analysis is performed in a general framework and takes into account only unavoidable sources of decoherence: blackbody radiation and scattering of environmental particles. We also discuss the limitations imposed by the experimental implementation of this protocol using cavity quantum optomechanics with levitating dielectric nanospheres.

  12. Teleportation of a general two-mode coherent-state superposition via attenuated quantum channels with ideal and/or threshold detectors

    NASA Astrophysics Data System (ADS)

    An, Nguyen Ba

    2009-04-01

    Three novel probabilistic yet conclusive schemes are proposed to teleport a general two-mode coherent-state superposition via attenuated quantum channels with ideal and/or threshold detectors. The calculated total success probability is highest (lowest) when only ideal (threshold) detectors are used.

  13. An efficient de-convolution reconstruction method for spatiotemporal-encoding single-scan 2D MRI.

    PubMed

    Cai, Congbo; Dong, Jiyang; Cai, Shuhui; Li, Jing; Chen, Ying; Bao, Lijun; Chen, Zhong

    2013-03-01

    The spatiotemporal-encoding single-scan MRI method is relatively insensitive to field inhomogeneity compared to the EPI method. The conjugate gradient (CG) method has been used to reconstruct super-resolved images from the original blurred ones based on coarse magnitude calculation. In this article, a new de-convolution reconstruction method is proposed. After removing the quadratic phase modulation from the signal acquired with spatiotemporal-encoding MRI, the signal can be described as a convolution of the desired super-resolved image and a point spread function. The de-convolution method proposed herein is not only simpler than the CG method but also provides super-resolved images of better quality. This new reconstruction method may make the spatiotemporal-encoding 2D MRI technique more valuable for clinical applications.
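
    A minimal sketch of the Fourier-domain idea behind such a de-convolution: once the data are modeled as the desired image convolved with a point spread function (PSF), the image can be recovered by regularized spectral division. The PSF, regularization constant, and toy data below are illustrative assumptions, not the authors' reconstruction pipeline:

      import numpy as np

      def deconvolve_2d(blurred, psf, eps=1e-3):
          """Recover an image from blurred = image circularly convolved with psf,
          using regularized Fourier-domain division (a Wiener-like filter)."""
          H = np.fft.fft2(psf, s=blurred.shape)            # PSF transfer function
          G = np.fft.fft2(blurred)                         # spectrum of the data
          F_hat = G * np.conj(H) / (np.abs(H) ** 2 + eps)  # regularized inversion
          return np.real(np.fft.ifft2(F_hat))

      # Illustrative use: blur a toy object with a Gaussian PSF, then invert.
      obj = np.zeros((64, 64)); obj[28:36, 28:36] = 1.0
      yy, xx = np.mgrid[-32:32, -32:32]
      psf = np.exp(-(xx**2 + yy**2) / 8.0); psf /= psf.sum()
      psf = np.fft.ifftshift(psf)                          # center PSF at index (0, 0)
      blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))
      restored = deconvolve_2d(blurred, psf)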

  14. SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data

    SciTech Connect

    Chen, M; Jiang, S; Lu, W

    2015-06-15

    Purpose: To propose a hybrid method that combines the advantages of the model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone-convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to a lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam property accurately but lacks the capability of dose deposition modeling in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator. Here we used a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D_model using CCCS; 2. calculate D_ΔDRT using ΔDRT; 3. combine the two: D = D_model + D_ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and IMRT plans. The results were compared to the dose calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%/3 mm for over 98% of the volume for phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as those for the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple commissioning criteria for an independent dose calculator.
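
    A schematic sketch of the three-step combination above; the dose grids and the correction term are hypothetical placeholders, since the abstract does not specify the implementation details:

      import numpy as np

      def hybrid_dose(d_model, d_delta_drt):
          """Step 3 of the recipe: add the measurement-driven ray-tracing
          correction to the model-based (CCCS) dose, D = D_model + D_deltaDRT."""
          return np.asarray(d_model, float) + np.asarray(d_delta_drt, float)

      # Illustrative numbers only: a 4-voxel depth profile in arbitrary units.
      d_model = np.array([1.00, 0.97, 0.80, 0.55])   # step 1: CCCS-like model dose
      d_delta = np.array([0.01, -0.02, 0.015, 0.0])  # step 2: tabulated correction
      print(hybrid_dose(d_model, d_delta))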

  15. Superposition states for quantum nanoelectronic circuits and their nonclassical properties

    NASA Astrophysics Data System (ADS)

    Choi, Jeong Ryeol

    2016-09-01

    Quantum properties of a superposition state for a series RLC nanoelectronic circuit are investigated. Two displaced number states of the same amplitude but with opposite phases are considered as components of the superposition state. We have assumed that the capacitance of the system varies with time and that a time-dependent power source is exerted on the system. The effects of displacement and a sinusoidal power source on the characteristics of the state are addressed in detail. Depending on the magnitude of the sinusoidal power source, the wave packets that propagate in charge (q) space are more or less distorted. Provided that the displacement is sufficiently high, distinct interference structures appear in the plot of the time behavior of the probability density whenever the two components of the wave packet meet. This is strong evidence for the emergence of nonclassical properties in the system that cannot be interpreted by classical theory. Nonclassicality of a quantum system is not only of academic interest in itself; its consequences can also serve as useful resources for quantum information and computation.

  16. Experiments testing macroscopic quantum superpositions must be slow

    PubMed Central

    Mari, Andrea; De Palma, Giacomo; Giovannetti, Vittorio

    2016-01-01

    We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires to accelerate the charge, that can entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides an indirect evidence for the existence of gravitational vacuum fluctuations and for the possibility of entangling a particle with quantum gravitational radiation. PMID:26959656

  17. Unveiling the curtain of superposition: Recent gedanken and laboratory experiments

    NASA Astrophysics Data System (ADS)

    Cohen, E.; Elitzur, A. C.

    2017-08-01

    What is the true meaning of quantum superposition? Can a particle genuinely reside in several places simultaneously? These questions lie at the heart of this paper which presents an updated survey of some important stages in the evolution of the three-boxes paradox, as well as novel conclusions drawn from it. We begin with the original thought experiment of Aharonov and Vaidman, and proceed to its non-counterfactual version. The latter was recently realized by Okamoto and Takeuchi using a quantum router. We then outline a dynamic version of this experiment, where a particle is shown to “disappear” and “re-appear” during the time evolution of the system. This surprising prediction based on self-cancellation of weak values is directly related to our notion of Quantum Oblivion. Finally, we present the non-counterfactual version of this disappearing-reappearing experiment. Within the near future, this last version of the experiment is likely to be realized in the lab, proving the existence of exotic hitherto unknown forms of superposition. With the aid of Bell’s theorem, we prove the inherent nonlocality and nontemporality underlying such pre- and post-selected systems, rendering anomalous weak values ontologically real.

  18. Runs in superpositions of renewal processes with applications to discrimination

    NASA Astrophysics Data System (ADS)

    Alsmeyer, Gerold; Irle, Albrecht

    2006-02-01

    Wald and Wolfowitz [Ann. Math. Statist. 11 (1940) 147-162] introduced the run test for testing whether two samples of i.i.d. random variables follow the same distribution. Here a run means a consecutive subsequence of maximal length from only one of the two samples. In this paper we contribute to the problem of runs and resulting test procedures for the superposition of independent renewal processes, which may be interpreted as arrival processes of customers from two different input channels at the same service station. To be more precise, let (S_n)_{n≥1} and (T_n)_{n≥1} be the arrival processes for channel 1 and channel 2, respectively, and (W_n)_{n≥1} be their superposition with associated counting process. Let further R_n be the number of runs in W_1,...,W_n and R_t the number of runs observed up to time t. We study the asymptotic behavior of R_n and R_t, first for the case where (S_n)_{n≥1} and (T_n)_{n≥1} have exponentially distributed increments with parameters λ_1 and λ_2, and then for the more difficult situation when these increments have an absolutely continuous distribution. These results are used to design asymptotic level-α tests for testing λ_1 = λ_2 against λ_1 ≠ λ_2 in the first case, and for testing for equal scale parameters in the second.
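
    A small simulation sketch of the quantity under study: merge the arrivals of two exponential-increment renewal (Poisson) streams in time order and count the runs of channel labels. The rates and sample sizes below are illustrative assumptions, not values from the paper:

      import numpy as np

      def count_runs(labels):
          """Number of maximal blocks of consecutive identical labels."""
          labels = np.asarray(labels)
          return 1 + int(np.sum(labels[1:] != labels[:-1])) if len(labels) else 0

      def runs_in_superposition(rate1, rate2, n, rng=np.random.default_rng(0)):
          """Simulate two renewal processes with exponential increments, merge
          their first n arrivals each in time order, and count runs of labels."""
          s = np.cumsum(rng.exponential(1.0 / rate1, size=n))   # channel-1 arrivals
          t = np.cumsum(rng.exponential(1.0 / rate2, size=n))   # channel-2 arrivals
          times = np.concatenate([s, t])
          labels = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])
          order = np.argsort(times)
          return count_runs(labels[order])

      # Equal rates produce frequent alternation (many runs); very unequal rates
      # produce long runs from the faster channel, which the run test exploits.
      print(runs_in_superposition(1.0, 1.0, 500), runs_in_superposition(1.0, 5.0, 500))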

  19. Free Nano-Object Ramsey Interferometry for Large Quantum Superpositions

    NASA Astrophysics Data System (ADS)

    Wan, C.; Scala, M.; Morley, G. W.; Rahman, ATM. A.; Ulbricht, H.; Bateman, J.; Barker, P. F.; Bose, S.; Kim, M. S.

    2016-09-01

    We propose an interferometric scheme based on an untrapped nano-object subjected to gravity. The motion of the center of mass (c.m.) of the free object is coupled to its internal spin system magnetically, and a free flight scheme is developed based on coherent spin control. The wave packet of the test object, under a spin-dependent force, may then be delocalized to a macroscopic scale. A gravity induced dynamical phase (accrued solely on the spin state, and measured through a Ramsey scheme) is used to reveal the above spatially delocalized superposition of the spin-nano-object composite system that arises during our scheme. We find a remarkable immunity to the motional noise in the c.m. (initially in a thermal state with moderate cooling), and also a dynamical decoupling nature of the scheme itself. Together they secure a high visibility of the resulting Ramsey fringes. The mass independence of our scheme makes it viable for a nano-object selected from an ensemble with a high mass variability. Given these advantages, a quantum superposition with a 100 nm spatial separation for a massive object of 10^9 amu is achievable experimentally, providing a route to test postulated modifications of quantum theory such as continuous spontaneous localization.

  20. Modeling scattering from azimuthally symmetric bathymetric features using wavefield superposition.

    PubMed

    Fawcett, John A

    2007-12-01

    In this paper, an approach for modeling the scattering from azimuthally symmetric bathymetric features is described. These features are useful models for small mounds and indentations on the seafloor at high frequencies and seamounts, shoals, and basins at low frequencies. A bathymetric feature can be considered as a compact closed region, with the same sound speed and density as one of the surrounding media. Using this approach, a number of numerical methods appropriate for a partially buried target or facet problem can be applied. This paper considers the use of wavefield superposition and because of the azimuthal symmetry, the three-dimensional solution to the scattering problem can be expressed as a Fourier sum of solutions to a set of two-dimensional scattering problems. In the case where the surrounding two half spaces have only a density contrast, a semianalytic coupled mode solution is derived. This provides a benchmark solution to scattering from a class of penetrable hemispherical bosses or indentations. The details and problems of the numerical implementation of the wavefield superposition method are described. Example computations using the method for a simple scattering feature on a seabed are presented for a wide band of frequencies.
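
    The azimuthal reduction invoked here is the standard Fourier decomposition for a rotationally symmetric scatterer; schematically (the notation below is illustrative, not the paper's),

      p(r, \theta, z) = \sum_{m=-\infty}^{\infty} p_m(r, z)\, e^{i m \theta},

    so each azimuthal order p_m satisfies an independent two-dimensional (r, z) scattering problem, and the three-dimensional field is recovered by summing over the orders.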

  1. Experiments testing macroscopic quantum superpositions must be slow

    NASA Astrophysics Data System (ADS)

    Mari, Andrea; de Palma, Giacomo; Giovannetti, Vittorio

    2016-03-01

    We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires to accelerate the charge, that can entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides an indirect evidence for the existence of gravitational vacuum fluctuations and for the possibility of entangling a particle with quantum gravitational radiation.

  2. Superposition states for quantum nanoelectronic circuits and their nonclassical properties

    NASA Astrophysics Data System (ADS)

    Choi, Jeong Ryeol

    2017-09-01

    Quantum properties of a superposition state for a series RLC nanoelectronic circuit are investigated. Two displaced number states of the same amplitude but with opposite phases are considered as components of the superposition state. We have assumed that the capacitance of the system varies with time and that a time-dependent power source is exerted on the system. The effects of displacement and a sinusoidal power source on the characteristics of the state are addressed in detail. Depending on the magnitude of the sinusoidal power source, the wave packets that propagate in charge (q) space are more or less distorted. Provided that the displacement is sufficiently high, distinct interference structures appear in the plot of the time behavior of the probability density whenever the two components of the wave packet meet. This is strong evidence for the emergence of nonclassical properties in the system that cannot be interpreted by classical theory. Nonclassicality of a quantum system is not only of academic interest in itself; its consequences can also serve as useful resources for quantum information and computation.

  3. Free Nano-Object Ramsey Interferometry for Large Quantum Superpositions.

    PubMed

    Wan, C; Scala, M; Morley, G W; Rahman, Atm A; Ulbricht, H; Bateman, J; Barker, P F; Bose, S; Kim, M S

    2016-09-30

    We propose an interferometric scheme based on an untrapped nano-object subjected to gravity. The motion of the center of mass (c.m.) of the free object is coupled to its internal spin system magnetically, and a free flight scheme is developed based on coherent spin control. The wave packet of the test object, under a spin-dependent force, may then be delocalized to a macroscopic scale. A gravity induced dynamical phase (accrued solely on the spin state, and measured through a Ramsey scheme) is used to reveal the above spatially delocalized superposition of the spin-nano-object composite system that arises during our scheme. We find a remarkable immunity to the motional noise in the c.m. (initially in a thermal state with moderate cooling), and also a dynamical decoupling nature of the scheme itself. Together they secure a high visibility of the resulting Ramsey fringes. The mass independence of our scheme makes it viable for a nano-object selected from an ensemble with a high mass variability. Given these advantages, a quantum superposition with a 100 nm spatial separation for a massive object of 10^{9}  amu is achievable experimentally, providing a route to test postulated modifications of quantum theory such as continuous spontaneous localization.

  4. Experiments testing macroscopic quantum superpositions must be slow.

    PubMed

    Mari, Andrea; De Palma, Giacomo; Giovannetti, Vittorio

    2016-03-09

    We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires to accelerate the charge, that can entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides an indirect evidence for the existence of gravitational vacuum fluctuations and for the possibility of entangling a particle with quantum gravitational radiation.

  5. Evolution of superpositions of quantum states through a level crossing

    SciTech Connect

    Torosov, B. T.; Vitanov, N. V.

    2011-12-15

    The Landau-Zener-Stueckelberg-Majorana (LZSM) model is widely used for estimating transition probabilities in the presence of crossing energy levels in quantum physics. This model, however, makes the unphysical assumption of an infinitely long constant interaction, which introduces a divergent phase in the propagator. This divergence remains hidden when estimating output probabilities for a single input state insofar as the divergent phase cancels out. In this paper we show that, because of this divergent phase, the LZSM model is inadequate to describe the evolution of pure or mixed superposition states across a level crossing. The LZSM model can be used only if the system is initially in a single state or in a completely mixed superposition state. To this end, we show that the more realistic Demkov-Kunike model, which assumes a hyperbolic-tangent level crossing and a hyperbolic-secant interaction envelope, is free of divergences and is a much more adequate tool for describing the evolution through a level crossing for an arbitrary input state. For multiple crossing energies which are reducible to one or more effective two-state systems (e.g., by the Majorana and Morris-Shore decompositions), similar conclusions apply: the LZSM model does not produce definite values of the populations and the coherences, and one should use the Demkov-Kunike model instead.

  6. Robust smile detection using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Celona, Luigi; Schettini, Raimondo

    2016-11-01

    We present a fully automated approach for smile detection. Faces are detected using a multiview face detector and aligned and scaled using automatically detected eye locations. Then, we use a convolutional neural network (CNN) to determine whether it is a smiling face or not. To this end, we investigate different shallow CNN architectures that can be trained even when the amount of learning data is limited. We evaluate our complete processing pipeline on the largest publicly available image database for smile detection in an uncontrolled scenario. We investigate the robustness of the method to different kinds of geometric transformations (rotation, translation, and scaling) due to imprecise face localization, and to several kinds of distortions (compression, noise, and blur). To the best of our knowledge, this is the first time that this type of investigation has been performed for smile detection. Experimental results show that our proposal outperforms state-of-the-art methods on both high- and low-quality images.
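
    As a rough illustration of what a "shallow" CNN classifier of this kind can look like, here is a minimal PyTorch sketch; the layer sizes, input resolution, and grayscale input are assumptions for illustration, not the architectures evaluated in the paper:

      import torch
      import torch.nn as nn

      class ShallowSmileCNN(nn.Module):
          """Two small convolution blocks plus a compact classifier head: the
          kind of shallow CNN that can be trained when data are limited."""
          def __init__(self, num_classes=2):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(),
                  nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
                  nn.Linear(64, num_classes),
              )

          def forward(self, x):          # x: (batch, 1, 64, 64) aligned face crops
              return self.classifier(self.features(x))

      logits = ShallowSmileCNN()(torch.randn(4, 1, 64, 64))   # -> shape (4, 2)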

  7. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes are compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.

  8. Compressed imaging by sparse random convolution.

    PubMed

    Marcos, Diego; Lasser, Theo; López, Antonio; Bourquard, Aurélien

    2016-01-25

    The theory of compressed sensing (CS) shows that signals can be acquired at sub-Nyquist rates if they are sufficiently sparse or compressible. Since many images bear this property, several acquisition models have been proposed for optical CS. An interesting approach is random convolution (RC). In contrast with single-pixel CS approaches, RC allows for the parallel capture of visual information on a sensor array as in conventional imaging approaches. Unfortunately, the RC strategy is difficult to implement as is in practical settings due to important contrast-to-noise-ratio (CNR) limitations. In this paper, we introduce a modified RC model circumventing such difficulties by considering measurement matrices involving sparse non-negative entries. We then implement this model based on a slightly modified microscopy setup using incoherent light. Our experiments demonstrate the suitability of this approach for dealing with distinct CS scenarii, including 1-bit CS.
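
    A toy sketch of the measurement model described above: circular convolution with a sparse, non-negative random filter followed by subsampling. The signal, sparsity level, and sizes are illustrative assumptions, not the paper's optical parameters:

      import numpy as np

      rng = np.random.default_rng(1)

      def sparse_rc_measure(x, density=0.05, m=64):
          """Measure x by circular convolution with a sparse non-negative random
          filter, then keep m equally spaced samples (compressed measurements)."""
          n = x.size
          h = rng.random(n) * (rng.random(n) < density)   # sparse, non-negative filter
          y_full = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))  # circular conv
          keep = np.linspace(0, n - 1, m).astype(int)
          return y_full[keep], h

      # Illustrative use on a sparse 1-D signal.
      x = np.zeros(256); x[[20, 77, 200]] = [1.0, -0.5, 2.0]
      y, h = sparse_rc_measure(x)
      print(y.shape)   # (64,) compressed measurements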

  9. Convolutional neural network for pottery retrieval

    NASA Astrophysics Data System (ADS)

    Benhabiles, Halim; Tabia, Hedi

    2017-01-01

    The effectiveness of the convolutional neural network (CNN) has already been demonstrated in many challenging tasks of computer vision, such as image retrieval, action recognition, and object classification. This paper specifically exploits CNN to design local descriptors for content-based retrieval of complete or nearly complete three-dimensional (3-D) vessel replicas. Based on vector quantization, the designed descriptors are clustered to form a shape vocabulary. Then, each 3-D object is associated to a set of clusters (words) in that vocabulary. Finally, a weighted vector counting the occurrences of every word is computed. The reported experimental results on the 3-D pottery benchmark show the superior performance of the proposed method.

  10. Image statistics decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Pitt, G. H., III; Swanson, L.; Yuen, J. H.

    1987-01-01

    It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.

  11. Convolution models for induced electromagnetic responses

    PubMed Central

    Litvak, Vladimir; Jha, Ashwani; Flandin, Guillaume; Friston, Karl

    2013-01-01

    In Kilner et al. [Kilner, J.M., Kiebel, S.J., Friston, K.J., 2005. Applications of random field theory to electrophysiology. Neurosci. Lett. 374, 174–178.] we described a fairly general analysis of induced responses—in electromagnetic brain signals—using the summary statistic approach and statistical parametric mapping. This involves localising induced responses—in peristimulus time and frequency—by testing for effects in time–frequency images that summarise the response of each subject to each trial type. Conventionally, these time–frequency summaries are estimated using post‐hoc averaging of epoched data. However, post‐hoc averaging of this sort fails when the induced responses overlap or when there are multiple response components that have variable timing within each trial (for example stimulus and response components associated with different reaction times). In these situations, it is advantageous to estimate response components using a convolution model of the sort that is standard in the analysis of fMRI time series. In this paper, we describe one such approach, based upon ordinary least squares deconvolution of induced responses to input functions encoding the onset of different components within each trial. There are a number of fundamental advantages to this approach: for example; (i) one can disambiguate induced responses to stimulus onsets and variably timed responses; (ii) one can test for the modulation of induced responses—over peristimulus time and frequency—by parametric experimental factors and (iii) one can gracefully handle confounds—such as slow drifts in power—by including them in the model. In what follows, we consider optimal forms for convolution models of induced responses, in terms of impulse response basis function sets and illustrate the utility of deconvolution estimators using simulated and real MEG data. PMID:22982359
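
    A compact sketch of the ordinary-least-squares deconvolution idea: regress the observed time series on lagged copies of an onset (stick) function to estimate the response shape even when responses overlap. The onsets, kernel length, and toy response below are assumptions for illustration, not the paper's MEG analysis:

      import numpy as np

      def ols_deconvolve(signal, onsets, kernel_len):
          """Estimate an impulse response by regressing the signal on lagged
          copies of a stick (onset) function: signal ~ X @ h, solved by OLS."""
          n = len(signal)
          sticks = np.zeros(n)
          sticks[onsets] = 1.0
          X = np.column_stack([np.roll(sticks, lag) for lag in range(kernel_len)])
          for lag in range(kernel_len):                 # zero wrapped-around samples
              X[:lag, lag] = 0.0
          h, *_ = np.linalg.lstsq(X, signal, rcond=None)
          return h

      # Toy example with one event type and a known triangular response; the
      # events at samples 5 and 9 deliberately produce overlapping responses.
      true_h = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
      onsets = np.array([5, 9, 40, 70])
      y = np.zeros(100)
      for t in onsets:
          y[t:t + len(true_h)] += true_h
      print(np.round(ols_deconvolve(y, onsets, 5), 2))  # recovers true_h despite overlap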

  12. Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method

    PubMed Central

    Li, Haisen S.; Chetty, Indrin J.; Solberg, Timothy D.

    2008-01-01

    The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method (“average-based convolution”), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) was used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared against the planned dose evaluated by using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (>30 fractions) regimen, the discrepancy in total dose due to interplay effect was negligible
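
    A one-dimensional toy contrast between the segment-based and average-based convolution described above; the dose profiles, motion PDFs, and segment count are made-up illustrations, not clinical data:

      import numpy as np

      def blur(dose, pdf):
          """Convolve a static dose profile with a motion probability density."""
          return np.convolve(dose, pdf / pdf.sum(), mode="same")

      # Two step-and-shoot segments with different static dose profiles ...
      seg_doses = [np.array([0, 0, 1, 1, 1, 0, 0], float),
                   np.array([0, 1, 1, 0, 0, 0, 0], float)]
      # ... delivered while the target follows a different motion PDF per segment.
      seg_pdfs = [np.array([0.0, 0.2, 0.6, 0.2, 0.0]),
                  np.array([0.0, 0.0, 0.2, 0.6, 0.2])]

      # Segment-based: convolve each segment with its own PDF, then sum.
      d_segment = sum(blur(d, p) for d, p in zip(seg_doses, seg_pdfs))

      # Average-based: convolve the summed static dose with the averaged PDF.
      avg_pdf = sum(seg_pdfs) / len(seg_pdfs)
      d_average = blur(sum(seg_doses), avg_pdf)

      print(np.round(d_segment - d_average, 3))   # nonzero: the interplay effect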

  13. Generation of fast charged particles in a superposition of oscillating electric fields with stochastically jumping phases

    NASA Astrophysics Data System (ADS)

    Loginov, V. M.

    2017-07-01

    The motion of a nonrelativistic charged particle in an alternating electric field representing a superposition of monochromatic waves with phases described by stochastic jumplike functions of time has been studied. Statistical analysis is performed in the framework of an exactly solvable model, in which the phases are treated as independent random telegraph signals. The mean kinetic energy of the charged particle is calculated. It is shown that there is a manifold of characteristics of stochastically jumping phases (shift amplitudes and mean frequencies) for which the oscillating mean energy grows with the time. For time periods much greater than the characteristic decay time of phase correlations, the mean kinetic energy linearly increases with time (stochastic heating). The growth rate nonmonotonically depends on the parameters of phase jumps, and the maximum increment is proportional to the number of harmonics.

  14. Limitations to the validity of single wake superposition in wind farm yield assessment

    NASA Astrophysics Data System (ADS)

    Gunn, K.; Stock-Williams, C.; Burke, M.; Willden, R.; Vogel, C.; Hunter, W.; Stallard, T.; Robinson, N.; Schmidt, S. R.

    2016-09-01

    Commercially available wind yield assessment models rely on superposition of wakes calculated for isolated single turbines. These methods of wake simulation fail to account for emergent flow physics that may affect the behaviour of multiple turbines and their wakes and therefore wind farm yield predictions. In this paper wake-wake interaction is modelled computationally (CFD) and physically (in a hydraulic flume) to investigate physical causes of discrepancies between analytical modelling and simulations or measurements. Three effects, currently neglected in commercial models, are identified as being of importance: 1) when turbines are directly aligned, the combined wake is shortened relative to the single turbine wake; 2) when wakes are adjacent, each will be lengthened due to reduced mixing; and 3) the pressure field of downstream turbines can move and modify wakes flowing close to them.
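
    For context, a minimal sketch of the kind of single-wake superposition rule used in commercial yield tools; root-sum-square combination of velocity deficits is one common choice, and the deficit values below are purely illustrative:

      import numpy as np

      def combined_deficit(single_wake_deficits):
          """Root-sum-square superposition of fractional velocity deficits from
          individually computed single-turbine wakes (one common engineering rule)."""
          d = np.asarray(single_wake_deficits, float)
          return np.sqrt(np.sum(d ** 2))

      # A point in the flow lying in two overlapping wakes with 20% and 15% deficits:
      u_free = 8.0                                   # freestream wind speed, m/s
      u_local = u_free * (1.0 - combined_deficit([0.20, 0.15]))
      print(round(u_local, 2))                       # ~ 6.0 m/s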

  15. Communications: Is quantum chemical treatment of biopolymers accurate? Intramolecular basis set superposition error (BSSE).

    PubMed

    Balabin, Roman M

    2010-06-21

    The accuracy of quantum chemical treatment of biopolymers by means of density functional theory is brought into question in terms of intramolecular basis set superposition error (BSSE). Secondary structure forms--beta-strands (C5; fully extended conformation), repeated gamma-turns (C7), 3(10)-helices (C10), and alpha-helices (C13)--of homopolypeptides (polyglycine and polyalanine) are used as representative examples. The studied molecules include Ace(Gly)(5)NH(2), Ace(Gly)(10)NH(2), Ace(Ala)(5)NH(2), and Ace(Ala)(10)NH(2). The counterpoise correction procedure was found to produce reliable estimations for the BSSE values (other methods of BSSE correction are discussed). The calculations reported here used the B3LYP, PBE0 (PBE1PBE), and BMK density functionals with different basis sets [from 6-31G(d) to 6-311+G(3df,3pd)] to estimate the influence of basis set size on intramolecular BSSE. Calculation of BSSE was used to determine the deviation of the current results from the complete basis set limit. Intramolecular BSSE was found to be nonadditive with respect to biopolymer size, in contrast to claims in recent literature. The error, which is produced by a basis set superposition, was found to exceed 4 kcal mol(-1) when a medium-sized basis set was used. This indicates that this error has the same order of magnitude as the relative energy differences of secondary structure elements of biopolymers. This result makes all recent reports on the gas-phase stability of homopolypeptides and their analogs questionable.

  16. Image quality of mixed convolution kernel in thoracic computed tomography

    PubMed Central

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-01-01

    The mixed convolution kernel alters its properties geographically according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.004) but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT. PMID:27858910

  17. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties geographically according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.004) but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  18. Enthalpy difference between conformations of normal alkanes: effects of basis set and chain length on intramolecular basis set superposition error

    NASA Astrophysics Data System (ADS)

    Balabin, Roman M.

    2011-03-01

    The quantum chemistry of conformation equilibrium is a field where great accuracy (better than 100 cal mol-1) is needed because the energy difference between molecular conformers rarely exceeds 1000-3000 cal mol-1. The conformation equilibrium of straight-chain (normal) alkanes is of particular interest and importance for modern chemistry. In this paper, an extra error source for high-quality ab initio (first principles) and DFT calculations of the conformation equilibrium of normal alkanes, namely the intramolecular basis set superposition error (BSSE), is discussed. In contrast to out-of-plane vibrations in benzene molecules, diffuse functions on carbon and hydrogen atoms were found to greatly reduce the relative BSSE of n-alkanes. The corrections due to the intramolecular BSSE were found to be almost identical for the MP2, MP4, and CCSD(T) levels of theory. Their cancelation is expected when CCSD(T)/CBS (CBS, complete basis set) energies are evaluated by addition schemes. For larger normal alkanes (N > 12), the magnitude of the BSSE correction was found to be up to three times larger than the relative stability of the conformer; in this case, the basis set superposition error led to a two orders of magnitude difference in conformer abundance. No error cancelation due to the basis set superposition was found. A comparison with amino acid, peptide, and protein data was provided.

  19. Enabling coherent superpositions of iso-frequency optical states in multimode fibers.

    PubMed

    Shapira, Ofer; Abouraddy, Ayman F; Hu, Qichao; Shemuly, Dana; Joannopoulos, John D; Fink, Yoel

    2010-06-07

    The ability to precisely and selectively excite superpositions of specific fiber eigenmodes allows one in principle to control the three dimensional field distribution along the length of a fiber. Here we demonstrate the dynamic synthesis and controlled transmission of vectorial eigenstates in a hollow core cylindrical photonic bandgap fiber, including a coherent superposition of two different angular momentum states. The results are verified using a modal decomposition algorithm that yields the unique complex superposition coefficients of the eigenstate space.

  20. Creation of Arbitrary Coherent Superposition States in Four-Level Systems

    SciTech Connect

    Gong, S.; Niu, Y.

    2005-08-15

    Using the technique of stimulated Raman adiabatic passage, we propose schemes for creating arbitrary coherent superposition states of atoms in four-level systems: a {lambda}-type system with twofold final states and a four-level ladder system. With the use of a control field, arbitrary coherent superposition states are created without the condition of multiphoton resonance. Suitable manipulation of detunings and the control field can create either a single state or any superposition states desired.

  1. The origin of non-classical effects in a one-dimensional superposition of coherent states

    NASA Technical Reports Server (NTRS)

    Buzek, V.; Knight, P. L.; Barranco, A. Vidiella

    1992-01-01

    We investigate the nature of the quantum fluctuations in a light field created by the superposition of coherent fields. We give a physical explanation (in terms of Wigner functions and phase-space interference) why the 1-D superposition of coherent states in the direction of the x-quadrature leads to the squeezing of fluctuations in the y-direction, and show that such a superposition can generate the squeezed vacuum and squeezed coherent states.

  2. Sensing Super-Position: Human Sensing Beyond the Visual Spectrum

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2007-01-01

    The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing natural human sensing. The current implementation of the device translates input from visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases the perception of image resolution, which is obtained via an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.

  3. Entanglement of coherent superposition of photon-subtraction squeezed vacuum

    NASA Astrophysics Data System (ADS)

    Liu, Cun-Jin; Ye, Wei; Zhou, Wei-Dong; Zhang, Hao-Liang; Huang, Jie-Hui; Hu, Li-Yun

    2017-10-01

    A new kind of non-Gaussian quantum state is introduced by applying nonlocal coherent superposition (τa + sb)^m of photon subtraction to two single-mode squeezed vacuum states, and the properties of entanglement are investigated according to the degree of entanglement and the average fidelity of quantum teleportation. The state can be seen as a single-variable Hermitian polynomial excited squeezed vacuum state, and its normalization factor is related to the Legendre polynomial. It is shown that, for τ = s, the maximum fidelity can be achieved, even over the classical limit (1/2), only for even-order operation m and equivalent squeezing parameters in a certain region. However, the maximum entanglement can be achieved for squeezing parameters with a π phase difference. These indicate that the optimal realizations of fidelity and entanglement could be different from one another. In addition, the parameter τ/s has an obvious effect on entanglement and fidelity.

  4. Entanglement and Decoherence in Two-Dimensional Coherent State Superpositions

    NASA Astrophysics Data System (ADS)

    Maleki, Y.

    2017-03-01

    A detailed investigation of entanglement in the generalized two-dimensional nonorthogonal states, which are expressed in the framework of superposed coherent states, is presented. In addition to quantifying entanglement of the generalized two-dimensional coherent states superposition, necessary and sufficient conditions for maximality of entanglement of these states are found. We show that a large class of maximally entangled coherent states can be constructed, and hence, some new maximally entangled coherent states are explicitly manipulated. The investigation is extended to the mixed system states and entanglement properties of such mixed states are investigated. It is shown that in some cases maximally entangled mixed states can be detected. Furthermore, the effect of decoherence, due to both cavity losses and noisy channel process, on such entangled states are studied and its features are discussed.

  5. Predicting jet radius in electrospinning by superpositioning exponential functions

    NASA Astrophysics Data System (ADS)

    Widartiningsih, P. M.; Iskandar, F.; Munir, M. M.; Viridi, S.

    2016-08-01

    This paper presents an analytical study of the correlation between viscosity and fiber diameter in electrospinning. Control over fiber diameter in the electrospinning process is important since it determines the performance of the resulting nanofiber. Theoretically, fiber diameter is determined by surface tension, solution concentration, flow rate, and electric current, but experimentally viscosity has been shown to have a significant influence on fiber diameter. The jet radius in the electrospinning process is described by separate equations in three regions: near the nozzle, far from the nozzle, and at the jet terminal, with no single expression connecting them. Superposition of an exponential series model combines these equations into one, so that all working parameters of the electrospinning process contribute to the fiber diameter. This method yields a linear relation between solution viscosity and jet radius. However, the method works only for low viscosities.

  6. Superposition of an orthogonal oscillation to study anisotropy in polymers

    NASA Astrophysics Data System (ADS)

    Coletti, Marco; Pepi, Renzo

    2014-05-01

    Rheology is routinely used to assess visco-elastic properties and structure of polymers, both in the melt state and in solution. Standard rheometers, though, can apply shear or oscillation in one direction only. Several systems show a clear anisotropic behavior when tested along different directions, but a typical rheometer cannot perform such 2D deformation. In this paper we propose a different approach to such a characterization, where a controlled oscillation is applied in a direction orthogonal to the shear or oscillation. This technique is capable of revealing additional information on how samples behave in two directions, since it is possible to superimpose the orthogonal oscillation either onto a constant shear (Orthogonal SuperPosition: OSP) or onto another oscillation in the shear direction (2D SAOS).

  7. Robustness of superposition states evolving under the influence of a thermal reservoir

    SciTech Connect

    Sales, J. S.; Almeida, N. G. de

    2011-06-15

    We study the evolution of superposition states under the influence of a reservoir at zero and finite temperatures in cavity quantum electrodynamics aiming to know how their purity is lost over time. The superpositions studied here are composed of coherent states, orthogonal coherent states, squeezed coherent states, and orthogonal squeezed coherent states, which we introduce to generalize the orthogonal coherent states. For comparison, we also show how the robustness of the superpositions studied here differs from that of a qubit given by a superposition of zero- and one-photon states.

  8. Macroscopicity of quantum superpositions on a one-parameter unitary path in Hilbert space

    NASA Astrophysics Data System (ADS)

    Volkoff, T. J.; Whaley, K. B.

    2014-12-01

    We analyze quantum states formed as superpositions of an initial pure product state and its image under local unitary evolution, using two measurement-based measures of superposition size: one based on the optimal quantum binary distinguishability of the branches of the superposition and another based on the ratio of the maximal quantum Fisher information of the superposition to that of its branches, i.e., the relative metrological usefulness of the superposition. A general formula for the effective sizes of these states according to the branch-distinguishability measure is obtained and applied to superposition states of N quantum harmonic oscillators composed of Gaussian branches. Considering optimal distinguishability of pure states on a time-evolution path leads naturally to a notion of distinguishability time that generalizes the well-known orthogonalization times of Mandelstam and Tamm and Margolus and Levitin. We further show that the distinguishability time provides a compact operational expression for the superposition size measure based on the relative quantum Fisher information. By restricting the maximization procedure in the definition of this measure to an appropriate algebra of observables, we show that the superposition size of, e.g., NOON states and hierarchical cat states, can scale linearly with the number of elementary particles comprising the superposition state, implying precision scaling inversely with the total number of photons when these states are employed as probes in quantum parameter estimation of a 1-local Hamiltonian in this algebra.

  9. Polyphony: superposition independent methods for ensemble-based drug discovery.

    PubMed

    Pitt, William R; Montalvão, Rinaldo W; Blundell, Tom L

    2014-09-30

    Structure-based drug design is an iterative process, following cycles of structural biology, computer-aided design, synthetic chemistry and bioassay. In favorable circumstances, this process can yield hundreds of protein-ligand crystal structures. In addition, molecular dynamics simulations are increasingly being used to further explore the conformational landscape of these complexes. Currently, methods capable of the analysis of ensembles of crystal structures and MD trajectories are limited and usually rely upon least-squares superposition of coordinates. Novel methodologies are described for the analysis of multiple structures of a protein. Statistical approaches that rely upon residue equivalence, but not superposition, are developed. Tasks that can be performed include the identification of hinge regions, allosteric conformational changes and transient binding sites. The approaches are tested on crystal structures of CDK2 and other CMGC protein kinases and a simulation of p38α. Known interaction-conformational change relationships are highlighted, and new ones are revealed. A transient but druggable allosteric pocket in CDK2 is predicted to occur under the CMGC insert. Furthermore, an evolutionarily-conserved conformational link from the location of this pocket, via the αEF-αF loop, to phosphorylation sites on the activation loop is discovered. New methodologies are described and validated for the superposition-independent conformational analysis of large collections of structures or simulation snapshots of the same protein. The methodologies are encoded in a Python package called Polyphony, which is released as open source to accompany this paper [http://wrpitt.bitbucket.org/polyphony/].

  10. Vehicle detection based on visual saliency and deep sparse convolution hierarchical model

    NASA Astrophysics Data System (ADS)

    Cai, Yingfeng; Wang, Hai; Chen, Xiaobo; Gao, Li; Chen, Long

    2016-07-01

    Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted-feature-based classifier training for vehicle candidate verification. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, outperforming existing state-of-the-art algorithms. More importantly, the deep sparse convolution network generates highly discriminative multi-scale features, which have broad application prospects for target recognition in the field of intelligent vehicles.

  11. Hardware efficient implementation of DFT using an improved first-order moments based cyclic convolution structure

    NASA Astrophysics Data System (ADS)

    Xiong, Jun; Liu, J. G.; Cao, Li

    2015-12-01

    This paper presents hardware-efficient designs for implementing the one-dimensional (1D) discrete Fourier transform (DFT). Once the DFT is formulated in cyclic convolution form, the improved first-order moments-based cyclic convolution structure can be used as the basic computing unit for the DFT computation, which only contains a control module, a barrel shifter and (N-1)/2 accumulation units. After decomposing and reordering the twiddle factors, all that remains is to shift the input data sequence and accumulate it under the control of the statistical results on the twiddle factors. The whole calculation process only contains shift operations and additions, with no need for multipliers or large memory. Compared with the previous first-order moments-based structure for DFT, the proposed designs have the advantages of less hardware consumption, lower power consumption and the flexibility to achieve better performance in certain cases. A series of experiments has proven the high performance of the proposed designs in terms of the area-time product and power consumption. Similar efficient designs can be obtained for other computations, such as the DCT/IDCT, DST/IDST, digital filtering and correlation, by transforming them into the form of first-order moments-based cyclic convolution.

  12. Programmable convolution via the chirp Z-transform with CCD's

    NASA Technical Reports Server (NTRS)

    Buss, D. D.

    1977-01-01

    Filtering by convolution in the frequency domain rather than in the time domain presents a possible solution to the problem of programmable transversal filters. The process is accomplished through utilization of the chirp z-transform (CZT) with charge-coupled devices.
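
    The record above relies on the chirp z-transform, which rewrites the DFT exponent nk as (n^2 + k^2 - (k-n)^2)/2 so that the transform becomes a pre-multiplication by a chirp, a convolution with the conjugate chirp, and a post-multiplication. Below is a minimal software sketch of this idea (a generic Bluestein-style implementation for illustration, not the CCD hardware realization discussed in the record).

      import numpy as np

      def czt_dft(x):
          """DFT of x computed via the chirp-Z (Bluestein) convolution identity."""
          n = len(x)
          k = np.arange(n)
          chirp = np.exp(-1j * np.pi * k**2 / n)      # W^(k^2/2)
          a = x * chirp                                # pre-multiplication
          # Kernel: conjugate chirp, laid out for circular FFT convolution of
          # length m >= 2n-1 so the linear convolution is fully covered.
          m = 1 << (2 * n - 1).bit_length()
          b = np.zeros(m, dtype=complex)
          b[:n] = np.conj(chirp)
          b[m - n + 1:] = np.conj(chirp[1:])[::-1]
          conv = np.fft.ifft(np.fft.fft(np.pad(a, (0, m - n))) * np.fft.fft(b))
          return chirp * conv[:n]                      # post-multiplication

      # Quick check against the direct FFT (illustrative only):
      x = np.random.randn(7) + 1j * np.random.randn(7)
      assert np.allclose(czt_dft(x), np.fft.fft(x))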

  13. Model Convolution: A Computational Approach to Digital Image Interpretation.

    PubMed

    Gardner, Melissa K; Sprague, Brian L; Pearson, Chad G; Cosgrove, Benjamin D; Bicek, Andrew D; Bloom, Kerry; Salmon, E D; Odde, David J

    2010-06-01

    Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called "model-convolution," which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory.
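
    A minimal sketch of the forward "model-convolution" step described above: a hypothesized fluorophore distribution is blurred with the measured point-spread function and corrupted with noise, and the simulated image can then be compared statistically with the data. The PSF width, background, and noise levels below are placeholders, not values from the paper.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def simulate_image(fluorophore_map, psf_sigma_px=1.8,
                         background=100.0, gaussian_noise_sd=10.0, rng=None):
          """Forward-simulate a fluorescence image from a model fluorophore map
          (non-negative expected photon counts per pixel)."""
          rng = np.random.default_rng() if rng is None else rng
          blurred = gaussian_filter(fluorophore_map.astype(float), psf_sigma_px)
          expected = blurred + background
          # Shot noise plus camera read noise, both assumed measured beforehand.
          return rng.poisson(expected) + rng.normal(0.0, gaussian_noise_sd,
                                                    size=expected.shape)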

  14. A fast computation of complex convolution using a hybrid transform

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    The cyclic convolution of complex values was obtained by a hybrid transform that is a combination of a Winograd transform and a fast complex integer transform. This new hybrid algorithm requires fewer multiplications than any previously known algorithm.

  15. Programmable convolution via the chirp Z-transform with CCD's

    NASA Technical Reports Server (NTRS)

    Buss, D. D.

    1977-01-01

    Filtering by convolution in the frequency domain rather than in the time domain presents a possible solution to the problem of programmable transversal filters. The process is accomplished through utilization of the chirp z-transform (CZT) with charge-coupled devices.

  16. Model Convolution: A Computational Approach to Digital Image Interpretation

    PubMed Central

    Gardner, Melissa K.; Sprague, Brian L.; Pearson, Chad G.; Cosgrove, Benjamin D.; Bicek, Andrew D.; Bloom, Kerry; Salmon, E. D.

    2010-01-01

    Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called “model-convolution,” which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory. PMID:20461132

  17. A fuzzy convolution model for radiobiologically optimized radiotherapy margins.

    PubMed

    Mzenda, Bongile; Hosseini-Ashrafi, Mir; Gegov, Alex; Brown, David J

    2010-06-07

    In this study we investigate the use of a new knowledge-based fuzzy logic technique to derive radiotherapy margins based on radiotherapy uncertainties and their radiobiological effects. The main radiotherapy uncertainties considered and used to build the model were delineation, set-up and organ motion-induced errors. The radiobiological effects of these combined errors, in terms of prostate tumour control probability and rectal normal tissue complication probability, were used to formulate the rule base and membership functions for a Sugeno type fuzzy system linking the error effect to the treatment margin. The defuzzified output was optimized by convolving it with a Gaussian convolution kernel to give a uniformly varying transfer function which was used to calculate the required treatment margins. The margin derived using the fuzzy technique showed good agreement compared to current prostate margins based on the commonly used margin formulation proposed by van Herk et al (2000 Int. J. Radiat. Oncol. Biol. Phys. 47 1121-35), and has nonlinear variation above combined errors of 5 mm standard deviation. The derived margin is on average 0.5 mm bigger than currently used margins in the region of small treatment uncertainties where margin reduction would be applicable. The new margin was applied in an intensity modulated radiotherapy prostate treatment planning example where margin reduction and a dose escalation regime were implemented, and by inducing equivalent treatment uncertainties, the resulting target and organs at risk doses were found to compare well to results obtained using currently recommended margins.
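
    For context, the van Herk margin recipe that the record compares against can be written as a one-line check; Sigma denotes the standard deviation of systematic errors and sigma that of random errors, both in millimetres. This is the commonly quoted approximation, not the fuzzy-logic model proposed in the paper.

      def van_herk_margin(systematic_sd_mm, random_sd_mm):
          """CTV-to-PTV margin (mm): 2.5*Sigma + 0.7*sigma (van Herk et al 2000)."""
          return 2.5 * systematic_sd_mm + 0.7 * random_sd_mm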

  18. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    PubMed

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.

  19. Determination of collisional linewidths and shifts by a convolution method

    NASA Technical Reports Server (NTRS)

    Pickett, H. M.

    1980-01-01

    A technique is described for fitting collisional linewidths and shifts from experimental spectral data. The method involves convoluting a low-pressure reference spectrum with a Lorentz shape function and comparing the convoluted spectrum with higher pressure spectra. Several experimental examples are given. One advantage of the method is that no extra information is needed about the instrument response function or spectral modulation. In addition, the method is shown to be relatively insensitive to the presence of reflections in the sample cell.
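
    A minimal sketch of the convolution-fitting idea in this record, assuming both spectra are sampled on the same uniform frequency grid: the low-pressure reference is convolved with a Lorentzian of trial half-width and shift, and the two parameters are adjusted to minimize the residual against the higher-pressure spectrum. Function and parameter names are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      def broadened(reference, freq_step, hwhm, shift):
          """Convolve a reference spectrum with a shifted, unit-area Lorentzian."""
          n = len(reference)
          f = (np.arange(n) - n // 2) * freq_step
          lorentz = (hwhm / np.pi) / ((f - shift) ** 2 + hwhm ** 2)
          lorentz /= lorentz.sum()
          return np.convolve(reference, lorentz, mode="same")

      def fit_width_shift(reference, high_pressure, freq_step, guess=(0.01, 0.0)):
          """Least-squares estimate of collisional HWHM and pressure shift."""
          cost = lambda p: np.sum((broadened(reference, freq_step, p[0], p[1])
                                   - high_pressure) ** 2)
          return minimize(cost, guess, method="Nelder-Mead").x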

  20. A Review on the Use of Grid-Based Boltzmann Equation Solvers for Dose Calculation in External Photon Beam Treatment Planning

    PubMed Central

    Kan, Monica W. K.; Yu, Peter K. N.; Leung, Lucullus H. T.

    2013-01-01

    Deterministic linear Boltzmann transport equation (D-LBTE) solvers have recently been developed, and one of the latest available software codes, Acuros XB, has been implemented in a commercial treatment planning system for radiotherapy photon beam dose calculation. One of the major limitations of most commercially available model-based algorithms for photon dose calculation is the ability to account for the effect of electron transport. This induces some errors in patient dose calculations, especially near heterogeneous interfaces between low and high density media such as tissue/lung interfaces. D-LBTE solvers have a high potential of producing accurate dose distributions in and near heterogeneous media in the human body. Extensive previous investigations have proved that D-LBTE solvers were able to produce comparable dose calculation accuracy as Monte Carlo methods with a reasonable speed good enough for clinical use. The current paper reviews the dosimetric evaluations of D-LBTE solvers for external beam photon radiotherapy. This content summarizes and discusses dosimetric validations for D-LBTE solvers in both homogeneous and heterogeneous media under different circumstances and also the clinical impact on various diseases due to the conversion of dose calculation from a conventional convolution/superposition algorithm to a recently released D-LBTE solver. PMID:24066294

  1. A review on the use of grid-based Boltzmann equation solvers for dose calculation in external photon beam treatment planning.

    PubMed

    Kan, Monica W K; Yu, Peter K N; Leung, Lucullus H T

    2013-01-01

    Deterministic linear Boltzmann transport equation (D-LBTE) solvers have recently been developed, and one of the latest available software codes, Acuros XB, has been implemented in a commercial treatment planning system for radiotherapy photon beam dose calculation. One of the major limitations of most commercially available model-based algorithms for photon dose calculation is the ability to account for the effect of electron transport. This induces some errors in patient dose calculations, especially near heterogeneous interfaces between low and high density media such as tissue/lung interfaces. D-LBTE solvers have a high potential of producing accurate dose distributions in and near heterogeneous media in the human body. Extensive previous investigations have proved that D-LBTE solvers were able to produce comparable dose calculation accuracy as Monte Carlo methods with a reasonable speed good enough for clinical use. The current paper reviews the dosimetric evaluations of D-LBTE solvers for external beam photon radiotherapy. This content summarizes and discusses dosimetric validations for D-LBTE solvers in both homogeneous and heterogeneous media under different circumstances and also the clinical impact on various diseases due to the conversion of dose calculation from a conventional convolution/superposition algorithm to a recently released D-LBTE solver.

  2. Radial Structure Scaffolds Convolution Patterns of Developing Cerebral Cortex

    PubMed Central

    Razavi, Mir Jalil; Zhang, Tuo; Chen, Hanbo; Li, Yujie; Platt, Simon; Zhao, Yu; Guo, Lei; Hu, Xiaoping; Wang, Xianqiao; Liu, Tianming

    2017-01-01

    Commonly-preserved radial convolution is a prominent characteristic of the mammalian cerebral cortex. Endeavors from multiple disciplines have been devoted for decades to explore the causes for this enigmatic structure. However, the underlying mechanisms that lead to consistent cortical convolution patterns still remain poorly understood. In this work, inspired by prior studies, we propose and evaluate a plausible theory that radial convolution during the early development of the brain is sculptured by radial structures consisting of radial glial cells (RGCs) and maturing axons. Specifically, the regionally heterogeneous development and distribution of RGCs controlled by Trnp1 regulate the convex and concave convolution patterns (gyri and sulci) in the radial direction, while the interplay of RGCs' effects on convolution and axons regulates the convex (gyral) convolution patterns. This theory is assessed by observations and measurements in literature from multiple disciplines such as neurobiology, genetics, biomechanics, etc., at multiple scales to date. Particularly, this theory is further validated by multimodal imaging data analysis and computational simulations in this study. We offer a versatile and descriptive study model that can provide reasonable explanations of observations, experiments, and simulations of the characteristic mammalian cortical folding. PMID:28860983

  3. Radial Structure Scaffolds Convolution Patterns of Developing Cerebral Cortex.

    PubMed

    Razavi, Mir Jalil; Zhang, Tuo; Chen, Hanbo; Li, Yujie; Platt, Simon; Zhao, Yu; Guo, Lei; Hu, Xiaoping; Wang, Xianqiao; Liu, Tianming

    2017-01-01

    Commonly-preserved radial convolution is a prominent characteristic of the mammalian cerebral cortex. Endeavors from multiple disciplines have been devoted for decades to explore the causes for this enigmatic structure. However, the underlying mechanisms that lead to consistent cortical convolution patterns still remain poorly understood. In this work, inspired by prior studies, we propose and evaluate a plausible theory that radial convolution during the early development of the brain is sculptured by radial structures consisting of radial glial cells (RGCs) and maturing axons. Specifically, the regionally heterogeneous development and distribution of RGCs controlled by Trnp1 regulate the convex and concave convolution patterns (gyri and sulci) in the radial direction, while the interplay of RGCs' effects on convolution and axons regulates the convex (gyral) convolution patterns. This theory is assessed by observations and measurements in literature from multiple disciplines such as neurobiology, genetics, biomechanics, etc., at multiple scales to date. Particularly, this theory is further validated by multimodal imaging data analysis and computational simulations in this study. We offer a versatile and descriptive study model that can provide reasonable explanations of observations, experiments, and simulations of the characteristic mammalian cortical folding.

  4. Patient-specific dosimetry using quantitative SPECT imaging and three-dimensional discrete fourier transform convolution

    SciTech Connect

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K.

    1997-02-01

    The objective of this study was to develop a three-dimensional discrete Fourier transform (3D-DFT) convolution method to perform the dosimetry for {sup 131}I-labeled antibodies in soft tissues. Mathematical and physical phantoms were used to compare 3D-DFT with Monte Carlo transport (MCT) calculations based on the EGS4 code. The mathematical and physical phantoms consisted of a sphere and cylinder, respectively, containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the circular harmonic transform (CHT) algorithm. The radial dose profile obtained from MCT calculations and the 3D-DFT convolution method for the mathematical phantom were in close agreement. The root mean square error (RMSE) for the two methods was <0.1%, with a maximum difference <21%. Results obtained for the physical phantom gave a RMSE <0.1% and a maximum difference of <13%; isodose contours were in good agreement. SPECT data for two patients who had undergone {sup 131}I radioimmunotherapy (RIT) were used to compare absorbed-dose rates and isodose rate contours with the two methods of calculations. This yielded a RMSE <0.02% and a maximum difference of <13%. Our results showed that the 3D-DFT convolution method compared well with MCT calculations. The 3D-DFT approach is computationally much more efficient and, hence, the method of choice. This method is patient-specific and applicable to the dosimetry of soft-tissue tumors and normal organs. It can be implemented on personal computers. 22 refs., 6 figs., 2 tabs.
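
    The core of the 3D-DFT approach above is an FFT-based convolution of the cumulated-activity map from quantitative SPECT with a voxel dose-point kernel. A minimal sketch, assuming the kernel (dose per decay versus distance, sampled on the same voxel grid and centered in the volume) is supplied:

      import numpy as np

      def dose_from_activity(cumulated_activity, dose_point_kernel):
          """3D FFT convolution of activity (decays/voxel) with a dose kernel (Gy/decay)."""
          assert cumulated_activity.shape == dose_point_kernel.shape
          # Circular convolution is acceptable when the activity is padded so that
          # wrap-around contributions at the volume edges are negligible.
          a_ft = np.fft.fftn(cumulated_activity)
          k_ft = np.fft.fftn(np.fft.ifftshift(dose_point_kernel))
          return np.real(np.fft.ifftn(a_ft * k_ft))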

  5. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often require to be processed together, either for fitting or comparison purposes. However each image is affected by an instrumental response, also known as point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures all anisotropic features in the PSFs to be taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. A software to compute these kernels is available at https://github.com/aboucaud/pypher
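
    A minimal sketch of a PSF-matching kernel obtained by Wiener filtering in the spirit of the record above; the authors' released software is pypher, and the function below is an independent illustration rather than its API. The regularisation parameter mu is a tunable assumption.

      import numpy as np

      def psf_matching_kernel(psf_source, psf_target, mu=1e-4):
          """Kernel k such that psf_source convolved with k approximates psf_target
          (both PSFs on the same grid and pixel scale, centered)."""
          S = np.fft.fft2(np.fft.ifftshift(psf_source))
          T = np.fft.fft2(np.fft.ifftshift(psf_target))
          # Wiener-regularised deconvolution: damp frequencies where the source
          # transfer function is weak instead of dividing by near-zero values.
          K = T * np.conj(S) / (np.abs(S) ** 2 + mu)
          kernel = np.real(np.fft.fftshift(np.fft.ifft2(K)))
          return kernel / kernel.sum()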

  6. Piano Transcription with Convolutional Sparse Lateral Inhibition

    DOE PAGES

    Cogliati, Andrea; Duan, Zhiyao; Wohlberg, Brendt Egon

    2017-02-08

    This paper extends our prior work on context-dependent piano transcription to estimate the length of the notes in addition to their pitch and onset. This approach employs convolutional sparse coding along with lateral inhibition constraints to approximate a musical signal as the sum of piano note waveforms (dictionary elements) convolved with their temporal activations. The waveforms are pre-recorded for the specific piano to be transcribed in the specific environment. A dictionary containing multiple waveforms per pitch is generated by truncating a long waveform for each pitch to different lengths. During transcription, the dictionary elements are fixed and their temporal activations are estimated and post-processed to obtain the pitch, onset and note length estimation. A sparsity penalty promotes globally sparse activations of the dictionary elements, and a lateral inhibition term penalizes concurrent activations of different waveforms corresponding to the same pitch within a temporal neighborhood, to achieve note length estimation. Experiments on the MAPS dataset show that the proposed approach significantly outperforms a state-of-the-art music transcription method trained in the same context-dependent setting in transcription accuracy.

  7. Event Discrimination using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Menon, Hareesh; Hughes, Richard; Daling, Alec; Winer, Brian

    2017-01-01

    Convolutional Neural Networks (CNNs) are computational models that have been shown to be effective at classifying different types of images. We present a method to use CNNs to distinguish events involving the production of a top quark pair and a Higgs boson from events involving the production of a top quark pair and several quark and gluon jets. To do this, we generate and simulate data using MADGRAPH and DELPHES for a general purpose LHC detector at 13 TeV. We produce images using a particle flow algorithm by binning the particles geometrically based on their position in the detector and weighting the bins by the energy of each particle within each bin, and by defining channels based on particle types (charged track, neutral hadronic, neutral EM, lepton, heavy flavor). Our classification results are competitive with standard machine learning techniques. We have also looked into the classification of the substructure of the events, in a process known as scene labeling. In this context, we look for the presence of boosted objects (such as top quarks) with substructure encompassed within single jets. Preliminary results on substructure classification will be presented.

  8. Accelerated unsteady flow line integral convolution.

    PubMed

    Liu, Zhanping; Moorhead, Robert J

    2005-01-01

    Unsteady flow line integral convolution (UFLIC) is a texture synthesis technique for visualizing unsteady flows with high temporal-spatial coherence. Unfortunately, UFLIC requires considerable time to generate each frame due to the huge amount of pathline integration that is computed for particle value scattering. This paper presents Accelerated UFLIC (AUFLIC) for near interactive (1 frame/second) visualization with 160,000 particles per frame. AUFLIC reuses pathlines in the value scattering process to reduce computationally expensive pathline integration. A flow-driven seeding strategy is employed to distribute seeds such that only a few of them need pathline integration while most seeds are placed along the pathlines advected at earlier times by other seeds upstream and, therefore, the known pathlines can be reused for fast value scattering. To maintain a dense scattering coverage to convey high temporal-spatial coherence while keeping the expense of pathline integration low, a dynamic seeding controller is designed to decide whether to advect, copy, or reuse a pathline. At a negligible memory cost, AUFLIC is 9 times faster than UFLIC with comparable image quality.

  9. Do Convolutional Neural Networks Learn Class Hierarchy?

    PubMed

    Alsallakh, Bilal; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2017-08-29

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.

  10. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738

  11. Metaheuristic Algorithms for Convolution Neural Network.

    PubMed

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent).
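
    For reference, the simulated-annealing component mentioned in the two records above follows the generic accept/reject loop sketched below; the objective and perturbation functions are placeholders rather than the papers' CNN-specific setup.

      import math
      import random

      def simulated_annealing(x0, objective, perturb, t0=1.0, cooling=0.95, steps=200):
          """Generic SA loop: accept worse candidates with probability exp(-delta/T)."""
          x, fx = x0, objective(x0)
          best, fbest = x, fx
          t = t0
          for _ in range(steps):
              y = perturb(x)
              fy = objective(y)
              if fy < fx or random.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
                  x, fx = y, fy
                  if fy < fbest:
                      best, fbest = y, fy
              t *= cooling  # geometric cooling schedule
          return best, fbest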

  12. Prediction of color changes using the time-temperature superposition principle in liquid formulations.

    PubMed

    Mochizuki, Koji; Takayama, Kozo

    2014-01-01

    This study reports the results of applying the time-temperature superposition principle (TTSP) to the prediction of color changes in liquid formulations. A sample solution consisting of L-tryptophan and glucose was used as the model liquid formulation for the Maillard reaction. After accelerated aging treatment at elevated temperatures, the Commission Internationale de l'Eclairage (CIE) LAB color parameters (a*, b*, L*, and ΔE*ab) of the sample solution were measured using a spectrophotometer. The TTSP was then applied to a kinetic analysis of the color changes. The calculated values of the apparent activation energy of a*, b*, L*, and ΔE*ab were 105.2, 109.8, 91.6, and 103.7 kJ/mol, respectively. The predicted values of the color parameters at 40°C were calculated using Arrhenius plots for each of the color parameters. A comparison of the relationships between the experimental and predicted values of each color parameter revealed the coefficients of determination for a*, b*, L*, and ΔE*ab to be 0.961, 0.979, 0.960, and 0.979, respectively. All the R² values were sufficiently high, and these results suggested that the prediction was highly reliable. Kinetic analysis using the TTSP was successfully applied to calculating the apparent activation energy and to predicting the color changes at any temperature or duration.
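
    A minimal sketch of the Arrhenius step underlying the time-temperature superposition prediction described above: rate constants estimated at accelerated temperatures are fitted to ln k = ln A - Ea/(RT) and extrapolated to the storage temperature. The numerical inputs below are hypothetical, not data from the paper.

      import numpy as np

      R = 8.314  # gas constant, J/(mol K)

      def arrhenius_fit(temps_K, rate_constants):
          """Return (Ea in J/mol, ln A) from a linear fit of ln k versus 1/T."""
          slope, intercept = np.polyfit(1.0 / np.asarray(temps_K),
                                        np.log(rate_constants), 1)
          return -slope * R, intercept

      def rate_at(temp_K, Ea, lnA):
          return np.exp(lnA - Ea / (R * temp_K))

      # Example: hypothetical color-change rates at 60, 70, 80 degC, extrapolated
      # to 40 degC storage.
      Ea, lnA = arrhenius_fit([333.15, 343.15, 353.15], [0.10, 0.26, 0.63])
      k_40 = rate_at(313.15, Ea, lnA)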

  13. Verification of monitor unit calculations for non-IMRT clinical radiotherapy: report of AAPM Task Group 114.

    PubMed

    Stern, Robin L; Heaton, Robert; Fraser, Martin W; Goddu, S Murty; Kirby, Thomas H; Lam, Kwok Leung; Molineu, Andrea; Zhu, Timothy C

    2011-01-01

    The requirement of an independent verification of the monitor units (MU) or time calculated to deliver the prescribed dose to a patient has been a mainstay of radiation oncology quality assurance. The need for and value of such a verification was obvious when calculations were performed by hand using look-up tables, and the verification was achieved by a second person independently repeating the calculation. However, in a modern clinic using CT/MR/PET simulation, computerized 3D treatment planning, heterogeneity corrections, and complex calculation algorithms such as convolution/superposition and Monte Carlo, the purpose of and methodology for the MU verification have come into question. In addition, since the verification is often performed using a simpler geometrical model and calculation algorithm than the primary calculation, exact or almost exact agreement between the two can no longer be expected. Guidelines are needed to help the physicist set clinically reasonable action levels for agreement. This report addresses the following charges of the task group: (1) To re-evaluate the purpose and methods of the "independent second check" for monitor unit calculations for non-IMRT radiation treatment in light of the complexities of modern-day treatment planning. (2) To present recommendations on how to perform verification of monitor unit calculations in a modern clinic. (3) To provide recommendations on establishing action levels for agreement between primary calculations and verification, and to provide guidance in addressing discrepancies outside the action levels. These recommendations are to be used as guidelines only and shall not be interpreted as requirements.
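
    As an illustration of the kind of simplified independent check the report discusses, a point-dose monitor-unit verification for an isocentric photon beam can be assembled from tabulated factors. The factor names below follow common convention and are not prescribed by TG-114; all values must come from commissioned beam data rather than the defaults shown, and heterogeneity or off-axis corrections may be needed.

      def mu_check(dose_cGy, dose_per_mu_ref=1.0, Sc=1.0, Sp=1.0, tmr=1.0,
                   wedge_factor=1.0, tray_factor=1.0, off_axis_ratio=1.0):
          """Verification MU = prescribed point dose / product of dosimetric factors."""
          return dose_cGy / (dose_per_mu_ref * Sc * Sp * tmr
                             * wedge_factor * tray_factor * off_axis_ratio)

      # Example: 200 cGy per fraction with Sc = Sp = 1.01 and TMR = 0.85 gives ~231 MU.
      mu = mu_check(200.0, Sc=1.01, Sp=1.01, tmr=0.85)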

  14. Verification of monitor unit calculations for non-IMRT clinical radiotherapy: Report of AAPM Task Group 114

    SciTech Connect

    Stern, Robin L.; Heaton, Robert; Fraser, Martin W.; and others

    2011-01-15

    The requirement of an independent verification of the monitor units (MU) or time calculated to deliver the prescribed dose to a patient has been a mainstay of radiation oncology quality assurance. The need for and value of such a verification was obvious when calculations were performed by hand using look-up tables, and the verification was achieved by a second person independently repeating the calculation. However, in a modern clinic using CT/MR/PET simulation, computerized 3D treatment planning, heterogeneity corrections, and complex calculation algorithms such as convolution/superposition and Monte Carlo, the purpose of and methodology for the MU verification have come into question. In addition, since the verification is often performed using a simpler geometrical model and calculation algorithm than the primary calculation, exact or almost exact agreement between the two can no longer be expected. Guidelines are needed to help the physicist set clinically reasonable action levels for agreement. This report addresses the following charges of the task group: (1) To re-evaluate the purpose and methods of the "independent second check" for monitor unit calculations for non-IMRT radiation treatment in light of the complexities of modern-day treatment planning. (2) To present recommendations on how to perform verification of monitor unit calculations in a modern clinic. (3) To provide recommendations on establishing action levels for agreement between primary calculations and verification, and to provide guidance in addressing discrepancies outside the action levels. These recommendations are to be used as guidelines only and shall not be interpreted as requirements.

  15. SU-E-T-371: Evaluating the Convolution Algorithm of a Commercially Available Radiosurgery Irradiator Using a Novel Phantom

    SciTech Connect

    Cates, J; Drzymala, R

    2015-06-15

    Purpose: The purpose of this study was to develop and use a novel phantom to evaluate the accuracy and usefulness of the Leksell GammaPlan convolution-based dose calculation algorithm compared with the current TMR10 algorithm. Methods: A novel phantom was designed to fit the Leksell Gamma Knife G frame which could accommodate various materials in the form of one inch diameter, cylindrical plugs. The plugs were split axially to allow EBT2 film placement. Film measurements were made during two experiments. The first utilized plans generated on a homogeneous acrylic phantom setup using the TMR10 algorithm, with various materials inserted into the phantom during film irradiation to assess the effect on delivered dose due to unplanned heterogeneities upstream in the beam path. The second experiment utilized plans made on CT scans of different heterogeneous setups, with one plan using the TMR10 dose calculation algorithm and the second using the convolution-based algorithm. Materials used to introduce heterogeneities included air, LDPE, polystyrene, Delrin, Teflon, and aluminum. Results: The data show that, as would be expected, having heterogeneities in the beam path does induce dose delivery error when using the TMR10 algorithm, with the largest errors being due to the heterogeneities with electron densities most different from that of water, i.e. air, Teflon, and aluminum. Additionally, the convolution algorithm did account for the heterogeneous material and provided a more accurate predicted dose, in extreme cases up to a 7–12% improvement over the TMR10 algorithm. The convolution algorithm expected dose was accurate to within 3% in all cases. Conclusion: This study proves that the convolution algorithm is an improvement over the TMR10 algorithm when heterogeneities are present. More work is needed to determine what the heterogeneity size/volume limits are where this improvement exists, and in what clinical and/or research cases this would be relevant.

  16. Scrambled coherent superposition for enhanced optical fiber communication in the nonlinear transmission regime.

    PubMed

    Liu, Xiang; Chandrasekhar, S; Winzer, P J; Chraplyvy, A R; Tkach, R W; Zhu, B; Taunay, T F; Fishteyn, M; DiGiovanni, D J

    2012-08-13

    Coherent superposition of light waves has long been used in various fields of science, and recent advances in digital coherent detection and space-division multiplexing have enabled the coherent superposition of information-carrying optical signals to achieve better communication fidelity on amplified-spontaneous-noise limited communication links. However, fiber nonlinearity introduces highly correlated distortions on identical signals and diminishes the benefit of coherent superposition in nonlinear transmission regime. Here we experimentally demonstrate that through coordinated scrambling of signal constellations at the transmitter, together with appropriate unscrambling at the receiver, the full benefit of coherent superposition is retained in the nonlinear transmission regime of a space-diversity fiber link based on an innovatively engineered multi-core fiber. This scrambled coherent superposition may provide the flexibility of trading communication capacity for performance in future optical fiber networks, and may open new possibilities in high-performance and secure optical communications.

  17. ASIC-based architecture for the real-time computation of 2D convolution with large kernel size

    NASA Astrophysics Data System (ADS)

    Shao, Rui; Zhong, Sheng; Yan, Luxin

    2015-12-01

    Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. To improve the efficiency of on-chip storage resources and to reduce the required off-chip bandwidth, a data-cache reuse scheme is proposed. Multi-block SPRAM is used to cache image data across blocks, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation; a new ASIC data scheduling scheme and overall architecture are designed around it. Experimental results show that the structure can perform real-time convolution with kernels up to 40 x 32 in size, improves the utilization of on-chip memory bandwidth and on-chip memory resources, and satisfies the conditions for maximizing data throughput while reducing the need for off-chip memory bandwidth.

  18. Colonoscopic polyp detection using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Park, Sun Young; Sargent, Dusty

    2016-03-01

    Computer aided diagnosis (CAD) systems for medical image analysis rely on accurate and efficient feature extraction methods. Regardless of which type of classifier is used, the results will be limited if the input features are not diagnostically relevant and do not properly discriminate between the different classes of images. Thus, a large amount of research has been dedicated to creating feature sets that capture the salient features that physicians are able to observe in the images. Successful feature extraction reduces the semantic gap between the physician's interpretation and the computer representation of images, and helps to reduce the variability in diagnosis between physicians. Due to the complexity of many medical image classification tasks, feature extraction for each problem often requires domain-specific knowledge and a carefully constructed feature set for the specific type of images being classified. In this paper, we describe a method for automatic diagnostic feature extraction from colonoscopy images that may have general application and require a lower level of domain-specific knowledge. The work in this paper expands on our previous CAD algorithm for detecting polyps in colonoscopy video. In that work, we applied an eigenimage model to extract features representing polyps, normal tissue, diverticula, etc. from colonoscopy videos taken from various viewing angles and imaging conditions. Classification was performed using a conditional random field (CRF) model that accounted for the spatial and temporal adjacency relationships present in colonoscopy video. In this paper, we replace the eigenimage feature descriptor with features extracted from a convolutional neural network (CNN) trained to recognize the same image types in colonoscopy video. The CNN-derived features show greater invariance to viewing angles and image quality factors when compared to the eigenimage model. The CNN features are used as input to the CRF classifier as before. We report

  19. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Comparison of dose calculation algorithms in phantoms with lung equivalent heterogeneities under conditions of lateral electronic disequilibrium

    SciTech Connect

    Carrasco, P.; Jornet, N.; Duch, M.A.; Weber, L.; Ginjaume, M.; Eudaldo, T.; Jurado, D.; Ruiz, A.; Ribas, M.

    2004-10-01

    An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung equivalent heterogeneity, by means of several detectors and compared against the predicted dose values by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane parallel and cylindrical ionization chambers and beam profiles with films. Additionally, Monte Carlo simulations by means of the PENELOPE code were performed. Four different field sizes (10x10, 5x5, 2x2, and 1x1 cm²) and two lung equivalent materials (CIRS, ρ_e^w = 0.195 and St. Bartholomew Hospital, London, ρ_e^w = 0.244-0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that correlated successfully with the measured values with a 2% average inside all media were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose in the beam axis was found for the EqTAR algorithm inside the CIRS lung equivalent material in a 2x2 cm² 18 MV x-ray beam. In these conditions, average and maximum difference against the TLD measurements were 32% and 39%, respectively. In the water equivalent part of the phantom every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces where differences up to 24% were found for 2x2 cm² 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo

  1. Comparison of dose calculation algorithms in phantoms with lung equivalent heterogeneities under conditions of lateral electronic disequilibrium.

    PubMed

    Carrasco, P; Jornet, N; Duch, M A; Weber, L; Ginjaume, M; Eudaldo, T; Jurado, D; Ruiz, A; Ribas, M

    2004-10-01

    An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung equivalent heterogeneity, by means of several detectors and compared against the predicted dose values by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane parallel and cylindrical ionization chambers and beam profiles with films. Additionally, Monte Carlo simulations by means of the PENELOPE code were performed. Four different field sizes (10 x 10, 5 x 5, 2 x 2, and 1 x 1 cm²) and two lung equivalent materials (CIRS, ρ_e^w = 0.195 and St. Bartholomew Hospital, London, ρ_e^w = 0.244-0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that correlated successfully with the measured values with a 2% average inside all media were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose in the beam axis was found for the EqTAR algorithm inside the CIRS lung equivalent material in a 2 x 2 cm² 18 MV x-ray beam. In these conditions, average and maximum difference against the TLD measurements were 32% and 39%, respectively. In the water equivalent part of the phantom every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces where differences up to 24% were found for 2 x 2 cm² 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo simulations, yielding minimal

  2. Advanced superposition methods for high speed turbopump vibration analysis

    NASA Technical Reports Server (NTRS)

    Nielson, C. E.; Campany, A. D.

    1981-01-01

    The small, high pressure Mark 48 liquid hydrogen turbopump was analyzed and dynamically tested to determine the cause of high speed vibration at an operating speed of 92,400 rpm. This approaches the design point operating speed of 95,000 rpm. The initial dynamic analysis in the design stage and subsequent further analysis of the rotor only dynamics failed to predict the vibration characteristics found during testing. An advanced procedure for dynamics analysis was used in this investigation. The procedure involves developing accurate dynamic models of the rotor assembly and casing assembly by finite element analysis. The dynamically instrumented assemblies are independently rap tested to verify the analytical models. The verified models are then combined by modal superposition techniques to develop a completed turbopump model where dynamic characteristics are determined. The results of the dynamic testing and analysis obtained are presented and methods of moving the high speed vibration characteristics to speeds above the operating range are recommended. Recommendations for use of these advanced dynamic analysis procedures during initial design phases are given.
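
    A minimal sketch of the modal superposition step mentioned above, using the generic textbook formulation for mass-normalised modes rather than the Mark 48 models themselves: the steady-state response to a harmonic force is assembled as a sum of single-degree-of-freedom modal contributions.

      import numpy as np

      def harmonic_response(mode_shapes, omegas, zetas, force, omega):
          """Complex displacement response x(omega) = sum_i phi_i * q_i(omega),
          with mode_shapes of shape (n_dof, n_modes), natural frequencies omegas,
          modal damping ratios zetas, and a force vector of length n_dof."""
          x = np.zeros(mode_shapes.shape[0], dtype=complex)
          for phi, wn, z in zip(mode_shapes.T, omegas, zetas):
              # Modal force divided by the SDOF transfer function denominator.
              q = (phi @ force) / (wn**2 - omega**2 + 2j * z * wn * omega)
              x += phi * q
          return x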

  3. Hybrid multi-Bernoulli CPHD filter for superpositional sensors

    NASA Astrophysics Data System (ADS)

    Nannuru, Santosh; Coates, Mark

    2014-06-01

    We propose, for the super-positional sensor scenario, a hybrid between the multi-Bernoulli filter and the cardinalized probability hypothesis density (CPHD) filter. We use a multi-Bernoulli random finite set (RFS) to model existing targets and we use an independent and identically distributed cluster (IIDC) RFS to model newborn targets and targets with low probability of existence. Our main contributions are providing the update equations of the hybrid filter and identifying computationally tractable approximations. We achieve this by defining conditional probability hypothesis densities (PHDs), where the conditioning is on one of the targets having a specified state. The filter performs an approximate Bayes update of the conditional PHDs. In parallel, we perform a cardinality update of the IIDC RFS component in order to estimate the number of newborn targets. We provide an auxiliary particle filter based implementation of the proposed filter and compare it with CPHD and multi-Bernoulli filters in a simulated multitarget tracking application.

  4. Statistical moments in superposition models and strongly intensive measures

    NASA Astrophysics Data System (ADS)

    Broniowski, Wojciech; Olszewski, Adam

    2017-06-01

    First, we present a concise glossary of formulas for composition of standard, cumulant, factorial, and factorial cumulant moments in superposition (compound) models, where final particles are created via independent emission from a collection of sources. Explicit mathematical formulas for the composed moments are given to all orders. We discuss the composition laws for various types of moments via the generating-function methods and list the formulas for the unfolding of the unwanted fluctuations. Second, the technique is applied to the difference of the scaled multiplicities of two particle types. This allows for a systematic derivation and a simple algebraic interpretation of the so-called strongly intensive fluctuation measures. With the help of the formalism we obtain several new strongly intensive measures involving higher-rank moments. The reviewed as well as the new results may be useful in investigations of mechanisms of particle production and event-by-event fluctuations in high-energy nuclear and hadronic collisions, and in particular in the search for signatures of the QCD phase transition at a finite baryon density.
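
    For orientation (a standard compound-distribution identity, not a result specific to this paper): if the number of sources N has generating function H(z) and each source independently emits particles with generating function g(z), then the generating function of the total multiplicity n and its lowest moments satisfy

    ```latex
    G_{n}(z) = H\bigl(g(z)\bigr), \qquad
    \langle n \rangle = \langle N \rangle \, \langle m \rangle, \qquad
    \mathrm{Var}(n) = \langle N \rangle \, \mathrm{Var}(m)
                    + \mathrm{Var}(N) \, \langle m \rangle^{2},
    ```

    where m denotes the multiplicity produced by a single source.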

  5. High-order harmonic generation via multicolor beam superposition

    NASA Astrophysics Data System (ADS)

    Sarikhani, S.; Batebi, S.

    2017-09-01

    In this article, femtosecond pulses specially designed by multicolor beam superposition are used for high-order harmonic generation. To this end, the spectral spacing between the beams and their widths are taken to be small, i.e., less than 1 nm. Applying a Gaussian distribution to the beam intensities leads to more distinct pulses, and these pulses are seen to carry an intrinsic linear chirp. By changing the width of the Gaussian distribution, several pulses with different bandwidths and hence various pulse durations can be obtained, which makes it possible to study the influence of such broadband pulses, in contrast with monochromatic pulses, on atomic or molecular targets. We therefore studied numerically the effect of these femtosecond pulses on the behavior of the high-order harmonics generated by the interaction between the pulse and atomic hydrogen. For this study, the beam intensities were adjusted so that the resulting pulse intensity lies in the over-barrier ionization regime, which makes the power spectrum of the high-order harmonics more extensive. The cutoff frequency of the power spectrum, the intensity of the first harmonic, and its shift from the incident pulse are investigated. Additionally, the maximum ionization probability as a function of the pulse bandwidth was also studied.
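
    As a rough numerical sketch of the multicolor superposition itself (not the authors' HHG simulation), the snippet below builds a pulse by summing closely spaced monochromatic fields with Gaussian-weighted amplitudes; the wavelength spacing, number of beams and weights are illustrative assumptions.

    ```python
    import numpy as np

    c = 2.998e8                                   # speed of light, m/s
    lam0, dlam, nbeams = 800e-9, 0.2e-9, 41       # centre wavelength, spacing, count
    lams = lam0 + dlam * (np.arange(nbeams) - nbeams // 2)
    omegas = 2 * np.pi * c / lams                 # slightly unequal frequency spacing -> chirp
    amps = np.exp(-0.5 * ((lams - lam0) / (4 * dlam)) ** 2)   # Gaussian weights

    t = np.linspace(-2e-12, 2e-12, 20000)         # 4 ps window, 0.2 fs step
    field = (amps[:, None] * np.cos(omegas[:, None] * t)).sum(axis=0)
    envelope = np.abs(field)                      # crude envelope (a Hilbert transform is cleaner)
    print(t[np.argmax(envelope)], envelope.max())
    ```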

  6. Solar Supergranulation Revealed as a Superposition of Traveling Waves

    NASA Technical Reports Server (NTRS)

    Gizon, L.; Duvall, T. L., Jr.; Schou, J.; Oegerle, William (Technical Monitor)

    2002-01-01

    40 years ago two new solar phenomena were described: supergranulation and the five-minute solar oscillations. While the oscillations have since been explained and exploited to determine the properties of the solar interior, the supergranulation has remained unexplained. The supergranules, appearing as convective-like cellular patterns of horizontal outward flow with a characteristic diameter of 30 Mm and an apparent lifetime of 1 day, have puzzling properties, including their apparent superrotation and the minute temperature variations over the cells. Using a 60-day sequence of data from the MDI (Michelson-Doppler Imager) instrument onboard the SOHO (Solar and Heliospheric Observatory) spacecraft, we show that the supergranulation pattern is formed by a superposition of traveling waves with periods of 5-10 days. The wave power is anisotropic with excess power in the direction of rotation and toward the equator, leading to spurious rotation rates and north-south flows as derived from correlation analyses. These newly discovered waves could play an important role in maintaining differential rotation in the upper convection zone by transporting angular momentum towards the equator.

  7. Coherent Superposition of Multi - Exciton Complexes in Semiconductor Nanocrystals

    NASA Astrophysics Data System (ADS)

    Shabaev, Andrew

    2005-03-01

    Very efficient multi-exciton generation has been recently observed in nanocrystals, where an optically excited electron-hole pair with an energy greater than the bandgap (Eg) produces one or more additional electron-hole pairs [1,2]. We present a theory of multiple exciton generation in nanocrystals. We have shown that very efficient and fast exciton generation in nanocrystals occurs by the optical excitation of a coherent superposition of multi-exciton states by a single photon. This model explains the ultrafast dynamics of optical bleaching that arises from state filling, including quantum beats between the multi-exciton states. We have also shown that although highly efficient multiple exciton generation begins at a photon energy of 3Eg, the threshold of multiple exciton generation is 2Eg, not 3Eg as was suggested previously. 1. R. Schaller and V. Klimov, Phys. Rev. Lett. 92, 186601 (2004). 2. R. J. Ellingson, M. C. Beard, P. Yu, O. I. Micic, A. J. Nozik, A. Shabaev, and Al. L. Efros, submitted.

  8. Superposition, Transition Probabilities and Primitive Observables in Infinite Quantum Systems

    NASA Astrophysics Data System (ADS)

    Buchholz, Detlev; Størmer, Erling

    2015-10-01

    The concepts of superposition and of transition probability, familiar from pure states in quantum physics, are extended to locally normal states on funnels of type I∞ factors. Such funnels are used in the description of infinite systems, appearing for example in quantum field theory or in quantum statistical mechanics; their respective constituents are interpreted as algebras of observables localized in an increasing family of nested spacetime regions. Given a generic reference state (expectation functional) on a funnel, e.g. a ground state or a thermal equilibrium state, it is shown that irrespective of the global type of this state all of its excitations, generated by the adjoint action of elements of the funnel, can coherently be superimposed in a meaningful manner. Moreover, these states are the extreme points of their convex hull and as such are analogues of pure states. As further support of this analogy, transition probabilities are defined, complete families of orthogonal states are exhibited and a one-to-one correspondence between the states and families of minimal projections on a Hilbert space is established. The physical interpretation of these quantities relies on a concept of primitive observables. It extends the familiar framework of observable algebras and avoids some counter intuitive features of that setting. Primitive observables admit a consistent statistical interpretation of corresponding measurements and their impact on states is described by a variant of the von Neumann-Lüders projection postulate.

  9. Vibration analysis of FG cylindrical shells with power-law index using discrete singular convolution technique

    NASA Astrophysics Data System (ADS)

    Mercan, Kadir; Demir, Çiǧdem; Civalek, Ömer

    2016-01-01

    In the present manuscript, the free vibration response of circular cylindrical shells made of functionally graded material (FGM) is investigated. The method of discrete singular convolution (DSC) is used for the numerical solution of the governing equation of motion of the FGM cylindrical shell. The constitutive relations are based on Love's first-approximation shell theory. The material properties are graded in the thickness direction according to a volume-fraction power law index. Frequency values are calculated for different types of boundary conditions and different material and geometric parameters. In general, close agreement between the obtained results and those of other researchers has been found.

  10. SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac

    SciTech Connect

    Sugimoto, S; Inoue, T; Kurokawa, C; Usui, K; Sasai, K; Utsunomiya, S; Ebe, K

    2014-06-01

    Purpose: Dynamic tumor tracking using the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) has become available for cases in which respiratory motion is significant. The irradiation accuracy of dynamic tumor tracking has been reported to be excellent. In addition to the irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because multiple respiratory phases have to be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the additional degrees of freedom of the gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using a linear transformation between coordinate systems. Methods: The linear transformation matrices between the coordinate systems with and without gimbal swings were constructed as combinations of translation and rotation matrices. A coordinate system in which the radiation source is at the origin and the beam axis lies along the z axis was adopted. The transformation can be divided into the translation from the radiation source to the gimbal rotation center, the two rotations around the center corresponding to the gimbal swings, and the translation from the gimbal center back to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can be performed as if there were no gimbal swing. The algorithm was implemented in the treatment planning system PlanUNC (University of North Carolina, NC). The convolution/superposition algorithm was used. The dose calculations with and without gimbal swings were performed for a 3 × 3 cm2 field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam. No significant additional time due to the gimbal swing was observed. Conclusions: The dose calculation algorithm for the finite gimbal swing was implemented. The calculation time was moderate.
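
    A minimal sketch of the coordinate transformation described in the Methods, assuming the source sits at the origin with the beam along +z and the gimbal rotation centre a distance d along z; d, the swing angles and the rotation order are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def translation(t):
        m = np.eye(4); m[:3, 3] = t; return m

    def rot_x(a):                     # tilt about the x axis
        c, s = np.cos(a), np.sin(a)
        m = np.eye(4); m[1, 1] = c; m[1, 2] = -s; m[2, 1] = s; m[2, 2] = c; return m

    def rot_y(a):                     # pan about the y axis
        c, s = np.cos(a), np.sin(a)
        m = np.eye(4); m[0, 0] = c; m[0, 2] = s; m[2, 0] = -s; m[2, 2] = c; return m

    d = 1000.0                                        # mm, assumed source-to-gimbal distance
    pan, tilt = np.radians(1.5), np.radians(-0.8)     # assumed gimbal swing angles

    # translate to the assumed gimbal centre, apply the two swings, translate back
    T = translation([0, 0, -d]) @ rot_x(tilt) @ rot_y(pan) @ translation([0, 0, d])

    point = np.array([10.0, 0.0, 500.0, 1.0])         # a point in beam coordinates (mm)
    print(T @ point)   # after this mapping the dose engine can proceed as if there were no swing
    ```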

  11. Brain and art: illustrations of the cerebral convolutions. A review.

    PubMed

    Lazić, D; Marinković, S; Tomić, I; Mitrović, D; Starčević, A; Milić, I; Grujičić, M; Marković, B

    2014-08-01

    The aesthetics and functional significance of the cerebral cortical relief gave us the idea of finding out how often the convolutions are presented in fine art, with which techniques, and with what conceptual meaning and pathophysiological aspects. We examined 27,614 art works created by 2,856 authors and presented in the art literature and in Google image searches. The cerebral gyri were shown in 0.85% of the art works, created by 2.35% of the authors. The concept of the brain was first mentioned in ancient Egypt some 3,700 years ago. The first artistic drawing of the convolutions was made by Leonardo da Vinci, and the first colour picture by an unknown Italian author. Rembrandt van Rijn was the first to paint the gyri. Dozens of modern authors, who are professional artists, medical experts or designers, have presented the cerebral convolutions in drawings, paintings, digital works or sculptures, with various aesthetic, symbolic and metaphorical connotations. Some artistic compositions and natural forms show a gyral pattern. The convolutions, whose cortical layers enable the cognitive functions, can be affected by various disorders. Some artists suffered from those disorders, and others presented them in their artworks. The cerebral convolutions or gyri, thanks to their extensive cortical mantle, are the specific morphological basis for the human mind, but also structures with their own aesthetics. Contemporary authors relatively often depict or model the cerebral convolutions, either from the aesthetic or the conceptual aspect. In this way, they make a connection between neuroscience and fine art.

  12. Low solubility in drug development: de-convoluting the relative importance of solvation and crystal packing.

    PubMed

    Docherty, Robert; Pencheva, Klimentina; Abramov, Yuriy A

    2015-06-01

    An increasing trend towards low solubility is a major issue for drug development, as formulation of low-solubility compounds can be problematic. This paper presents a model which de-convolutes the solubility of pharmaceutical compounds into solvation and packing properties, with the intention of understanding the solubility-limiting features. The Cambridge Crystallographic Database was the source of structural information. Lattice energies were calculated via force-field-based approaches using Materials Studio. The solvation energies were calculated by applying quantum chemistry models using Cosmotherm software. The solubilities of 54 drug-like compounds were mapped onto a solvation energy/crystal packing grid. Four quadrants were identified in which different balances of solvation and packing define the solubility. A version of the model was developed which allows the two features to be calculated even in the absence of a crystal structure. Although there are a significant number of in silico models, it has proven very difficult to predict aqueous solubility accurately. Therefore, we have taken a different approach in which the solubility is not predicted directly but is de-convoluted into two constituent features. © 2015 Royal Pharmaceutical Society.

  13. Inequalities and consequences of new convolutions for the fractional Fourier transform with Hermite weights

    NASA Astrophysics Data System (ADS)

    Anh, P. K.; Castro, L. P.; Thao, P. T.; Tuan, N. M.

    2017-01-01

    This paper presents new convolutions for the fractional Fourier transform which are somehow associated with the Hermite functions. Consequent inequalities and properties are derived for these convolutions, among which we emphasize two new types of Young's convolution inequalities. The results guarantee a general framework where the present convolutions are well-defined, allowing larger possibilities than the known ones for other convolutions. Furthermore, we exemplify the use of our convolutions by providing explicit solutions of some classes of integral equations which appear in engineering problems.
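
    For reference, the classical Young convolution inequality for the ordinary convolution reads as follows; the paper establishes analogues of this inequality for its new fractional Fourier transform convolutions with Hermite weights.

    ```latex
    \|f * g\|_{r} \;\le\; \|f\|_{p}\,\|g\|_{q},
    \qquad \frac{1}{p} + \frac{1}{q} = 1 + \frac{1}{r},
    \qquad p,\, q,\, r \ge 1 .
    ```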

  14. A reciprocal space approach for locating symmetry elements in Patterson superposition maps

    SciTech Connect

    Hendrixson, T.

    1990-09-21

    A method for determining the location and possible existence of symmetry elements in Patterson superposition maps has been developed. A comparison of the original superposition map and a superposition map operated on by the symmetry element gives possible translations to the location of the symmetry element. A reciprocal space approach using structure factor-like quantities obtained from the Fourier transform of the superposition function is then used to determine the "best" location of the symmetry element. Constraints based upon the space group requirements are also used as a check on the locations. The locations of the symmetry elements are used to modify the Fourier transform coefficients of the superposition function to give an approximation of the structure factors, which are then refined using the EG relation. The analysis of several compounds using this method is presented. Reciprocal space techniques for locating multiple images in the superposition function are also presented, along with methods to remove the effect of multiple images in the Fourier transform coefficients of the superposition map. In addition, crystallographic studies of the extended chain structure of (NHC5H5)SbI4 and of the twinning method of the orthorhombic form of the high-Tc superconductor YBa2Cu3O7-x are presented. 54 refs.

  15. Output-sensitive 3D line integral convolution.

    PubMed

    Falk, Martin; Weiskopf, Daniel

    2008-01-01

    We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance

  16. Spatially variant convolution with scaled B-splines.

    PubMed

    Muñoz-Barrutia, Arrate; Artaechevarria, Xabier; Ortiz-de-Solorzano, Carlos

    2010-01-01

    We present an efficient algorithm to compute multidimensional spatially variant convolutions--or inner products--between N-dimensional signals and B-splines--or their derivatives--of any order and arbitrary sizes. The multidimensional B-splines are computed as tensor products of 1-D B-splines, and the input signal is expressed in a B-spline basis. The convolution is then computed by using an adequate combination of integration and scaled finite differences as to have, for moderate and large scale values, a computational complexity that does not depend on the scaling factor. To show in practice the benefit of using our spatially variant convolution approach, we present an adaptive noise filter that adjusts the kernel size to the local image characteristics and a high sensitivity local ridge detector.
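
    The degree-0 case of this scheme (a box kernel handled with one running sum and one finite difference) already illustrates why the per-sample cost can be made independent of the kernel scale. The sketch below is a generic one-dimensional illustration under that assumption, not the authors' N-dimensional B-spline implementation.

    ```python
    import numpy as np

    def box_filter(signal, radius):
        """Moving average of width 2*radius + 1 via one running sum;
        the cost per sample does not depend on the window size."""
        width = 2 * radius + 1
        padded = np.pad(signal, radius, mode='edge')
        csum = np.concatenate(([0.0], np.cumsum(padded)))
        return (csum[width:] - csum[:-width]) / width

    x = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.3 * np.random.randn(200)
    # A spatially variant filter would pick a different radius per sample;
    # two fixed scales are shown here for brevity.
    y_small = box_filter(x, 2)
    y_large = box_filter(x, 20)
    print(y_small.shape, y_large.shape)
    ```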

  17. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  18. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  20. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, T.E.; Franke, O.L.; Bennett, G.D.

    1984-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that solutions to individual problems can be added together to obtain solutions to complex problems. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader. (USGS)
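
    A standard textbook illustration of the principle (not taken from the report): because the confined-flow equation for drawdown is linear, the drawdowns produced by several wells pumping individually can simply be added,

    ```latex
    \nabla^{2} s_{i} = \frac{S}{T}\,\frac{\partial s_{i}}{\partial t}
    \quad (i = 1, 2, \dots),
    \qquad
    s_{\mathrm{total}}(x, y, t) = \sum_{i} s_{i}(x, y, t),
    ```

    provided each individual drawdown s_i satisfies homogeneous (zero-drawdown) boundary and initial conditions.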

  1. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.

    1987-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that problem solutions can be added together to obtain composite solutions. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader.

  2. Structure of optical singularities in coaxial superpositions of Laguerre-Gaussian modes.

    PubMed

    Ando, Taro; Matsumoto, Naoya; Ohtake, Yoshiyuki; Takiguchi, Yu; Inoue, Takashi

    2010-12-01

    We investigate optical singularities in coaxial superpositions of two Laguerre-Gaussian (LG) modes with a common beam waist from the viewpoints of a general formulation of phase structure, experimental generation of various superposition beams, and evaluation of the generated beams' fidelity. By applying a holographic phase-amplitude modulation scheme using a phase-modulation-type spatial light modulator, output fidelity beyond 0.960 was observed under several typical conditions. Additionally, an elliptic-type folded singularity, which provides a different class of phase structures from familiar helical singularities, was predicted and observed in a superposition involving two LG modes of both radially and azimuthally higher orders.

  3. Spectral interpolation - Zero fill or convolution. [image processing

    NASA Technical Reports Server (NTRS)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
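
    A minimal sketch of the zero-fill baseline discussed above (the proposed repetitive-convolution interpolator is not reproduced here); the function name and test signal are illustrative.

    ```python
    import numpy as np

    def zero_fill_interpolate(x, factor):
        """Interpolate a real, uniformly sampled signal by zero-padding its spectrum."""
        n = len(x)
        X = np.fft.rfft(x)
        m = n * factor
        X_padded = np.zeros(m // 2 + 1, dtype=complex)
        X_padded[:len(X)] = X
        return np.fft.irfft(X_padded, n=m) * factor   # rescale for the longer grid

    x = np.cos(2 * np.pi * 3 * np.arange(32) / 32)
    y = zero_fill_interpolate(x, 4)
    print(len(x), len(y))   # 32 samples -> 128 samples on a 4x finer grid
    ```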

  4. A high-order fast method for computing convolution integral with smooth kernel

    NASA Astrophysics Data System (ADS)

    Qiang, Ji

    2010-02-01

    In this paper we report on a high-order fast method to numerically calculate the convolution integral with a smooth non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for the discrete summation. The method can have an arbitrarily high order of accuracy in principle, depending on the number of points used in the integral approximation, and a computational cost of O(N log(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h^4), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.
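
    A rough sketch of the idea (composite Simpson weights for the quadrature plus an FFT for the discrete sum); the grid, kernel and test function are illustrative assumptions, and this is not the paper's implementation.

    ```python
    import numpy as np

    def convolution_integral(f_vals, kernel_func, h):
        """Approximate g(x_i) = integral f(y) K(x_i - y) dy on a uniform grid
        using composite Simpson weights and an FFT-based linear convolution."""
        n = len(f_vals)                         # must be odd for composite Simpson
        w = np.ones(n)
        w[1:-1:2], w[2:-1:2] = 4.0, 2.0
        w *= h / 3.0
        offsets = np.arange(-(n - 1), n) * h    # kernel sampled at all offsets x_i - y_j
        k = kernel_func(offsets)
        m = 1 << int(np.ceil(np.log2(3 * n)))   # FFT size large enough for linear convolution
        conv = np.fft.irfft(np.fft.rfft(w * f_vals, m) * np.fft.rfft(k, m), m)
        return conv[n - 1:2 * n - 1]            # the n aligned outputs g(x_0..x_{n-1})

    # Example: continuous Gauss transform of a narrow Gaussian
    y = np.linspace(-2.0, 2.0, 401)
    h = y[1] - y[0]
    f = np.exp(-y**2 / (2 * 0.1**2))
    g = convolution_integral(f, lambda u: np.exp(-u**2 / (2 * 0.3**2)), h)
    print(len(y), len(g))
    ```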

  5. A high-order fast method for computing convolution integral with smooth kernel

    SciTech Connect

    Qiang, Ji

    2009-09-28

    In this paper we report on a high-order fast method to numerically calculate convolution integral with smooth non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for discrete summation. The method can have an arbitrarily high-order accuracy in principle depending on the number of points used in the integral approximation and a computational cost of O(N log(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h^4), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.

  6. Tensor-polarized structure function b1 in the standard convolution description of the deuteron

    NASA Astrophysics Data System (ADS)

    Cosyn, W.; Dong, Yu-Bing; Kumano, S.; Sargsian, M.

    2017-04-01

    Tensor-polarized structure functions of a spin-1 hadron are additional observables, which do not exist for the spin-1/2 nucleon. They could probe novel aspects of the internal hadron structure. Twist-2 tensor-polarized structure functions are b1 and b2, and they are related by the Callan-Gross-like relation in the Bjorken scaling limit. In this work, we theoretically calculate b1 in the standard convolution description for the deuteron. Two different theoretical models, a basic convolution description and a virtual nucleon approximation, are used for calculating b1, and their results are compared with the HERMES measurement. We found large differences between our theoretical results and the data. Although there is still room to improve by considering higher-twist effects and in the experimental extraction of b1 from the spin asymmetry A_zz, there is a possibility that the large differences require physics beyond the standard deuteron model for their interpretation. Future b1 studies could shed light on a new field of hadron physics. In particular, detailed experimental studies of b1 will start soon at the Thomas Jefferson National Accelerator Facility. In addition, there are possibilities to investigate tensor-polarized parton distribution functions and b1 at Fermi National Accelerator Laboratory and a future electron-ion collider. Therefore, further theoretical studies are needed for understanding the tensor structure of the spin-1 deuteron, including a new mechanism to explain the large differences between the current data and our theoretical results.

  7. SU-E-T-328: The Volume Effect Correction of Probe-Type Dosimetric Detectors Derived From the Convolution Model

    SciTech Connect

    Looe, HK; Poppe, B; Harder, D

    2014-06-01

    Purpose: To derive and introduce a new correction factor kV, the “volume effect correction factor”, that accounts not only for the dose averaging over the detector's sensitive volume but also for the secondary electron generation and transport, inclusive of the disturbance of the field of secondary electrons within the detector. Materials and Methods: Mathematical convolutions and Fourier's convolution theorem have been used. Monte Carlo simulations of photon pencil beams were performed using EGSnrc. Detector constructions were adapted from manufacturers' information. Results: For the calculation of kV, three basic convolution kernels have to be taken into account: the dose deposition kernel KD(x) (fluence to dose), the photon fluence response kernel KM(x) (photon fluence to detector signal) and the “dose response kernel” K(x) (dose to detector signal). K(x) is calculated from FT[K(x)] = [1/sqrt(2π)] FT[KM(x)]/FT[KD(x)], from which kV can be calculated for arbitrary photon beam profiles using the area-normalized K(x). Conclusions: In order to account for the dimensions of dosimetric detectors in narrow photon beams, the “volume effect correction factor” kV has been introduced into the fundamental equation of probe-type dosimetry, and the convolution method has proven to be a suitable method for the derivation of its numerical values. For narrow photon beams, whose width is comparable to the secondary electron ranges, kV can reach very high values, but it can be shown that the signals of small diamond detectors represent well the absorbed dose to water averaged over the detector volume.
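
    Written out with a symmetric Fourier-transform convention (a restatement for orientation, not an addition to the abstract), the three kernels are linked by a convolution, which is what the deconvolution formula above expresses:

    ```latex
    K_{M}(x) \;=\; (K * K_{D})(x) \;=\; \int K(x - x')\,K_{D}(x')\,dx'
    \quad\Longrightarrow\quad
    \widehat{K}(\nu) \;=\; \frac{1}{\sqrt{2\pi}}\,
    \frac{\widehat{K_{M}}(\nu)}{\widehat{K_{D}}(\nu)} .
    ```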

  8. Single crystal EPR, optical absorption and superposition model study of Cr 3+ doped ammonium dihydrogen phosphate

    NASA Astrophysics Data System (ADS)

    Kripal, Ram; Pandey, Sangita

    2010-06-01

    The electron paramagnetic resonance (EPR) studies are carried out on Cr3+ ion doped ammonium dihydrogen phosphate (ADP) single crystals at room temperature. Four magnetically inequivalent sites for chromium are observed. No hyperfine structure is obtained. The crystal-field and spin Hamiltonian parameters are calculated from the resonance lines obtained at different angular rotations. The zero-field and spin Hamiltonian parameters of the Cr3+ ion in ADP are calculated as: |D| = (257 ± 2) × 10^-4 cm^-1, |E| = (79 ± 2) × 10^-4 cm^-1, g = 1.9724 ± 0.0002 for site I; |D| = (257 ± 2) × 10^-4 cm^-1, |E| = (77 ± 2) × 10^-4 cm^-1, g = 1.9727 ± 0.0002 for site II; |D| = (259 ± 2) × 10^-4 cm^-1, |E| = (78 ± 2) × 10^-4 cm^-1, g = 1.9733 ± 0.0002 for site III; and |D| = (259 ± 2) × 10^-4 cm^-1, |E| = (77 ± 2) × 10^-4 cm^-1, g = 1.973 ± 0.0002 for site IV, respectively. The site symmetry of the Cr3+ doped single crystal is discussed on the basis of the EPR data. The Cr3+ ion enters the lattice substitutionally, replacing the NH4+ sites. The optical absorption spectra are recorded in the 195-925 nm wavelength range at room temperature. The energy values of different orbital levels are determined. On the basis of the EPR and optical data, the nature of bonding in the crystal is discussed. The calculated values of the Racah interelectronic repulsion parameters (B and C), the cubic crystal-field splitting parameter (Dq) and the nephelauxetic parameters (h and k) are: B = 640, C = 3070, Dq = 2067 cm^-1, h = 1.44 and k = 0.21, respectively. ZFS parameters are also determined using Bkq parameters from the superposition model.

  9. Fast and robust chromatic dispersion estimation based on temporal auto-correlation after digital spectrum superposition.

    PubMed

    Yao, Shuchang; Eriksson, Tobias A; Fu, Songnian; Johannisson, Pontus; Karlsson, Magnus; Andrekson, Peter A; Ming, Tang; Liu, Deming

    2015-06-15

    We investigate and experimentally demonstrate a fast and robust chromatic dispersion (CD) estimation method based on temporal auto-correlation after digital spectrum superposition. The estimation process is fast, because neither tentative CD scanning based on CD compensation nor specific cost function calculations are used. Meanwhile, the proposed CD estimation method is robust against polarization mode dispersion (PMD), amplified spontaneous emission (ASE) noise and fiber nonlinearity. Furthermore, the proposed CD estimation method can be used for various modulation formats and digital pulse shaping techniques. Only 4096 samples are necessary for CD estimation of a single-carrier 112 Gbps DP-QPSK or 224 Gbps DP-16QAM signal with various pulse shapes, and 8192 samples are sufficient for the root-raised-cosine pulse with a roll-off factor of 0.1. A standard deviation as low as 50 ps/nm, together with a worst-case estimation error of about 160 ps/nm, is experimentally obtained for a 7×112 Gbps DP-QPSK WDM signal after transmission through a 480 km to 9120 km single-mode fiber (SMF) loop using different launch powers.

  10. Bound eigenstates for the superposition of the Coulomb and the Yukawa potentials

    NASA Astrophysics Data System (ADS)

    Adamowski, Janusz

    1985-01-01

    The eigenvalue problem for two particles interacting through a potential that is the superposition of the attractive Coulomb potential (-A/r) and the Yukawa potential B exp(-Cr)/r, of arbitrary strength B and screening parameter C, is solved by variational means. The energy levels E_nl for the states 1s through 7i are calculated as functions of B and C. It is shown that for a given principal quantum number n the energy eigenvalues increase (decrease) with increasing azimuthal quantum number l if the Yukawa potential is attractive (repulsive), i.e., for l > l': E_nl >= E_nl' if B < 0, and E_nl <= E_nl' if B > 0. This leads to crossing of the energy levels with n >= 2. For B > 0 the levels with larger n and l become lower than those with smaller n and l, e.g., E3d

  11. SUPERPOSE-An excel visual basic program for fracture modeling based on the stress superposition method

    NASA Astrophysics Data System (ADS)

    Ismail Ozkaya, Sait

    2014-03-01

    An Excel Visual Basic program, SUPERPOSE, is presented to predict the distribution, relative size and strike of tensile and shear fractures on anticlinal structures. The program is based on the concept of stress superposition: the addition of curvature-related local tensile stress and the regional far-field stress. The method accurately predicts fractures on many Middle East oil fields that were formed under a strike-slip regime as duplexes, flower structures or inverted structures. The program operates on the Excel platform, reads the parameters and structural grid data from an Excel template and writes the results to the same template. It has two routines to import structural grid data in the Eclipse and Zmap formats. The platform of SUPERPOSE is a single-layer structural grid of a given cell size (e.g. 50×50 m). In the final output, a single tensile fracture or two conjugate shear fractures are placed in each cell if the fracturing criteria are satisfied; otherwise the cell is left blank. The strike of the representative fracture(s) is calculated exactly, whereas the length is an index of fracture porosity (fracture density × length × aperture) within that cell.

  12. Probing the conductance superposition law in single-molecule circuits with parallel paths.

    PubMed

    Vazquez, H; Skouta, R; Schneebeli, S; Kamenetska, M; Breslow, R; Venkataraman, L; Hybertsen, M S

    2012-10-01

    According to Kirchhoff's circuit laws, the net conductance of two parallel components in an electronic circuit is the sum of the individual conductances. However, when the circuit dimensions are comparable to the electronic phase coherence length, quantum interference effects play a critical role, as exemplified by the Aharonov-Bohm effect in metal rings. At the molecular scale, interference effects dramatically reduce the electron transfer rate through a meta-connected benzene ring when compared with a para-connected benzene ring. For longer conjugated and cross-conjugated molecules, destructive interference effects have been observed in the tunnelling conductance through molecular junctions. Here, we investigate the conductance superposition law for parallel components in single-molecule circuits, particularly the role of interference. We synthesize a series of molecular systems that contain either one backbone or two backbones in parallel, bonded together cofacially by a common linker on each end. Single-molecule conductance measurements and transport calculations based on density functional theory show that the conductance of a double-backbone molecular junction can be more than twice that of a single-backbone junction, providing clear evidence for constructive interference.

  13. Design and Evaluation of a Research-Based Teaching Sequence: The Superposition of Electric Field.

    ERIC Educational Resources Information Center

    Viennot, L.; Rainson, S.

    1999-01-01

    Illustrates an approach to research-based teaching strategies and their evaluation. Addresses a teaching sequence on the superposition of electric fields implemented at the college level in an institutional framework subject to severe constraints. Contains 28 references. (DDR)

  14. Superpositioning of Digital Elevation Data with Analog Imagery for Data Editing,

    DTIC Science & Technology

    1984-01-01

    The Topographic Developments Laboratory of the U.S. Army Engineer Topographic Laboratories (ETL) has established the Photogrammetric Technology ... Integration (PTI) testbed system for the evaluation of superpositioning techniques utilizing electronically scanned hardcopy imagery with overlaid digital

  15. Reproducible mesoscopic superpositions of Bose-Einstein condensates and mean-field chaos

    SciTech Connect

    Gertjerenken, Bettina; Arlinghaus, Stephan; Teichmann, Niklas; Weiss, Christoph

    2010-08-15

    In a parameter regime for which the mean-field (Gross-Pitaevskii) dynamics becomes chaotic, mesoscopic quantum superpositions in phase space can occur in a double-well potential, which is shaken periodically. For experimentally realistic initial states, such as the ground state of some 100 atoms, the emergence of mesoscopic quantum superpositions in phase space is investigated numerically. It is shown to be reproducible, even if the initial conditions change slightly. Although the final state is not a perfect superposition of two distinct phase states, the superposition is reached an order of magnitude faster than in the case of the collapse-and-revival phenomenon. Furthermore, a generator of entanglement is identified.

  16. Superposition states of ultracold bosons in rotating rings with a realistic potential barrier

    SciTech Connect

    Nunnenkamp, Andreas; Rey, Ana Maria; Burnett, Keith

    2011-11-15

    In a recent paper [Phys. Rev. A 82, 063623 (2010)] Hallwood et al. argued that it is feasible to create large superposition states with strongly interacting bosons in rotating rings. Here we investigate in detail how the superposition states in rotating-ring lattices depend on interaction strength and barrier height. With respect to the latter we find a trade-off between energy gap and quality of the superposition state. Most importantly, we go beyond the δ-function approximation for the barrier potential and show that the energy gap decreases exponentially with the number of particles for weak barrier potentials of finite width. These are crucial issues in the design of experiments to realize superposition states.

  17. Collapsing a perfect superposition to a chosen quantum state without measurement.

    PubMed

    Younes, Ahmed; Abdel-Aty, Mahmoud

    2014-01-01

    Given a perfect superposition of [Formula: see text] states on a quantum system of [Formula: see text] qubits, we propose a fast quantum algorithm for collapsing the perfect superposition to a chosen quantum state [Formula: see text] without applying any measurements. The basic idea is to use a phase destruction mechanism. Two operators are used: the first applies a phase shift and a temporary entanglement to mark [Formula: see text] in the superposition, and the second applies selective phase shifts on the states in the superposition according to their Hamming distance from [Formula: see text]. The generated state can be used as an excellent input state for testing quantum memories and linear-optics quantum computers. We make no assumptions about the operators and quantum gates used, but our result implies that for this purpose the number of qubits in the quantum register offers no advantage, in principle, over the obvious measurement-based feedback protocol.

  18. Evidence for transcriptase quantum processing implies entanglement and decoherence of superposition proton states.

    PubMed

    Cooper, W Grant

    2009-08-01

    Evidence requiring transcriptase quantum processing is identified and elementary quantum methods are used to qualitatively describe origins and consequences of time-dependent coherent proton states populating informational DNA base pair sites in T4 phage, designated by G-C-->G'-C', G-C-->*G-*C and A-T-->*A-*T. Coherent states at these 'point' DNA lesions are introduced as consequences of the hydrogen bond arrangement keto-amino-->enol-imine, where product protons are shared between two sets of indistinguishable electron lone-pairs, and thus participate in coupled quantum oscillations at frequencies of approximately 10^13 s^-1. This quantum mixing of proton energy states introduces stability enhancements of approximately 0.25-7 kcal/mole. Transcriptase genetic specificity is determined by hydrogen bond components contributing to the formation of complementary interstrand hydrogen bonds which, in these cases, is variable due to coupled quantum oscillations of coherent enol-imine protons. The transcriptase deciphers and executes genetic specificity instructions by implementing measurements on superposition proton states at G'-C', *G-*C and *A-*T sites in an interval Δt < 10^-13 s. After initiation of transcriptase measurement, model calculations indicate proton decoherence time, tau(D), satisfies the relation DeltatT, G'-->C, *C-->T and *G-->A. Measurements of 37 degrees C lifetimes of the keto-amino DNA hydrogen bond indicate a range of approximately 3200-68,000 yrs. Arguments are presented that quantum uncertainty limits on amino protons may drive the keto-amino-->enol-imine arrangement. Data imply that natural selection at the quantum level has generated effective schemes (a) for introducing superposition proton states--at rates appropriate for DNA evolution--in decoherence-free subspaces and (b) for creating entanglement states that augment (i

  19. The superposition solitons for 3-coupled nonlinear Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Min; Zhang, Ling-Ling

    2017-01-01

    In this paper, a Hirota bilinear method is developed for application to the 3-coupled nonlinear Schrödinger equations. Under a reasonable assumption, the exact two-superposition-one-dark (TSD) and one-bright-two-superposition (BTS) soliton solutions are constructed analytically. It is shown that they can transform into general mixed (dark-bright) soliton solutions under special conditions. Moreover, the asymptotic behavior analysis shows that the collisions of the TSD and BTS two-soliton solutions are all elastic.

  20. Strong-Driving-Assisted Preparation of Superpositions of Two-Mode Coherent States in Cavity QED

    NASA Astrophysics Data System (ADS)

    Su, Wan-Jun; Huang, Jian-Min

    2011-09-01

    A scheme is proposed for preparing superpositions of two-mode coherent states with controllable weighting factors along a straight line for a two-mode cavity field. In this scheme, two-level atoms driven by a classical field are sent through a two-mode cavity initially in the vacuum state. Detection of the atoms then projects the cavity field onto a two-mode superposition of coherent states.

  1. Resilience to decoherence of the macroscopic quantum superpositions generated by universally covariant optimal quantum cloning

    SciTech Connect

    Spagnolo, Nicolo; Sciarrino, Fabio; De Martini, Francesco

    2010-09-15

    We show that the quantum states generated by universal optimal quantum cloning of a single photon represent a universal set of quantum superpositions resilient to decoherence. We adopt the Bures distance as a tool to investigate the persistence of quantum coherence of these quantum states. According to this analysis, the process of universal cloning realizes a class of quantum superpositions that exhibits a covariance property in lossy configuration over the complete set of polarization states in the Bloch sphere.

  2. Evaluation of Class II treatment by cephalometric regional superpositions versus conventional measurements.

    PubMed

    Efstratiadis, Stella; Baumrind, Sheldon; Shofer, Frances; Jacobsson-Hunt, Ulla; Laster, Larry; Ghafari, Joseph

    2005-11-01

    The aims of this study were (1) to evaluate cephalometric changes in subjects with Class II Division 1 malocclusion who were treated with headgear (HG) or Fränkel function regulator (FR) and (2) to compare findings from regional superpositions of cephalometric structures with those from conventional cephalometric measurements. Cephalographs were taken at baseline, after 1 year, and after 2 years of 65 children enrolled in a prospective randomized clinical trial. The spatial location of the landmarks derived from regional superpositions was evaluated in a coordinate system oriented on natural head position. The superpositions included the best anatomic fit of the anterior cranial base, maxillary base, and mandibular structures. Both the HG and the FR were effective in correcting the distoclusion, and they generated enhanced differential growth between the jaws. Differences between cranial and maxillary superpositions regarding mandibular displacement (Point B, pogonion, gnathion, menton) were noted: the HG had a more horizontal vector on maxillary superposition that was also greater (.0001 < P < .05) than the horizontal displacement observed with the FR. This discrepancy appeared to be related to (1) the clockwise (backward) rotation of the palatal and mandibular planes observed with the HG; the palatal plane's rotation, which was transferred through the occlusion to the mandibular plane, was factored out on maxillary superposition; and (2) the interaction between the inclination of the maxillary incisors and the forward movement of the mandible during growth. Findings from superpositions agreed with conventional angular and linear measurements regarding the basic conclusions for the primary effects of HG and FR. However, the results suggest that inferences of mandibular displacement are more reliable from maxillary than cranial superposition when evaluating occlusal changes during treatment.

  3. De-convoluting mixed crude oil in Prudhoe Bay Field, North Slope, Alaska

    USGS Publications Warehouse

    Peters, K.E.; Scott, Ramos L.; Zumberge, J.E.; Valin, Z.C.; Bird, K.J.

    2008-01-01

    Seventy-four crude oil samples from the Barrow arch on the North Slope of Alaska were studied to assess the relative volumetric contributions from different source rocks to the giant Prudhoe Bay Field. We applied alternating least squares to concentration data (ALS-C) for 46 biomarkers in the range C19-C35 to de-convolute mixtures of oil generated from carbonate rich Triassic Shublik Formation and clay rich Jurassic Kingak Shale and Cretaceous Hue Shale-gamma ray zone (Hue-GRZ) source rocks. ALS-C results for 23 oil samples from the prolific Ivishak Formation reservoir of the Prudhoe Bay Field indicate approximately equal contributions from Shublik Formation and Hue-GRZ source rocks (37% each), less from the Kingak Shale (26%), and little or no contribution from other source rocks. These results differ from published interpretations that most oil in the Prudhoe Bay Field originated from the Shublik Formation source rock. With few exceptions, the relative contribution of oil from the Shublik Formation decreases, while that from the Hue-GRZ increases in reservoirs along the Barrow arch from Point Barrow in the northwest to Point Thomson in the southeast (~250 miles or 400 km). The Shublik contribution also decreases to a lesser degree between fault blocks within the Ivishak pool from west to east across the Prudhoe Bay Field. ALS-C provides a robust means to calculate the relative amounts of two or more oil types in a mixture. Furthermore, ALS-C does not require that pure end member oils be identified prior to analysis or that laboratory mixtures of these oils be prepared to evaluate mixing. ALS-C of biomarkers reliably de-convolutes mixtures because the concentrations of compounds in mixtures vary as linear functions of the amount of each oil type. ALS of biomarker ratios (ALS-R) cannot be used to de-convolute mixtures because compound ratios vary as nonlinear functions of the amount of each oil type.

  4. Using least median of squares for structural superposition of flexible proteins

    PubMed Central

    Liu, Yu-Shen; Fang, Yi; Ramani, Karthik

    2009-01-01

    Background The conventional superposition methods use an ordinary least squares (LS) fit for structural comparison of two different conformations of the same protein. The main problem of the LS fit is that it is sensitive to outliers, i.e. large displacements of the original structures being superimposed. Results To overcome this problem, we present a new algorithm to overlap two protein conformations by their atomic coordinates using a robust statistics technique: least median of squares (LMS). In order to effectively approximate the LMS optimization, the forward search technique is utilized. Our algorithm can automatically detect and superimpose the rigid core regions of two conformations with small or large displacements. In contrast, most existing superposition techniques strongly depend on an initial LS estimate over the entire atom sets of the proteins. They may fail on structural superposition of two conformations with large displacements. The presented LMS fit can be considered as an alternative and complementary tool for structural superposition. Conclusion The proposed algorithm is robust and does not require any prior knowledge of the flexible regions. Furthermore, we show that the LMS fit can be extended to multiple-level superposition between two conformations with several rigid domains. Our fit tool has produced successful superpositions when applied to proteins for which two conformations are known. The binary executable program for the Windows platform, tested examples, and database are available from . PMID:19159484
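
    A generic sketch of the LMS criterion for rigid superposition, using random minimal subsets and a standard SVD (Kabsch) fit and keeping the trial whose median squared residual is smallest; this is illustrative only and is not the authors' forward-search algorithm.

    ```python
    import numpy as np

    def kabsch(P, Q):
        """Least-squares rotation R and translation t such that R @ p + t maps P onto Q."""
        Pc, Qc = P - P.mean(0), Q - Q.mean(0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, Q.mean(0) - R @ P.mean(0)

    def lms_superpose(P, Q, n_trials=500, subset=4, seed=0):
        """Keep the trial fit whose median squared residual over all atoms is smallest."""
        rng = np.random.default_rng(seed)
        best = (np.inf, None, None)
        for _ in range(n_trials):
            idx = rng.choice(len(P), size=subset, replace=False)
            R, t = kabsch(P[idx], Q[idx])
            res = np.sum((P @ R.T + t - Q) ** 2, axis=1)
            med = np.median(res)
            if med < best[0]:
                best = (med, R, t)
        return best

    # Toy data: a rigid core plus a few strongly displaced ("flexible") atoms
    rng = np.random.default_rng(1)
    P = rng.normal(size=(60, 3))
    a = np.radians(25.0)
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
    Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
    Q[:10] += rng.normal(scale=3.0, size=(10, 3))     # outliers an LS fit would chase
    med, R, t = lms_superpose(P, Q)
    print(med)
    ```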

  5. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images.

  6. Convolutional virtual electric field for image segmentation using active contours.

    PubMed

    Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden

    2014-01-01

    Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computation load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as an enlarged capture range, u-shape concavity convergence, subject contour convergence and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression combined with weak-edge preservation. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images.

  7. Hardy's inequalities for the twisted convolution with Laguerre functions.

    PubMed

    Xiao, Jinsen; He, Jianxun

    2017-01-01

    In this article, two types of Hardy's inequalities for the twisted convolution with Laguerre functions are studied. The proofs are mainly based on an estimate for the Heisenberg left-invariant vectors of the special Hermite functions deduced by the Heisenberg group approach.

  8. Die and telescoping punch form convolutions in thin diaphragm

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Die and punch set forms convolutions in thin dished metal diaphragm without stretching the metal too thin at sharp curvatures. The die corresponds to the metal shape to be formed, and the punch consists of elements that progressively slide against one another under the restraint of a compressed-air cushion to mate with the die.

  9. An Interactive Graphics Program for Assistance in Learning Convolution.

    ERIC Educational Resources Information Center

    Frederick, Dean K.; Waag, Gary L.

    1980-01-01

    A program has been written for the interactive computer graphics facility at Rensselaer Polytechnic Institute that is designed to assist the user in learning the mathematical technique of convolving two functions. Because convolution can be represented graphically by a sequence of steps involving folding, shifting, multiplying, and integration, it…

  10. Stacked Convolutional Denoising Auto-Encoders for Feature Representation.

    PubMed

    Du, Bo; Xiong, Wei; Wu, Jia; Zhang, Lefei; Zhang, Liangpei; Tao, Dacheng

    2016-03-16

    Deep networks have achieved excellent performance in learning representations from visual data. However, supervised deep models such as convolutional neural networks require large quantities of labeled data, which are very expensive to obtain. To solve this problem, this paper proposes an unsupervised deep network, called the stacked convolutional denoising auto-encoders, which can map images to hierarchical representations without any label information. The network, optimized by layer-wise training, is constructed by stacking layers of denoising auto-encoders in a convolutional way. In each layer, high-dimensional feature maps are generated by convolving features of the lower layer with kernels learned by a denoising auto-encoder. The auto-encoder is trained on patches extracted from feature maps in the lower layer to learn robust feature detectors. To better train the large network, a layer-wise whitening technique is introduced into the model. Before each convolutional layer, a whitening layer is embedded to sphere the input data. Through these layers of mapping, raw images are transformed into high-level feature representations that boost the performance of the subsequent support vector machine classifier. The proposed algorithm is evaluated by extensive experiments and demonstrates classification performance superior to state-of-the-art unsupervised networks.

  11. Prediction of color changes in acetaminophen solution using the time-temperature superposition principle.

    PubMed

    Mochizuki, Koji; Takayama, Kozo

    2016-01-01

    A prediction method for color changes based on the time-temperature superposition principle (TTSP) was developed for acetaminophen solution. Color changes of acetaminophen solution are caused by the degradation of acetaminophen through reactions such as hydrolysis and oxidation. In principle, the TTSP can be applied only to thermal aging. Therefore, the impact of oxidation on the color changes of acetaminophen solution was verified. The results of our experiment suggested that the oxidation products enhanced the color changes in acetaminophen solution. Next, the color changes of acetaminophen solution samples with the same head-space volume after accelerated aging at various temperatures were investigated using the Commission Internationale de l'Eclairage (CIE) LAB color space (a*, b*, L* and ΔE*ab), after which the TTSP was applied to the kinetic analysis of the color changes. The apparent activation energies obtained from the time-temperature shift factors of a*, b*, L* and ΔE*ab were calculated as 72.4, 69.2, 72.3 and 70.9 kJ/mol, respectively, which are similar to the values for acetaminophen hydrolysis reported in the literature. The predicted values of a*, b*, L* and ΔE*ab at 40 °C were obtained by calculation using Arrhenius plots. A comparison between the experimental and predicted values for each color parameter revealed sufficiently high R² values (>0.98), suggesting the high reliability of the prediction. The kinetic analysis using the TTSP was successfully applied to predicting the color changes under a controlled oxygen amount at any temperature and for any length of time.
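
    The sketch below illustrates, under stated assumptions, how an Arrhenius time-temperature shift factor of the kind used in the abstract could be evaluated; the activation energy is taken from the abstract, while the reference temperature and aging time are hypothetical examples.

        import numpy as np

        R = 8.314          # J/(mol K)
        Ea = 72.4e3        # J/mol, apparent activation energy for a* quoted in the abstract

        def shift_factor(T_celsius, T_ref_celsius=60.0):
            """Arrhenius shift factor a_T = exp[(Ea/R)(1/T - 1/T_ref)]: aging time t at
            temperature T corresponds to reduced time t / a_T at T_ref. The 60 degC
            reference temperature is an assumed example, not a value from the paper."""
            T, T_ref = T_celsius + 273.15, T_ref_celsius + 273.15
            return np.exp((Ea / R) * (1.0 / T - 1.0 / T_ref))

        # Example: 30 days at 40 degC correspond to how many equivalent days at 60 degC?
        a_T = shift_factor(40.0)
        print(30.0 / a_T, "equivalent days at 60 degC")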

  12. Design and development of a new micro-beam treatment planning system: effectiveness of algorithms of optimization and dose calculations and potential of micro-beam treatment.

    PubMed

    Tachibana, Hidenobu; Kojima, Hiroyuki; Yusa, Noritaka; Miyajima, Satoshi; Tsuda, Akihisa; Yamashita, Takashi

    2012-07-01

    A new treatment planning system (TPS) was designed and developed for a new treatment system consisting of a micro-beam-enabled linac with robotics and a real-time tracking system. We also evaluated the effectiveness of the optimization and dose calculation algorithms implemented in the TPS for the new treatment system. In the TPS, the optimization procedure consists of the pseudo Beam's-Eye-View method for finding the optimized beam directions and the steepest-descent method for determining the beam intensities. We used a superposition/convolution-based (SC-based) algorithm and a Monte Carlo-based (MC-based) algorithm to calculate dose distributions using CT image data sets. In the SC-based algorithm, density scaling was applied to calculate inhomogeneity corrections. The MC-based algorithm was implemented with the Geant4 toolkit and a phase-based approach using network-parallel computing. The evaluation showed that the TPS can optimize the direction and intensity of individual beams. The error of the dose calculated by the SC-based algorithm was less than 1% on average, with a calculation time of 15 s per beam. The MC-based algorithm, however, needed 72 min per beam using the phase-based approach, even though parallel computing reduced the burden of multiple-beam calculations and provided an 18.4-fold speedup. The SC-based algorithm could be practically acceptable for dose calculation in terms of accuracy and computation time. Additionally, we found a dosimetric advantage of a proton-Bragg-peak-like dose distribution in micro-beam treatment.
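
    For orientation, the sketch below shows the textbook form of a superposition/convolution dose calculation in a homogeneous medium (dose as the convolution of TERMA with an energy-deposition kernel), evaluated with an FFT; it illustrates only the principle, not the SC-based algorithm implemented in this TPS, which additionally applies density scaling.

        import numpy as np
        from scipy.signal import fftconvolve

        def cs_dose_homogeneous(terma, kernel):
            """Convolution-superposition in its simplest (spatially invariant kernel,
            homogeneous medium) form: D(r) = sum_r' T(r') K(r - r'), evaluated by FFT.
            Clinical engines add density scaling, kernel tilting and hardening terms."""
            return fftconvolve(terma, kernel, mode='same')

        # Toy example: an exponentially attenuated TERMA pencil in a 32^3 grid and a
        # crude isotropic point kernel (both purely illustrative).
        terma = np.zeros((32, 32, 32))
        terma[:, 16, 16] = np.exp(-0.05 * np.arange(32))
        z, y, x = np.mgrid[-4:5, -4:5, -4:5].astype(float)
        kernel = np.exp(-np.sqrt(x**2 + y**2 + z**2))
        kernel /= kernel.sum()
        dose = cs_dose_homogeneous(terma, kernel)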

  13. SU-E-T-374: Evaluation and Verification of Dose Calculation Accuracy with Different Dose Grid Sizes for Intracranial Stereotactic Radiosurgery

    SciTech Connect

    Han, C; Schultheiss, T

    2015-06-15

    Purpose: In this study, we aim to evaluate the effect of dose grid size on the accuracy of calculated dose for small lesions in intracranial stereotactic radiosurgery (SRS), and to verify dose calculation accuracy with radiochromic film dosimetry. Methods: 15 intracranial lesions from previous SRS patients were retrospectively selected for this study. The planning target volume (PTV) ranged from 0.17 to 2.3 cm³. A commercial treatment planning system was used to generate SRS plans using the volumetric modulated arc therapy (VMAT) technique with two arc fields. Two convolution-superposition-based dose calculation algorithms (Anisotropic Analytical Algorithm and Acuros XB algorithm) were used to calculate volume dose distribution with dose grid size ranging from 1 mm to 3 mm in 0.5 mm steps. First, while the plan monitor units (MU) were kept constant, PTV dose variations were analyzed. Second, with 95% of the PTV covered by the prescription dose, variations of the plan MUs as a function of dose grid size were analyzed. Radiochromic films were used to compare the delivered dose and profile with the calculated dose distribution at different dose grid sizes. Results: The dose to the PTV, in terms of the mean, maximum, and minimum dose, showed a steady decrease with increasing dose grid size for both algorithms. With 95% of the PTV covered by the prescription dose, the total MU increased with increasing dose grid size in most of the plans. Radiochromic film measurements showed better agreement with dose distributions calculated with a 1-mm dose grid size. Conclusion: Dose grid size has a significant impact on the calculated dose distribution in intracranial SRS treatment planning with small target volumes. Using the default dose grid size could lead to underestimation of the delivered dose. A small dose grid size should be used to ensure calculation accuracy and agreement with QA measurements.

  14. Adaptive Estimation of Active Contour Parameters Using Convolutional Neural Networks and Texture Analysis.

    PubMed

    Hoogi, Assaf; Subramaniam, Arjun; Veerapaneni, Rishi; Rubin, Daniel

    2016-11-11

    In this paper, we propose a generalization of the level set segmentation approach by supplying a novel method for adaptive estimation of active contour parameters. The presented segmentation method is fully automatic once the lesion has been detected. First, the location of the level set contour relative to the lesion is estimated using a convolutional neural network (CNN). The CNN has two convolutional layers for feature extraction, which lead into dense layers for classification. Second, the output CNN probabilities are then used to adaptively calculate the parameters of the active contour functional during the segmentation process. Finally, the adaptive window size surrounding each contour point is re-estimated by an iterative process that considers lesion size and spatial texture. We demonstrate the capabilities of our method on a dataset of 164 MRI and 112 CT images of liver lesions that includes low contrast and heterogeneous lesions as well as noisy images. To illustrate the strength of our method, we evaluated it against state-of-the-art CNN-based and active contour techniques. For all cases, our method, as assessed by Dice similarity coefficients, performed significantly better than currently available methods. An average Dice improvement of 0.27 was found across the entire dataset over all comparisons. We also analyzed two challenging subsets of lesions and obtained a significant Dice improvement of 0.24 with our method (p < 0.001, Wilcoxon).

  16. Attosecond probing of state-resolved ionization and superpositions of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Leone, Stephen

    2016-05-01

    Isolated attosecond pulses in the extreme ultraviolet are used to probe strong field ionization and to initiate electronic and vibrational superpositions in atoms and small molecules. Few-cycle 800 nm pulses produce strong-field ionization of Xe atoms, and the attosecond probe is used to measure the risetimes of the two spin orbit states of the ion on the 4d inner shell transitions to the 5p vacancies in the valence shell. Step-like features in the risetimes due to the subcycles of the 800 nm pulse are observed and compared with theory to elucidate the instantaneous and effective hole dynamics. Isolated attosecond pulses create massive superpositions of electronic states in Ar and nitrogen as well as vibrational superpositions among electronic states in nitrogen. An 800 nm pulse manipulates the superpositions, and specific subcycle interferences, level shifting, and quantum beats are imprinted onto the attosecond pulse as a function of time delay. Detailed outcomes are compared to theory for measurements of time-dynamic superpositions by attosecond transient absorption. Supported by DOE, NSF, ARO, AFOSR, and DARPA.

  17. Nonadiabatic creation of macroscopic superpositions with strongly correlated one-dimensional bosons in a ring trap

    SciTech Connect

    Schenke, C.; Minguzzi, A.; Hekking, F. W. J.

    2011-11-15

    We consider a strongly interacting quasi-one-dimensional Bose gas on a tight ring trap subjected to a localized barrier potential. We explore the possibility of forming a macroscopic superposition of a rotating and a nonrotating state under nonequilibrium conditions, achieved by a sudden quench of the barrier velocity. Using an exact solution for the dynamical evolution in the impenetrable-boson (Tonks-Girardeau) limit, we find an expression for the many-body wave function corresponding to a superposition state. The superposition is formed when the barrier velocity is tuned close to an integer or half-integer multiple of the Coriolis flux quantum. As a consequence of the strong interactions, we find that (i) the state of the system can be mapped onto a macroscopic superposition of two Fermi spheres rather than two macroscopically occupied single-particle states as in a weakly interacting gas, and (ii) the barrier velocity should be larger than the sound velocity to better discriminate the two components of the superposition.

  18. Calculation of potential flow past airship bodies in yaw

    NASA Technical Reports Server (NTRS)

    Lotz, I

    1932-01-01

    An outline of Von Karman's method of computing the potential flow of airships in yaw, by means of a piecewise-constant dipole distribution superposed on the axis of the body, is followed by several considerations on the beginning and end of the superposition. This method is then improved by postulating a continuous, in part linearly varying dipole distribution on the axis. The second main part of the report presents the calculation of the potential flow by means of sources and sinks arranged on the surface of the airship body. The integral equation which this surface distribution must satisfy is posed, and its kernel is reduced to functions derived from complete elliptic integrals. The functions are shown diagrammatically. The integral equation can be solved by iteration, and the convergence of the method is good. Formulas for computing the velocity on the surface and the potential at any point conclude the report.

  19. Method and apparatus for decoding compatible convolutional codes

    NASA Technical Reports Server (NTRS)

    Doland, G. D. (Inventor)

    1974-01-01

    This invention relates to learning decoders for decoding compatible convolutional codes. The decoder decodes signals which have been encoded by a convolutional coder and allows performance near the theoretical limit for coded data systems. The decoder includes a sub-bit shift register wherein the received sub-bits are entered after regeneration and shifted in synchronization with a clock signal recovered from the received sub-bit stream. The received sub-bits are processed by a sub-bit decision circuit, entered into a sub-bit shift register, decoded by a decision circuit, entered into a data shift register, and updated to reduce data errors. The bit decision circuit utilizes stored sub-bits and stored data bits to determine subsequent data bits. Data errors are reduced by using at least one update circuit.

  20. Miniaturized Band Stop FSS Using Convoluted Swastika Structure

    NASA Astrophysics Data System (ADS)

    Bilvam, Sridhar; Sivasamy, Ramprabhu; Kanagasabai, Malathi; Alsath M, Gulam Nabi; Baisakhiya, Sanjay

    2017-01-01

    This paper presents a miniaturized frequency selective surface (FSS) with stop-band characteristics at the resonant frequency of 5.12 GHz. The unit cell size of the proposed FSS design is on the order of 0.095 λ × 0.095 λ. The proposed unit cell is obtained by convoluting the arms of a basic swastika structure. The design provides a fractional bandwidth of 9.0% at the center frequency of 5.12 GHz, referenced to the 20 dB insertion-loss level. The symmetry of the design delivers an identical response for both transverse electric (TE) and transverse magnetic (TM) modes, thereby exhibiting polarization-independent operation. The miniaturized design also provides good angular independence for various incident angles. A dispersion analysis is performed to substantiate the band-stop operation of the convoluted swastika FSS. The proposed FSS is fabricated, and its operation is validated through measurements.

  1. Two-dimensional convolute integers for analytical instrumentation

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1982-01-01

    As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced capability for unknown-mixture analysis as a result of their greater spectral information content compared with two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical with their one-dimensional counterpart, that is, as a weighted nearest-neighbor moving average with zero phase shift, using convolute integer (universal number) weighting coefficients.
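
    A hedged illustration of applying such smoothing weights as a single two-dimensional convolution is sketched below; for simplicity the weight matrix is built as an outer product of one-dimensional Savitzky-Golay coefficients, which is a separable stand-in for the genuinely two-dimensional convolute integers derived in the paper.

        import numpy as np
        from scipy.signal import savgol_coeffs, convolve2d

        def sg2d_smooth(image, window=5, polyorder=2):
            """Two-dimensional moving-window least-squares smoothing applied as one 2D
            convolution with zero phase shift. The separable outer-product construction
            is an illustrative approximation of the paper's 2D convolute integers."""
            c = savgol_coeffs(window, polyorder)      # 1D low-pass convolute weights
            W = np.outer(c, c)                        # 2D weight matrix
            return convolve2d(image, W, mode='same', boundary='symm')

        noisy = np.random.rand(64, 64)                # hypothetical 2D measurement
        smoothed = sg2d_smooth(noisy)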

  2. A new computational decoding complexity measure of convolutional codes

    NASA Astrophysics Data System (ADS)

    Benchimol, Isaac B.; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.

    2014-12-01

    This paper presents a computational complexity measure of convolutional codes well suited to software implementations of the Viterbi algorithm (VA) operating with hard decisions. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of each arithmetic operation is determined in terms of the machine cycles taken by its execution on a typical digital signal processor widely used for low-power telecommunications applications.
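
    As a rough illustration of the kind of operation counting involved, the sketch below tallies the textbook per-trellis-section costs of hard-decision Viterbi decoding on the conventional trellis (branch-metric updates, path-metric additions, and survivor comparisons); it is not the refined complexity measure defined in the paper.

        def viterbi_ops_per_section(n, k, m):
            """Rough operation counts per trellis section of hard-decision Viterbi
            decoding of an (n, k) convolutional code with total memory m, on the
            conventional trellis. These are textbook counts, not the paper's measure."""
            states = 2 ** m
            branches = states * (2 ** k)          # outgoing branches per section
            branch_metric_ops = branches * n      # Hamming-distance updates, one per code bit
            additions = branches                  # path metric + branch metric
            comparisons = states * (2 ** k - 1)   # add-compare-select per state
            return {"branches": branches, "branch_metric_ops": branch_metric_ops,
                    "additions": additions, "comparisons": comparisons}

        # Example: a rate-1/2, memory-6 code (64 trellis states)
        print(viterbi_ops_per_section(n=2, k=1, m=6))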

  3. UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.

  4. The analysis of VERITAS muon images using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Feng, Qi; Lin, Tony T. Y.; VERITAS Collaboration

    2017-06-01

    Imaging atmospheric Cherenkov telescopes (IACTs) are sensitive to rare gamma-ray photons, buried in the background of charged cosmic-ray (CR) particles, the flux of which is several orders of magnitude greater. The ability to separate gamma rays from CR particles is important, as it is directly related to the sensitivity of the instrument. This gamma-ray/CR-particle classification problem in IACT data analysis can be treated with rapidly advancing machine learning algorithms, which have the potential to outperform the traditional box-cut methods on image parameters. We present preliminary results of a precise classification of a small set of muon events using a convolutional neural network model with the raw images as input features. We also show the possibility of using the convolutional neural network model for regression problems, such as the radius and brightness measurement of muon events, which can be used to calibrate the throughput efficiency of IACTs.

  5. Self-Taught convolutional neural networks for short text clustering.

    PubMed

    Xu, Jiaming; Xu, Bo; Wang, Peng; Zheng, Suncong; Tian, Guanhua; Zhao, Jun; Xu, Bo

    2017-04-01

    Short text clustering is a challenging problem due to the sparseness of its text representation. Here we propose a flexible Self-Taught Convolutional neural network framework for Short Text Clustering (dubbed STC²), which can flexibly and successfully incorporate more useful semantic features and learn non-biased deep text representations in an unsupervised manner. In our framework, the original raw text features are first embedded into compact binary codes using an existing unsupervised dimensionality reduction method. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, while the output units are used to fit the pre-trained binary codes during training. Finally, we obtain the clusters by employing K-means on the learned representations. Extensive experimental results demonstrate that the proposed framework is effective and flexible and outperforms several popular clustering methods when tested on three public short text datasets.

  6. Deep learning for steganalysis via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis that learns features automatically via deep learning models. We propose a customized Convolutional Neural Network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial-domain steganographic algorithms: HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and on the realistic and large ImageNet database.

  7. Spectral density of generalized Wishart matrices and free multiplicative convolution

    NASA Astrophysics Data System (ADS)

    Młotkowski, Wojciech; Nowak, Maciej A.; Penson, Karol A.; Życzkowski, Karol

    2015-07-01

    We investigate the level density for several ensembles of positive random matrices of a Wishart-like structure, W = XX†, where X stands for a non-Hermitian random matrix. In particular, making use of the Cauchy transform, we study the free multiplicative powers of the Marchenko-Pastur (MP) distribution, MP^{⊠s}, which for an integer s yield Fuss-Catalan distributions corresponding to a product of s independent square random matrices, X = X_1 ⋯ X_s. New formulas for the level densities are derived for s = 3 and s = 1/3. Moreover, the level density corresponding to the generalized Bures distribution, given by the free convolution of the arcsine and MP distributions, is obtained. We also explain the reason for such a curious convolution. The technique proposed here allows for the derivation of the level densities for several other cases.
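
    A simple numerical illustration of the free multiplicative power MP^{⊠s} is sketched below: sampling eigenvalues of W = XX† with X a product of s independent Ginibre matrices and histogramming them approximates the Fuss-Catalan level density for large N. The normalization and sample sizes are illustrative choices, not those of the paper.

        import numpy as np

        def sample_fuss_catalan(N=300, s=3, trials=20, rng=None):
            """Sample eigenvalues of W = X X^dagger with X = X_1 ... X_s a product of s
            independent N x N complex Ginibre matrices (entries with variance 1/N); the
            empirical level density approaches MP^{boxtimes s} as N grows."""
            rng = np.random.default_rng(0) if rng is None else rng
            eigs = []
            for _ in range(trials):
                X = np.eye(N, dtype=complex)
                for _ in range(s):
                    G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
                    X = X @ G
                eigs.append(np.linalg.eigvalsh(X @ X.conj().T))
            return np.concatenate(eigs)

        lam = sample_fuss_catalan()
        hist, edges = np.histogram(lam, bins=100, density=True)   # empirical level density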

  8. Rationale-Augmented Convolutional Neural Networks for Text Classification

    PubMed Central

    Zhang, Ye; Marshall, Iain; Wallace, Byron C.

    2016-01-01

    We present a new Convolutional Neural Network (CNN) model for text classification that jointly exploits labels on documents and their constituent sentences. Specifically, we consider scenarios in which annotators explicitly mark sentences (or snippets) that support their overall document categorization, i.e., they provide rationales. Our model exploits such supervision via a hierarchical approach in which each document is represented by a linear combination of the vector representations of its component sentences. We propose a sentence-level convolutional model that estimates the probability that a given sentence is a rationale, and we then scale the contribution of each sentence to the aggregate document representation in proportion to these estimates. Experiments on five classification datasets that have document labels and associated rationales demonstrate that our approach consistently outperforms strong baselines. Moreover, our model naturally provides explanations for its predictions. PMID:28191551

  9. Statistical Downscaling using Super Resolution Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Vandal, T.; Ganguly, S.; Ganguly, A. R.; Kodra, E.

    2016-12-01

    We present a novel approach to statistical downscaling using image super-resolution and convolutional neural networks. Image super-resolution (SR), a widely researched topic in the machine learning community, aims to increase the resolution of low resolution images, similar to the goal of downscaling Global Circulation Models (GCMs). With SR we are able to capture and generalize spatial patterns in the climate by representing each climate state as an "image". In particular, we show the applicability of Super Resolution Convolutional Neural Networks (SRCNN) to downscaling daily precipitation in the United States. SRCNN is a state-of-the-art single image SR method and has the advantage of utilizing multiple input variables, known as channels. We apply SRCNN to downscaling precipitation by using low resolution precipitation and high resolution elevation as inputs and compare to bias correction spatial disaggregation (BCSD).

  10. Fully convolutional neural networks for polyp segmentation in colonoscopy

    NASA Astrophysics Data System (ADS)

    Brandao, Patrick; Mazomenos, Evangelos; Ciuti, Gastone; Caliò, Renato; Bianchi, Federico; Menciassi, Arianna; Dario, Paolo; Koulaouzidis, Anastasios; Arezzo, Alberto; Stoyanov, Danail

    2017-03-01

    Colorectal cancer (CRC) is one of the most common and deadliest forms of cancer, accounting for nearly 10% of all forms of cancer in the world. Even though colonoscopy is considered the most effective method for screening and diagnosis, the success of the procedure is highly dependent on the operator's skills and level of hand-eye coordination. In this work, we propose to adapt fully convolutional neural networks (FCNs) to identify and segment polyps in colonoscopy images. We converted three established networks into a fully convolutional architecture and fine-tuned their learned representations to the polyp segmentation task. We validate our framework on the 2015 MICCAI polyp detection challenge dataset, surpassing the state-of-the-art in automated polyp detection. Our method obtained high segmentation accuracy and a detection precision and recall of 73.61% and 86.31%, respectively.

  11. Spectral density of generalized Wishart matrices and free multiplicative convolution.

    PubMed

    Młotkowski, Wojciech; Nowak, Maciej A; Penson, Karol A; Życzkowski, Karol

    2015-07-01

    We investigate the level density for several ensembles of positive random matrices of a Wishart-like structure, W = XX†, where X stands for a non-Hermitian random matrix. In particular, making use of the Cauchy transform, we study the free multiplicative powers of the Marchenko-Pastur (MP) distribution, MP^{⊠s}, which for an integer s yield Fuss-Catalan distributions corresponding to a product of s independent square random matrices, X = X_1 ⋯ X_s. New formulas for the level densities are derived for s = 3 and s = 1/3. Moreover, the level density corresponding to the generalized Bures distribution, given by the free convolution of the arcsine and MP distributions, is obtained. We also explain the reason for such a curious convolution. The technique proposed here allows for the derivation of the level densities for several other cases.

  12. UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.

  13. Image interpolation by two-dimensional parametric cubic convolution.

    PubMed

    Shi, Jiazheng; Reichenbach, Stephen E

    2006-07-01

    Cubic convolution is a popular method for image interpolation. Traditionally, the piecewise-cubic kernel has been derived in one dimension with one parameter and applied to two-dimensional (2-D) images in a separable fashion. However, images typically are statistically nonseparable, which motivates this investigation of nonseparable cubic convolution. This paper derives two new nonseparable, 2-D cubic-convolution kernels. The first kernel, with three parameters (designated 2D-3PCC), is the most general 2-D, piecewise-cubic interpolator defined on [-2, 2] x [-2, 2] with constraints for biaxial symmetry, diagonal (or 90 degrees rotational) symmetry, continuity, and smoothness. The second kernel, with five parameters (designated 2D-5PCC), relaxes the constraint of diagonal symmetry, based on the observation that many images have rotationally asymmetric statistical properties. This paper also develops a closed-form solution for determining the optimal parameter values for parametric cubic-convolution kernels with respect to ensembles of scenes characterized by autocorrelation (or power spectrum). This solution establishes a practical foundation for adaptive interpolation based on local autocorrelation estimates. Quantitative fidelity analyses and visual experiments indicate that these new methods can outperform several popular interpolation methods. An analysis of the error budgets for reconstruction error associated with blurring and aliasing illustrates that the methods improve interpolation fidelity for images with aliased components. For images with little or no aliasing, the methods yield results similar to other popular methods. Both 2D-3PCC and 2D-5PCC are low-order polynomials with small spatial support and so are easy to implement and efficient to apply.
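
    For context, the sketch below implements the traditional one-dimensional, one-parameter piecewise-cubic convolution kernel that the paper's nonseparable 2D-3PCC and 2D-5PCC kernels generalize, together with a small interpolation routine; the parameter value a = -0.5 is the common default, not necessarily the paper's optimum.

        import numpy as np

        def cubic_kernel(s, a=-0.5):
            """Classic one-parameter piecewise-cubic convolution kernel on [-2, 2]."""
            s = np.abs(np.asarray(s, dtype=float))
            out = np.zeros_like(s)
            near = s < 1
            far = (s >= 1) & (s < 2)
            out[near] = (a + 2) * s[near]**3 - (a + 3) * s[near]**2 + 1
            out[far] = a * (s[far]**3 - 5 * s[far]**2 + 8 * s[far] - 4)
            return out

        def interp1d_cubic(samples, x, a=-0.5):
            """Interpolate uniformly spaced samples at fractional positions x by
            convolving the four nearest samples with the cubic kernel."""
            x = np.atleast_1d(np.asarray(x, dtype=float))
            idx = np.floor(x).astype(int)
            result = np.zeros_like(x)
            for k in range(-1, 3):                              # four nearest samples
                j = np.clip(idx + k, 0, len(samples) - 1)       # clamp at the borders
                result += samples[j] * cubic_kernel(x - (idx + k), a)
            return result

        samples = np.sin(np.linspace(0, np.pi, 8))
        print(interp1d_cubic(samples, [2.5, 3.25]))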

  14. New syndrome decoder for (n, 1) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for (n, 1) convolutional codes (CC) that is different from, and simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum-weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.

  15. Convolution using guided acoustooptical interaction in thin-film waveguides

    NASA Technical Reports Server (NTRS)

    Chang, W. S. C.; Becker, R. A.; Tsai, C. S.; Yao, I. W.

    1977-01-01

    Interaction of two antiparallel acoustic surface waves (ASW) with an optical guided wave has been investigated theoretically as well as experimentally to obtain the convolution of two ASW signals. The maximum time-bandwidth product that can be achieved by such a convolver is shown to be of the order of 1000 or more. The maximum dynamic range can be as large as 83 dB.

  16. Image data compression using cubic convolution spline interpolation.

    PubMed

    Truong, T K; Wang, L J; Reed, I S; Hsieh, W S

    2000-01-01

    A new cubic convolution spline interpolation (CCSI) for both one-dimensional (1-D) and two-dimensional (2-D) signals is developed in order to subsample signal and image compression data. The CCSI yields a very accurate algorithm for smoothing. It is also shown that this new and fast smoothing filter for CCSI can be used with the JPEG standard to design an improved JPEG encoder-decoder for a high compression ratio.

  17. Fast Electron Correlation Methods for Molecular Clusters without Basis Set Superposition Errors

    SciTech Connect

    Kamiya, Muneaki; Hirata, So; Valiev, Marat

    2008-02-19

    Two critical extensions to our fast, accurate, and easy-to-implement binary or ternary interaction method for weakly interacting molecular clusters [Hirata et al., Mol. Phys. 103, 2255 (2005)] have been proposed, implemented, and applied to water hexamers, hydrogen fluoride chains and rings, and neutral and zwitterionic glycine-water clusters, with excellent results in an initial performance assessment. Our original method included up to two- or three-body Coulomb, exchange, and correlation energies exactly and higher-order Coulomb energies in the dipole-dipole approximation. In this work, the dipole moments are replaced by atom-centered point charges determined so that they reproduce the electrostatic potentials of the cluster subunits as closely as possible and also self-consistently with one another in the cluster environment. They have been shown to lead to dramatic improvement in the description of short-range electrostatic potentials not only of large, charge-separated subunits like zwitterionic glycine but also of small subunits. Furthermore, basis set superposition errors (BSSE), known to plague direct evaluation of weak interactions, have been eliminated by combining the Valiron-Mayer function counterpoise (VMFC) correction with our binary or ternary interaction method in an economical fashion (quadratic scaling, n², with respect to the number of subunits n when n is small, and linear scaling when n is large). A new variant of VMFC has also been proposed in which three-body and all higher-order Coulomb effects on BSSE are estimated approximately. The BSSE-corrected ternary interaction method with atom-centered point charges reproduces the VMFC-corrected results of conventional electron correlation calculations within 0.1 kcal/mol. The proposed method is significantly more accurate and also more efficient than conventional correlation methods not corrected for BSSE.

  18. Generalization of susceptibility of RF systems through far-field pattern superposition

    NASA Astrophysics Data System (ADS)

    Verdin, B.; Debroux, P.

    2015-05-01

    The purpose of this paper is to perform an analysis of RF (radio frequency) communication systems in a large electromagnetic environment to identify their susceptibility to jamming systems. We propose a new method that incorporates the use of reciprocity and superposition of the far-field radiation pattern of the RF system and the far-field radiation pattern of the jammer system. By using this method we can find the susceptibility pattern of RF systems with respect to the elevation and azimuth angles. A scenario was modeled with HFSS (High Frequency Structural Simulator) in which the radiation pattern of the jammer was simulated as a cylindrical horn antenna. The RF jamming entry point used was a half-wave dipole inside a cavity with apertures that approximates a land-mobile vehicle; the dipole approximates a leaky coaxial cable. Because of the limitations of the simulation method, electrically large electromagnetic environments cannot be quickly simulated using HFSS's finite element method (FEM). Therefore, the combination of the transmit antenna radiation pattern (horn) superimposed onto the receive antenna pattern (dipole) was performed in MATLAB. A 2D or 3D susceptibility pattern is obtained with respect to the azimuth and elevation angles. In addition, by incorporating the jamming equation into this algorithm, the received jamming power at the RF receiver, Pr(Φr, θr), can be calculated as a function of distance. The received power depends on antenna properties, the propagation factor, and system losses. Test cases include a cavity with four apertures, a cavity above an infinite ground plane, and a land-mobile vehicle approximation. By using the proposed algorithm, a susceptibility analysis of RF systems in electromagnetic environments can be performed.
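
    A Python sketch of the pattern-superposition and link-budget step is given below (the abstract's combination was done in MATLAB): received jamming power versus azimuth is obtained by adding the transmit and receive pattern gains to the free-space jamming equation. The pattern shapes, frequency, distance, and power levels are placeholders, not values from the paper.

        import numpy as np

        def received_jamming_power_dbm(pt_dbm, gt_dbi, gr_dbi, freq_hz, dist_m, losses_db=0.0):
            """Free-space link ('jamming') equation in dB form:
            Pr = Pt + Gt(az, el) + Gr(az, el) - FSPL - losses, FSPL = 20 log10(4 pi d / lambda)."""
            lam = 3e8 / freq_hz
            fspl_db = 20.0 * np.log10(4.0 * np.pi * dist_m / lam)
            return pt_dbm + gt_dbi + gr_dbi - fspl_db - losses_db

        # Placeholder far-field patterns sampled over azimuth (dBi); real patterns would
        # come from simulations such as those described in the abstract.
        az = np.linspace(-180, 180, 361)
        gt = 15.0 - 0.003 * az**2                 # notional horn-like transmit pattern
        gr = 2.0 + 3.0 * np.cos(np.radians(az))   # notional dipole-in-cavity receive pattern
        pr = received_jamming_power_dbm(30.0, gt, gr, freq_hz=2.4e9, dist_m=500.0)
        worst_angle = az[np.argmax(pr)]           # most susceptible azimuth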

  19. SU-E-T-31: A Fast Finite Size Pencil Beam (FSPB) Convolution Algorithm for a New Co-60 Arc Therapy Machine

    SciTech Connect

    Chibani, O; Eldib, A; Ma, C

    2015-06-15

    Purpose: To present a fast Finite Size Pencil Beam (FSPB) convolution algorithm for a new Co-60 arc therapy machine. The FSPB algorithm accounts for (i) strong angular divergence (short SAD), (ii) the heterogeneity effect for primary attenuation, and (iii) the source energy spectrum. Methods: The FSPB algorithm is based on a 0.5×0.5-cm² dose kernel calculated using the GEPTS (Gamma Electron and Positron Transport System) Monte Carlo code. The dose kernel is tabulated using a fine XYZ mesh (0.1 mm steps in lateral directions) for radii less than 1 cm and an RZ mesh (with varying steps) for larger radial distances. To account for the SSD effect, 11 dose kernels with SSDs varying between 30 cm and 80 cm are calculated. The Mayneord factor and "lateral stretching" are applied to account for differences between the closest tabulated and actual SSD. Appropriate rotations and second-order interpolation are used to calculate the dose from a given beamlet to a point. Results: Accuracy: Dose distributions in water with 80 cm SSD are calculated using the new FSPB convolution algorithm and full Monte Carlo simulation (gold standard). Figs. 1-4 show excellent agreement between FSPB and Monte Carlo calculations for different field sizes and at different depths. The dose distribution for a prostate case is calculated using FSPB (Fig. 5). Sixty conformal beams with rectum blocking are assumed. Figs. 6-8 show the comparison with Monte Carlo simulation based on the same beam apertures. The excellent agreement demonstrates the accuracy of the new algorithm in handling SSD variation, oblique incidence, and scatter contribution. Speed: The FSPB convolution algorithm calculates 28 million dose points per second using a single 2.2-GHz CPU. The present algorithm is seven times faster than a similar algorithm from Gu et al. (Phys. Med. Biol. 54, 2009, 6287-6297). Conclusion: A fast and accurate FSPB convolution algorithm was developed and benchmarked.
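
    As an aside on the SSD correction mentioned above, the snippet below shows the textbook Mayneord F factor in Python; it only illustrates the kind of inverse-square scaling involved and is not necessarily the exact correction used in this FSPB implementation.

        def mayneord_factor(ssd1, ssd2, depth, d_max):
            """Textbook Mayneord F factor converting a percentage depth dose measured at
            SSD ssd1 to SSD ssd2 (lengths in cm), at the given depth with build-up depth
            d_max. Shown only to illustrate the style of SSD correction referred to."""
            return ((ssd2 + d_max) / (ssd1 + d_max)) ** 2 * ((ssd1 + depth) / (ssd2 + depth)) ** 2

        # Example: correct a value tabulated at SSD = 80 cm to an actual SSD of 65 cm,
        # at 10 cm depth, with d_max = 0.5 cm (typical for Co-60)
        f = mayneord_factor(ssd1=80.0, ssd2=65.0, depth=10.0, d_max=0.5)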

  20. Automatic localization of vertebrae based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Yang, Feng; Mu, Wei; Yang, Caiyun; Yang, Xin; Tian, Jie

    2015-03-01

    Localization of the vertebrae is of importance in many medical applications. For example, the vertebrae can serve as landmarks in image registration. They can also provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebrae localization method using convolutional neural networks (CNNs). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture. One is used to distinguish the vertebrae from other tissues in the chest, and the other is aimed at detecting the centers of the vertebrae. The architecture contains two convolutional layers, both of which are followed by a max-pooling layer. The output feature vector from the max-pooling layer is then fed into a multilayer perceptron (MLP) classifier which has one hidden layer. Experiments were performed on ten chest CT images. We used a leave-one-out strategy to train and test the proposed method. A quantitative comparison between the predicted centers and the ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.

  1. A model of traffic signs recognition with convolutional neural network

    NASA Astrophysics Data System (ADS)

    Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing

    2016-10-01

    In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. All of these factors are challenging for automated traffic-sign recognition algorithms. Deep learning has recently provided a new way to solve this kind of problem. A deep network can automatically learn features from a large number of data samples and obtain excellent recognition performance. We therefore approach the task of traffic sign recognition as a general vision problem, with few assumptions specific to road signs. We propose a Convolutional Neural Network (CNN) model and apply it to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes the collected traffic sign images as input, alternates convolutional and subsampling layers, and automatically extracts the features for recognition of the traffic sign images. The proposed model includes an input layer, three convolutional layers, three subsampling layers, a fully connected layer, and an output layer. To validate the proposed model, experiments were carried out on the public dataset of the China competition on fuzzy image processing. Experimental results show that the proposed model achieves a recognition accuracy of 99.01% on the training dataset and 92% in the preliminary contest, placing it among the top four entries.
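
    A PyTorch sketch of a network with the layout described above (three convolutional layers, each followed by subsampling, then a fully connected layer and an output layer) is shown below; the filter counts, kernel sizes, class count, and 32 x 32 RGB input are assumptions rather than the paper's configuration.

        import torch
        import torch.nn as nn

        class TrafficSignCNN(nn.Module):
            """Sketch of the described layout: input -> 3 x (convolution + subsampling)
            -> fully connected layer -> output layer. All sizes are assumptions."""
            def __init__(self, n_classes=58):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
                    nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8 -> 4
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(), nn.Linear(64 * 4 * 4, 128), nn.ReLU(), nn.Linear(128, n_classes),
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        logits = TrafficSignCNN()(torch.randn(1, 3, 32, 32))   # one dummy 32 x 32 RGB image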

  2. Fine-grained representation learning in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Luo, Chang; Wang, Jie

    2016-03-01

    Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariance, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in the preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we propose the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law can guide CAEs to extract better fine-grained features and perform better in multiclass classification tasks. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representations in other convolutional neural networks.

  3. Fast convolution quadrature for the wave equation in three dimensions

    NASA Astrophysics Data System (ADS)

    Banjai, L.; Kachanovska, M.

    2014-12-01

    This work addresses the numerical solution of time-domain boundary integral equations arising from acoustic and electromagnetic scattering in three dimensions. The semidiscretization of the time-domain boundary integral equations by Runge-Kutta convolution quadrature leads to a lower triangular Toeplitz system of size N. This system can be solved recursively in almost linear time, O(N log² N), but requires the construction of O(N) dense spatial discretizations of the single-layer boundary operator for the Helmholtz equation. This work introduces an improvement of this algorithm that makes it possible to solve the scattering problem in almost linear time. The new approach is based on two main ingredients: near-field reuse and the application of data-sparse techniques. Exponential decay of the Runge-Kutta convolution weights w_n^h(d) outside a neighborhood of d ≈ nh (where h is the time step) makes it possible to avoid constructing the near-field (i.e., singular and near-singular integrals) for most of the discretizations of the single-layer boundary operators (near-field reuse). The far-field of these matrices is compressed with the help of data-sparse techniques, namely H-matrices and the high-frequency fast multipole method. Numerical experiments indicate the efficiency of the proposed approach compared to the conventional Runge-Kutta convolution quadrature algorithm.

  4. On the growth and form of cortical convolutions

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.

    2016-06-01

    The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. All together, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.

  5. Deep Convolutional Neural Network for Inverse Problems in Imaging

    NASA Astrophysics Data System (ADS)

    Jin, Kyong Hwan; McCann, Michael T.; Froustey, Emmanuel; Unser, Michael

    2017-09-01

    In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of our work is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise nonlinearity) when the normal operator (H*H, the adjoint of H times H) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill-posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel-beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 × 512 image on GPU.

  6. Calcium transport in the rabbit superficial proximal convoluted tubule

    SciTech Connect

    Ng, R.C.; Rouse, D.; Suki, W.N.

    1984-09-01

    Calcium transport was studied in isolated S2 segments of rabbit superficial proximal convoluted tubules. ⁴⁵Ca was added to the perfusate for measurement of lumen-to-bath flux (J_lb^Ca), to the bath for bath-to-lumen flux (J_bl^Ca), and to both perfusate and bath for net flux (J_net^Ca). In these studies, the perfusate consisted of an equilibrium solution that was designed to minimize water flux or electrochemical potential differences (PD). Under these conditions, J_lb^Ca (9.1 ± 1.0 peq/(mm·min)) was not different from J_bl^Ca (7.3 ± 1.3 peq/(mm·min)), and J_net^Ca was not different from zero, which suggests that calcium transport in the superficial proximal convoluted tubule is due primarily to passive transport. The efflux coefficient was 9.5 ± 1.2 × 10⁻⁵ cm/s, which was not significantly different from the influx coefficient, 7.0 ± 1.3 × 10⁻⁵ cm/s. When the PD was made positive or negative with use of different perfusates, net calcium absorption or secretion was demonstrated, respectively, which supports a major role for passive transport. These results indicate that in the superficial proximal convoluted tubule of the rabbit, passive driving forces are the major determinants of calcium transport.

  7. Robust hepatic vessel segmentation using multi deep convolution network

    NASA Astrophysics Data System (ADS)

    Kitrungrotsakul, Titinunt; Han, Xian-Hua; Iwamoto, Yutaro; Foruzan, Amir Hossein; Lin, Lanfen; Chen, Yen-Wei

    2017-03-01

    Extraction of the blood vessels of an organ is a challenging task in medical image processing. It is difficult to obtain accurate vessel segmentation results even with manual labeling by human experts. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment the hepatic vessels from computed tomography (CT) images. We propose a novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume, which consists of three deep convolutional neural networks that extract features from different planes of the CT data. The three networks share features at the first convolutional layer but learn their own features separately in the second layer. All three networks are joined again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conduct experiments on 12 CT volumes, of which 5 are randomly selected to generate training data and the remaining 7 are used for testing. Our network yields an average Dice coefficient of 0.830, whereas a 3D deep convolutional neural network yields around 0.7 and a multi-scale approach yields only 0.6.

  8. Construction of the Superposition of Displaced Fock States and Entangled Displaced Fock States

    NASA Astrophysics Data System (ADS)

    Karimi, Amir

    2017-09-01

    In this paper, we first construct the superposition of two displaced Fock states and two-mode entangled displaced Fock states mathematically by presenting theoretical methods. In these methods, we introduce new operators using the parity and displacement operators. It is observed that the superposition of two displaced Fock states and two-mode entangled displaced Fock states are constructed via the action of the introduced operators on one-mode and two-mode Fock states, respectively. Next, we show that the presented methods also have the ability to produce superpositions consisting of more than two displaced Fock states and multi-mode entangled displaced Fock states.

  9. Oblique superposition of two elliptically polarized lightwaves using geometric algebra: is energy-momentum conserved?

    PubMed

    Sze, Michelle Wynne C; Sugon, Quirino M; McNamara, Daniel J

    2010-11-01

    In this paper, we use Clifford (geometric) algebra Cl(3,0) to verify whether electromagnetic energy-momentum density is still conserved for the oblique superposition of two elliptically polarized plane waves with the same frequency. We show that energy-momentum conservation is valid at any time only for the superposition of two counter-propagating elliptically polarized plane waves. We show that the time-averaged energy-momentum of the superposition of two circularly polarized waves with opposite handedness is conserved regardless of the propagation directions of the waves. Finally, we show that the resulting momentum density of the superposed waves generally has a vector component perpendicular to the momentum densities of the individual waves.

  10. [Superposition of the motor commands during creation of static efforts by human hand muscles].

    PubMed

    Vereshchaka, I V; Horkovenko, A V

    2012-01-01

    The features of the superposition of central motor commands (CMCs) were studied during the generation of "two-joint" isometric efforts by the hand. Electromyogram (EMG) amplitudes recorded from the shoulder-girdle and shoulder muscles were used to estimate the CMC intensity. The forces were generated in the horizontal plane of the workspace; the position of the arm was fixed. Two vectors of equal amplitude and similar direction, and their geometric sum, were compared. The hypothesis of CMC superposition in the force-vector summation task was examined. The directions of the constituent and resultant forces for which superposition of the CMCs was satisfactory were determined. Differences in the co-activation patterns for the flexor and extensor muscles of both joints were shown. A high level of flexor muscle activity was observed during extension efforts, while flexion directions demonstrated much weaker activation of the extensor muscles.

  11. Quantum tic-tac-toe: A teaching metaphor for superposition in quantum mechanics

    NASA Astrophysics Data System (ADS)

    Goff, Allan

    2006-11-01

    Quantum tic-tac-toe was developed as a metaphor for the counterintuitive nature of superposition exhibited by quantum systems. It offers a way of introducing quantum physics without advanced mathematics, provides a conceptual foundation for understanding the meaning of quantum mechanics, and is fun to play. A single superposition rule is added to the child's game of classical tic-tac-toe. Each move consists of a pair of marks subscripted by the number of the move ("spooky" marks) that must be placed in different squares. When a measurement occurs, one spooky mark becomes real and the other disappears. Quantum tic-tac-toe illustrates a number of quantum principles including states, superposition, collapse, nonlocality, entanglement, the correspondence principle, interference, and decoherence. The game can be played on paper or on a white board. A Web-based version provides a refereed playing board to facilitate the mechanics of play, making it ideal for classrooms with a computer projector.

  12. Space-variant polarization patterns of non-collinear Poincaré superpositions

    NASA Astrophysics Data System (ADS)

    Galvez, E. J.; Beach, K.; Zeosky, J. J.; Khajavi, B.

    2015-03-01

    We present analysis and measurements of the polarization patterns produced by non-collinear superpositions of Laguerre-Gauss spatial modes in orthogonal polarization states, which are known as Poincaré modes. Our findings agree with the prediction (I. Freund, Opt. Lett. 35, 148-150 (2010)) that superpositions containing a C-point lead to a rotation of the polarization ellipse in three dimensions. Here we perform imaging polarimetry of superpositions of first- and zero-order spatial modes at relative beam angles of 0-4 arcmin. We find Poincaré-type polarization patterns showing fringes in polarization orientation, but which preserve the polarization-singularity index for all three cases of C-points: lemons, stars and monstars.

  13. Experimental implementation of the Deutsch-Jozsa algorithm for three-qubit functions using pure coherent molecular superpositions

    SciTech Connect

    Vala, Jiri; Kosloff, Ronnie; Amitay, Zohar; Zhang Bo; Leone, Stephen R.

    2002-12-01

    The Deutsch-Jozsa algorithm is experimentally demonstrated for three-qubit functions using pure coherent superpositions of Li₂ rovibrational eigenstates. The function's character, either constant or balanced, is evaluated by first imprinting the function, using a phase-shaped femtosecond pulse, on a coherent superposition of the molecular states, and then projecting the superposition onto an ionic final state, using a second femtosecond pulse at a specific time delay.

  14. Coherent scattering of a multiphoton quantum superposition by a mirror BEC.

    PubMed

    De Martini, Francesco; Sciarrino, Fabio; Vitelli, Chiara; Cataliotti, Francesco S

    2010-02-05

    We present a proposal for an experiment in which a multiphoton quantum superposition consisting of N ≈ 10⁵ particles, generated by a quantum-injected optical parametric amplifier seeded by a single photon belonging to an Einstein-Podolsky-Rosen entangled pair, is made to interact with a mirror Bose-Einstein condensate (BEC) shaped as a Bragg interference structure. The overall process will realize a macroscopic quantum superposition involving a microscopic single-photon state of polarization entangled with the coherent macroscopic transfer of momentum to the BEC structure, acting in spacelike separated distant places.

  15. Geometric measure of pairwise quantum discord for superpositions of multipartite generalized coherent states

    NASA Astrophysics Data System (ADS)

    Daoud, M.; Ahl Laamara, R.

    2012-07-01

    We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of the geometric measure of quantum discord with concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl-Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger-Horne-Zeilinger states.

  16. Quantum decoherence time scales for ionic superposition states in ion channels.

    PubMed

    Salari, V; Moradi, N; Sajadi, M; Fazileh, F; Shahbazi, F

    2015-03-01

    There are many controversial and challenging discussions about quantum effects in microscopic structures in neurons of the brain and their role in cognitive processing. In this paper, we focus on a small, nanoscale part of ion channels called the "selectivity filter", which plays a key role in the operation of an ion channel. Our results for superposition states of potassium ions indicate that decoherence times are of the order of picoseconds. This decoherence time is not long enough for cognitive processing in the brain; however, it may be adequate for quantum superposition states of ions in the filter to leave their quantum traces on the selectivity filter and action potentials.

  17. Quantum decoherence time scales for ionic superposition states in ion channels

    NASA Astrophysics Data System (ADS)

    Salari, V.; Moradi, N.; Sajadi, M.; Fazileh, F.; Shahbazi, F.

    2015-03-01

    There are many controversial and challenging discussions about quantum effects in microscopic structures in neurons of the brain and their role in cognitive processing. In this paper, we focus on a small, nanoscale part of ion channels called the "selectivity filter", which plays a key role in the operation of an ion channel. Our results for superposition states of potassium ions indicate that decoherence times are of the order of picoseconds. This decoherence time is not long enough for cognitive processing in the brain; however, it may be adequate for quantum superposition states of ions in the filter to leave their quantum traces on the selectivity filter and action potentials.

  18. Entanglement of arbitrary superpositions of modes within two-dimensional orbital angular momentum state spaces

    SciTech Connect

    Jack, B.; Leach, J.; Franke-Arnold, S.; Ireland, D. G.; Padgett, M. J.; Yao, A. M.; Barnett, S. M.; Romero, J.

    2010-04-15

    We use spatial light modulators (SLMs) to measure correlations between arbitrary superpositions of orbital angular momentum (OAM) states generated by spontaneous parametric down-conversion. Our technique allows us to fully access a two-dimensional OAM subspace described by a Bloch sphere, within the higher-dimensional OAM Hilbert space. We quantify the entanglement through violations of a Bell-type inequality for pairs of modal superpositions that lie on equatorial, polar, and arbitrary great circles of the Bloch sphere. Our work shows that SLMs can be used to measure arbitrary spatial states with a fidelity sufficient for appropriate quantum information processing systems.

  19. Complex periodic non-diffracting beams generated by superposition of two identical periodic wave fields

    NASA Astrophysics Data System (ADS)

    Gao, Yuanmei; Wen, Zengrun; Zheng, Liren; Zhao, Lina

    2017-04-01

    A method has been proposed to generate complex periodic discrete non-diffracting beams (PDNBs) via superposition of two identical simple PDNBs at a particular angle. As special cases, we studied the superposition of two identical square ("4+4") and two hexagonal ("6+6") periodic wave fields at specific angles, respectively, and obtained a series of interesting complex PDNBs. New PDNBs were also obtained by modulating the initial phase difference between adjacent interfering beams. In the experiment, a 4f Fourier filter system and a phase-only spatial light modulator imprinting the synthesized phase patterns of these PDNBs were used to produce the desired wave fields.
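
    A rough numerical illustration of the construction (grid size, beam spacing and angles are placeholders, not the authors' experimental values): a simple PDNB is a sum of plane waves whose transverse wave vectors sit on a ring, and the complex pattern is the intensity of two such fields superposed at a relative angle.

        import numpy as np

        def pdnb(n_beams, k_t, phase0=0.0, rotation=0.0, grid=512, extent=200e-6):
            """Transverse field of a periodic discrete non-diffracting beam built
            from n_beams plane waves on a ring of radius k_t (illustrative only)."""
            x = np.linspace(-extent, extent, grid)
            X, Y = np.meshgrid(x, x)
            field = np.zeros_like(X, dtype=complex)
            for m in range(n_beams):
                ang = 2 * np.pi * m / n_beams + rotation
                kx, ky = k_t * np.cos(ang), k_t * np.sin(ang)
                field += np.exp(1j * (kx * X + ky * Y + m * phase0))
            return field

        # "4+4": two identical square wave fields superposed at 45 degrees
        k_t = 2 * np.pi / 10e-6
        f1 = pdnb(4, k_t)
        f2 = pdnb(4, k_t, rotation=np.deg2rad(45))
        intensity = np.abs(f1 + f2) ** 2        # complex PDNB intensity pattern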

  20. Algorithms used in heterogeneous dose calculations show systematic differences as measured with the Radiological Physics Center’s anthropomorphic thorax phantom used for RTOG credentialing

    PubMed Central

    Kry, Stephen F.; Alvarez, Paola; Molineu, Andrea; Amador, Carrie; Galvin, James; Followill, David S.

    2012-01-01

    Purpose To determine the impact of treatment planning algorithm on the accuracy of heterogeneous dose calculations in the Radiological Physics Center (RPC) thorax phantom. Methods and Materials We retrospectively analyzed the results of 304 irradiations of the RPC thorax phantom at 221 different institutions as part of credentialing for RTOG clinical trials; the irradiations were all done using 6-MV beams. Treatment plans included those for intensity-modulated radiation therapy (IMRT) as well as 3D conformal therapy (3D CRT). Heterogeneous plans were developed using Monte Carlo (MC), convolution/superposition (CS) and the anisotropic analytic algorithm (AAA), as well as pencil beam (PB) algorithms. For each plan and delivery, the absolute dose measured in the center of a lung target was compared to the calculated dose, as was the planar dose in 3 orthogonal planes. The difference between measured and calculated dose was examined as a function of planning algorithm as well as use of IMRT. Results PB algorithms overestimated the dose delivered to the center of the target by 4.9% on average. Surprisingly, CS algorithms and AAA also showed a systematic overestimation of the dose to the center of the target, by 3.7% on average. In contrast, the MC algorithm dose calculations agreed with measurement within 0.6% on average. There was no difference observed between IMRT and 3D CRT calculation accuracy. Conclusion Unexpectedly, advanced treatment planning systems (those using CS and AAA algorithms) overestimated the dose that was delivered to the lung target. This issue requires attention in terms of heterogeneity calculations and potentially in terms of clinical practice. PMID:23237006

  1. Torsional random walk statistics on lattices using convolution on crystallographic motion groups.

    PubMed Central

    Skliros, Aris; Chirikjian, Gregory S.

    2007-01-01

    This paper presents a new algorithm for generating the conformational statistics of lattice polymer models. The inputs to the algorithm are the distributions of poses (positions and orientations) of reference frames attached to sequentially proximal bonds in the chain as it undergoes all possible torsional motions in the lattice. If z denotes the number of discrete torsional motions allowable around each of the n bonds, our method generates the probability distribution in end-to-end pose corresponding to all of the z^n independent lattice conformations in O(n^(D+1)) arithmetic operations for lattices in D-dimensional space. This is achieved by dividing the chain into short segments and performing multiple generalized convolutions of the pose distribution functions for each segment. The convolution is performed with respect to the crystallographic space group for the lattice on which the chain is defined. The formulation is modified to include the effects of obstacles (excluded volumes), and to calculate the frequency of the occurrence of each conformation when the effects of pairwise conformational energy are included. In the latter case (which is for 3-dimensional lattices only) the computational cost is O(z^4 n^4). This polynomial complexity is a vast improvement over the O(z^n) exponential complexity associated with the brute-force enumeration of all conformations. The distribution of end-to-end distances and average radius of gyration are calculated easily once the pose distribution for the full chain is found. The method is demonstrated with square, hexagonal, cubic and tetrahedral lattices. PMID:17898862

  2. Super-resolution reconstruction algorithm based on adaptive convolution kernel size selection

    NASA Astrophysics Data System (ADS)

    Gao, Hang; Chen, Qian; Sui, Xiubao; Zeng, Junjie; Zhao, Yao

    2016-09-01

    Restricted by detector technology and the optical diffraction limit, the spatial resolution of infrared imaging systems is difficult to improve significantly. Super-resolution (SR) reconstruction algorithms are an effective way to address this problem. Among them, SR algorithms based on multichannel blind deconvolution (MBD) estimate the convolution kernel only from low-resolution observation images, using appropriate regularization constraints introduced through a priori assumptions, to realize high-resolution image restoration. The algorithm has been shown to be effective when the channels are coprime. In this paper, we use the significant edges to estimate the convolution kernel and introduce an adaptive convolution kernel size selection mechanism to address the uncertainty of the convolution kernel size in MBD processing. To reduce the interference of noise, we amend the convolution kernel in an iterative process, and finally restore a clear image. Experimental results show that the algorithm can meet the convergence requirement of the convolution kernel estimation.

  3. Approaches to reducing photon dose calculation errors near metal implants.

    PubMed

    Huang, Jessie Y; Followill, David S; Howell, Rebecca M; Liu, Xinming; Mirkovic, Dragan; Stingo, Francesco C; Kry, Stephen F

    2016-09-01

    Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare's o-mar, GE Healthcare's monochromatic gemstone spectral imaging (gsi) using dual-energy CT, and gsi with metal artifact reduction software (mars) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact reduction methods investigated

  4. Approaches to reducing photon dose calculation errors near metal implants

    PubMed Central

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Liu, Xinming; Mirkovic, Dragan; Stingo, Francesco C.; Kry, Stephen F.

    2016-01-01

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s o-mar, GE Healthcare’s monochromatic gemstone spectral imaging (gsi) using dual-energy CT, and gsi with metal artifact reduction software (mars) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact

  5. Chaos and Complexities Theories. Superposition and Standardized Testing: Are We Coming or Going?

    ERIC Educational Resources Information Center

    Erwin, Susan

    2005-01-01

    The purpose of this paper is to explore the possibility of using the principle of "superposition of states" (commonly illustrated by Schrodinger's Cat experiment) to understand the process of using standardized testing to measure a student's learning. Comparisons from literature, neuroscience, and Schema Theory will be used to expound upon the…

  6. Active two-pulse superposition technique of a pulsed Nd:YAG laser

    NASA Astrophysics Data System (ADS)

    Kim, Hee-Je; Joung, Jong-Han; Lee, Dong-Hoon; Kim, Dong-Hyun

    1998-06-01

    In manufacturing processes, various suitable pulse shapes are required for materials processing, and the pulse shape is regarded as a dominant factor depending on the specific properties of the material. Therefore, this study set out to generate suitable pulse shapes using a two-pulse superposition technique. A pulsed Nd:YAG laser using a multi-mesh network and the pulse superposition technique is described. Two pulses were superposed using a single-shot multivibrator (SSM, 74LS123) and two SCRs that control the delay times of the two gate turn-on signals. One is a main rectangular pulse, and the other is a superposed sinusoidal pulse. This technique may offer a number of advantages to users who need a suitable pulse shape for particular applications such as welding, cutting, and drilling. In addition, we compared the case of no superposition, with an applied input energy of 100 J in the main circuit only, with the case of superposition, with 75 J applied in the main circuit and 25 J in the superposing circuit. A maximum efficiency of 4.5% is obtained within the range of delay times below 230 μs by adjusting the delay time from 0 to 1 ms.

  7. Application of time-temperature-stress superposition on creep of wood-plastic composites

    NASA Astrophysics Data System (ADS)

    Chang, Feng-Cheng; Lam, Frank; Kadla, John F.

    2013-08-01

    The time-temperature-stress superposition principle (TTSSP) has been widely applied in studies of the viscoelastic properties of materials. It involves shifting curves measured at various conditions to construct master curves. To extend the application of this principle, a temperature-stress hybrid shift factor and a modified Williams-Landel-Ferry (WLF) equation that incorporates stress and temperature variables in the shift-factor fitting were studied. A wood-plastic composite (WPC) was selected as the test subject for a series of short-term creep tests. The results indicate that the WPC was a rheologically simple material and that merely a horizontal shift was needed for time-temperature superposition, whereas vertical shifting would be needed for time-stress superposition. The shift factor was independent of stress for horizontal shifts in time-temperature superposition. In addition, the temperature- and stress-shift factors used to construct master curves were well fitted with the WLF equation, and the parameters of the modified WLF equation were also successfully calibrated. The application of this method and equation can be extended to curve shifting that involves the effects of temperature and stress simultaneously.

  8. Using Musical Intervals to Demonstrate Superposition of Waves and Fourier Analysis

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2013-01-01

    What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.

  9. Drawings and Ideas of Physics Teacher Candidates Relating to the Superposition Principle on a Continuous Rope

    ERIC Educational Resources Information Center

    Sengoren, Serap Kaya; Tanel, Rabia; Kavcar, Nevzat

    2006-01-01

    The superposition principle is used to explain many phenomena in physics. Incomplete knowledge about this topic at a basic level leads to physics students having problems in the future. As long as prospective physics teachers have difficulties in the subject, it is inevitable that high school students will have the same difficulties. The aim of…

  10. On sufficient statistics of least-squares superposition of vector sets.

    PubMed

    Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M

    2015-06-01

    The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of superpositions (and sufficient statistics) of vector sets that are composed from constituent vector sets under addition or deletion operations, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of methods that repeatedly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
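
    A minimal sketch of what additive sufficient statistics can look like for 3-D point sets, with the optimal (Kabsch-type) rotation and RMSD recovered from the statistics alone; the exact statistics and update scheme used in the article may differ.

        import numpy as np

        def suffstats(X, Y):
            """Additive sufficient statistics for superposing the corresponding
            point sets X onto Y (rows are 3-D points).  Combining two fragments is
            just element-wise addition of these statistics."""
            return {
                "n": len(X),
                "sx": X.sum(axis=0), "sy": Y.sum(axis=0),
                "sxy": Y.T @ X,                          # cross-dispersion matrix
                "sxx": (X ** 2).sum(), "syy": (Y ** 2).sum(),
            }

        def combine(a, b):
            return {k: a[k] + b[k] for k in a}

        def superpose(stats):
            """Optimal rotation mapping X onto Y, and RMSD, from the statistics."""
            n, sx, sy = stats["n"], stats["sx"], stats["sy"]
            C = stats["sxy"] - np.outer(sy, sx) / n      # centred cross-dispersion
            U, S, Vt = np.linalg.svd(C)
            d = np.sign(np.linalg.det(U @ Vt))
            R = U @ np.diag([1, 1, d]) @ Vt              # proper rotation
            e = (stats["sxx"] - sx @ sx / n) + (stats["syy"] - sy @ sy / n) \
                - 2 * (S[0] + S[1] + d * S[2])
            return R, np.sqrt(max(e, 0.0) / n)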

  11. Adiabatic generation of arbitrary coherent superpositions of two quantum states: Exact and approximate solutions

    NASA Astrophysics Data System (ADS)

    Zlatanov, Kaloyan N.; Vitanov, Nikolay V.

    2017-07-01

    The common objective of the application of adiabatic techniques in the field of quantum control is to transfer a quantum system from one discrete energy state to another. These techniques feature both high efficiency and insensitivity to variations in the experimental parameters, e.g., variations in the driving field amplitude, duration, frequency, and shape, as well as fluctuations in the environment. Here we explore the potential of adiabatic techniques for creating arbitrary predefined coherent superpositions of two quantum states. We show that an equally weighted coherent superposition can be created by temporal variation of the ratio between the Rabi frequency Ω(t) and the detuning Δ(t) from 0 to ∞ (case 1) or vice versa (case 2), as is readily deduced from the explicit adiabatic solution for the Bloch vector. We infer important differences between cases 1 and 2 in the composition of the created coherent superposition: the latter depends on the dynamical phase of the process in case 2, while it does not depend on this phase in case 1. Furthermore, an arbitrary coherent superposition of unequal weights can be created by using asymptotic ratios of Ω(t)/Δ(t) different from 0 and ∞. We supplement the general adiabatic solution with analytic solutions for three exactly soluble models: two trigonometric models and the hyperbolic Demkov-Kunike model. They allow us not only to demonstrate the general predictions in specific cases but also to derive the nonadiabatic corrections to the adiabatic solutions.
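
    A compact way to see the role of the asymptotic ratio, sketched with the standard two-state RWA Hamiltonian (sign and phase conventions vary between references, and, as the abstract notes, the relative phase also carries the dynamical phase in case 2):

        H(t) = \frac{\hbar}{2}\begin{pmatrix} -\Delta(t) & \Omega(t) \\ \Omega(t) & \Delta(t) \end{pmatrix},
        \qquad \tan 2\vartheta(t) = \frac{\Omega(t)}{\Delta(t)},

        |\psi_{\mathrm{ad}}(t)\rangle = \cos\vartheta(t)\,|1\rangle + e^{i\varphi(t)}\sin\vartheta(t)\,|2\rangle,
        \qquad \frac{\Omega}{\Delta}: 0 \to \infty \;\Rightarrow\; \vartheta: 0 \to \frac{\pi}{4}.

    In this picture the asymptotic value of Ω/Δ fixes the superposition weights: 0 leaves the system in |1⟩, ∞ gives the equally weighted superposition, and a finite limiting ratio gives the unequal weights cos²θ : sin²θ.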

  12. Using musical intervals to demonstrate superposition of waves and Fourier analysis

    NASA Astrophysics Data System (ADS)

    LoPresto, Michael C.

    2013-09-01

    What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.
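
    A minimal numerical version of the same demonstration (frequencies and duration are illustrative, not the article's values): superpose two "tuning fork" tones a perfect fifth apart and inspect the Fourier spectrum of the combined waveform.

        import numpy as np

        fs = 44100                          # samples per second
        t = np.arange(0, 1.0, 1 / fs)       # one second of signal
        # perfect fifth: frequency ratio 3:2, e.g. A4 = 440 Hz and E5 = 660 Hz
        wave = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 660 * t)

        spectrum = np.abs(np.fft.rfft(wave))
        freqs = np.fft.rfftfreq(len(wave), 1 / fs)
        peaks = freqs[np.argsort(spectrum)[-2:]]   # two dominant components
        print(sorted(peaks))                        # ~[440.0, 660.0]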

  13. Using Musical Intervals to Demonstrate Superposition of Waves and Fourier Analysis

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2013-01-01

    What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.

  14. Anomalous Lack of Decoherence of the Macroscopic Quantum Superpositions Based on Phase-Covariant Quantum Cloning

    NASA Astrophysics Data System (ADS)

    de Martini, Francesco; Sciarrino, Fabio; Spagnolo, Nicolò

    2009-09-01

    We show that all macroscopic quantum superpositions (MQS) based on phase-covariant quantum cloning are characterized by an anomalously high resilience to decoherence processes. The analysis supports the results of recent MQS experiments and leads to a useful conjecture regarding the realization of complex decoherence-free structures for quantum information, such as the quantum computer.

  15. Anomalous lack of decoherence of the macroscopic quantum superpositions based on phase-covariant quantum cloning.

    PubMed

    De Martini, Francesco; Sciarrino, Fabio; Spagnolo, Nicolò

    2009-09-04

    We show that all macroscopic quantum superpositions (MQS) based on phase-covariant quantum cloning are characterized by an anomalously high resilience to decoherence processes. The analysis supports the results of recent MQS experiments and leads to a useful conjecture regarding the realization of complex decoherence-free structures for quantum information, such as the quantum computer.

  16. Reservoir engineering of a mechanical resonator: generating a macroscopic superposition state and monitoring its decoherence

    NASA Astrophysics Data System (ADS)

    Asjad, Muhammad; Vitali, David

    2014-02-01

    A deterministic scheme for generating a macroscopic superposition state of a nanomechanical resonator is proposed. The nonclassical state is generated through a suitably engineered dissipative dynamics exploiting the optomechanical quadratic interaction with a bichromatically driven optical cavity mode. The resulting driven dissipative dynamics can be employed for monitoring and testing the decoherence processes affecting the nanomechanical resonator under controlled conditions.

  17. GPU-based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    PubMed

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2016-11-07

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92% (CPU) to 96% (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.

  18. Measurement-based generation of shaped single photons and coherent state superpositions in optical cavities

    NASA Astrophysics Data System (ADS)

    Lecamwasam, Ruvindha L.; Hush, Michael R.; James, Matthew R.; Carvalho, André R. R.

    2017-01-01

    We propose related schemes to generate arbitrarily shaped single photons, i.e., photons with an arbitrary temporal profile, and coherent state superpositions using simple optical elements. The first system consists of two coupled cavities, a memory cavity and a shutter cavity, containing a second-order optical nonlinearity and electro-optic modulator (EOM), respectively. Photodetection events of the shutter cavity output herald preparation of a single photon in the memory cavity, which may be stored by immediately changing the optical length of the shutter cavity with the EOM after detection. On-demand readout of the photon, with arbitrary shaping, can be achieved through modulation of the EOM. The second scheme consists of a memory cavity with two outputs, which are interfered, phase shifted, and measured. States that closely approximate a coherent state superposition can be produced through postselection for sequences of detection events, with more photon detection events leading to a larger superposition. We furthermore demonstrate that no-knowledge feedback can be easily implemented in this system and used to preserve the superposition state, as well as provide an extra control mechanism for state generation.

  19. Identification of the Hereditary Kernels of Isotropic Linear Viscoelastic Materials in Combined Stress State. 1. Superposition of Shear and Bulk Creep

    NASA Astrophysics Data System (ADS)

    Golub, V. P.; Maslov, B. P.; Fernati, P. V.

    2016-03-01

    Relations between the shear and bulk creep kernels of an isotropic linear viscoelastic material in combined stress state and the longitudinal and shear creep kernels constructed from data of creep tests under uniaxial tension and pure torsion are formulated. The constitutive equations of viscoelasticity for the combined stress state are chosen in the form of a superposition of the equation for shear strains and the equation for bulk strains. The hereditary kernels are described by Rabotnov's fractional-exponential functions. The creep strains of thin-walled pipes under a combination of tension and torsion or tension and internal pressure are calculated.

  20. Improved iterative image reconstruction using variable projection binning and abbreviated convolution.

    PubMed

    Schmidlin, P

    1994-09-01

    Noise propagation in iterative reconstruction can be reduced by exact data projection. This can be done by area-weighted projection using the convolution method. Large arrays have to be convolved in order to achieve satisfactory image quality. Two procedures are described which improve the convolution method used so far. Variable binning helps to reduce the size of the convolution arrays without loss of image quality. Computation time is further reduced by abbreviated convolution. The effects of the procedures are illustrated by means of phantom measurements.

  1. Operational and convolution properties of three-dimensional Fourier transforms in spherical polar coordinates.

    PubMed

    Baddour, Natalie

    2010-10-01

    For functions that are best described with spherical coordinates, the three-dimensional Fourier transform can be written in spherical coordinates as a combination of spherical Hankel transforms and spherical harmonic series. However, to be as useful as its Cartesian counterpart, a spherical version of the Fourier operational toolset is required for the standard operations of shift, multiplication, convolution, etc. This paper derives the spherical version of the standard Fourier operation toolset. In particular, convolution in various forms is discussed in detail as this has important consequences for filtering. It is shown that standard multiplication and convolution rules do apply as long as the correct definition of convolution is applied.
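
    For reference, the Cartesian counterpart that this spherical toolset generalizes is the familiar three-dimensional convolution theorem (written here in the unitary transform convention; the prefactor changes with the normalization chosen):

        (f * g)(\mathbf{r}) = \int_{\mathbb{R}^3} f(\mathbf{r}')\, g(\mathbf{r} - \mathbf{r}')\, d^3 r',
        \qquad
        \mathcal{F}\{f * g\}(\mathbf{k}) = (2\pi)^{3/2}\, \mathcal{F}\{f\}(\mathbf{k})\, \mathcal{F}\{g\}(\mathbf{k}).

    The paper's point is that an analogous product rule survives in the spherical Hankel and spherical harmonic representation, provided convolution is defined appropriately.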

  2. Direct phase-domain calculation of transmission line transients using two-sided recursions

    SciTech Connect

    Angelidis, G.; Semlyen, A.

    1995-04-01

    This paper presents a new method for the simulation of electromagnetic transients on transmission lines. Instead of using convolutions of the input variables only, the authors perform short convolutions with both input and output variables. The result is a method of Two-Sided Recursions (TSR), which is comparable in efficiency with the existing recursive convolutions or with their equivalent state variable formulations. It is, however, conceptually simpler and can be applied, in addition to fast modal-domain solutions, to the direct phase-domain calculation of transmission line transients with very accurate results.
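
    The "short convolutions with both input and output variables" have the structure of an ARMA-type recursion; the sketch below is purely illustrative of that structure, with placeholder coefficients rather than the fitted TSR coefficients or the line model from the paper.

        import numpy as np

        def two_sided_recursion(x, a, b):
            """Output update using short convolutions of the input history
            (coefficients a, lags 0..len(a)-1) and the output history
            (coefficients b, lags 1..len(b)):
                y[n] = sum_k a[k] x[n-k] - sum_k b[k] y[n-1-k]"""
            y = np.zeros(len(x))
            for n in range(len(x)):
                acc = sum(a[k] * x[n - k] for k in range(len(a)) if n - k >= 0)
                acc -= sum(b[k] * y[n - 1 - k] for k in range(len(b)) if n - 1 - k >= 0)
                y[n] = acc
            return y

        # example with arbitrary illustrative coefficients
        y = two_sided_recursion(np.ones(10), a=[0.5, 0.3], b=[0.2])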

  3. Validation of GPU based TomoTherapy dose calculation engine.

    PubMed

    Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond

    2012-04-01

    The graphic processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architecture difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes made the GPU dose slightly different from the CPU-cluster dose. Before the commercial release of the GPU dose engine, its accuracy had to be validated. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two dose engines. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified with absolute point dose measurement with ion chamber and film measurements for phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in heterogeneous phantom and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine. The majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The worst case observed in the phantom had 0.22% voxels violating the criterion. In patient cases, the worst percentage of voxels violating the criterion was 0.57%. For absolute point dose verification, all cases agreed with measurement to within ±3% with average error magnitude within 1%. All cases passed the acceptance criterion that more than 95% of the pixels have Γ(3%, 3 mm) < 1 in film measurement, and the average passing pixel percentage is 98.5%-99%. The GPU dose engine also showed similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster based dose engine without degradation in dose accuracy.
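
    The equivalency evaluation relies on the gamma index; below is a minimal one-dimensional sketch of how Γ can be computed (clinical tools evaluate it in two or three dimensions with interpolation; the 1%/1 mm and 3%/3 mm criteria are those quoted in the abstract).

        import numpy as np

        def gamma_index(ref_dose, ref_x, eval_dose, eval_x, dose_tol, dist_tol):
            """1-D gamma evaluation: for each reference point, search the evaluated
            profile for the minimum combined dose-difference / distance-to-agreement
            metric; a point passes when gamma < 1."""
            gamma = np.empty(len(ref_dose))
            for i, (d_r, x_r) in enumerate(zip(ref_dose, ref_x)):
                dd = (eval_dose - d_r) / dose_tol    # dose differences, in tolerances
                dx = (eval_x - x_r) / dist_tol       # distances, in tolerances
                gamma[i] = np.sqrt(dd ** 2 + dx ** 2).min()
            return gamma

        # pass rate for a Gamma(1%, 1 mm) criterion, with dose_tol = 1% of maximum:
        # pass_rate = np.mean(gamma_index(ref, x, ev, x, 0.01 * ref.max(), 1.0) < 1)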

  4. Finding strong lenses in CFHTLS using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Jacobs, C.; Glazebrook, K.; Collett, T.; More, A.; McCarthy, C.

    2017-10-01

    We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62 406 simulated lenses and 64 673 non-lens negative examples generated with two different methodologies. An ensemble of trained networks was applied to all of the 171 deg² of the CFHTLS wide field image data, identifying 18 861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalogue as potential deflectors identified 2465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2097 candidates we classify as false positives. For the catalogue-based search we estimate a completeness of 21-28 per cent with respect to detectable lenses and a purity of 15 per cent, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify 20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.

  5. Medical image fusion using the convolution of Meridian distributions.

    PubMed

    Agrawal, Mayank; Tsakalides, Panagiotis; Achim, Alin

    2010-01-01

    The aim of this paper is to introduce a novel non-Gaussian statistical model-based approach for medical image fusion based on the Meridian distribution. The paper also includes a new approach to estimate the parameters of generalized Cauchy distribution. The input images are first decomposed using the Dual-Tree Complex Wavelet Transform (DT-CWT) with the subband coefficients modelled as Meridian random variables. Then, the convolution of Meridian distributions is applied as a probabilistic prior to model the fused coefficients, and the weights used to combine the source images are optimised via Maximum Likelihood (ML) estimation. The superior performance of the proposed method is demonstrated using medical images.

  6. Faster GPU-based convolutional gridding via thread coarsening

    NASA Astrophysics Data System (ADS)

    Merry, B.

    2016-07-01

    Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to 3.2× for single-polarization gridding and 1.9× for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.

  7. Convolutional neural networks for synthetic aperture radar classification

    NASA Astrophysics Data System (ADS)

    Profeta, Andrew; Rodriguez, Andres; Clouse, H. Scott

    2016-05-01

    For electro-optical object recognition, convolutional neural networks (CNNs) are the state of the art. For large datasets, CNNs are able to learn meaningful features used for classification. However, their application to synthetic aperture radar (SAR) has been limited. In this work we experimented with various CNN architectures on the MSTAR SAR dataset. As the input to the CNN we used the magnitude and phase (2 channels) of the SAR imagery. We used the deep learning toolboxes CAFFE and Torch7. Our results show that we can achieve 93% accuracy on the MSTAR dataset using CNNs.
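
    A minimal sketch of a two-channel (magnitude and phase) CNN of the kind described, written here with PyTorch rather than the CAFFE/Torch7 toolboxes named in the abstract; the layer sizes are placeholders rather than the architectures evaluated in the paper, and the output layer assumes the standard ten-class MSTAR setup.

        import torch.nn as nn

        # two input channels: SAR magnitude and phase
        model = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 10),        # standard 10-class MSTAR target set
        )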

  8. New syndrome decoding techniques for the (n, k) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum-weight error vector Ê(D). An example illustrating the new decoding algorithm is given for the binary nonsystematic (3,1) CC. Previously announced in STAR as N83-34964

  9. New Syndrome Decoding Techniques for the (n, K) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    This paper presents a new syndrome decoding algorithm for the (n,k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum-weight error vector Ê(D). An example illustrating the new decoding algorithm is given for the binary nonsystematic (3,1) CC.

  10. Simplified Syndrome Decoding of (n, 1) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum-weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.

  11. Surrogacy theory and models of convoluted organic systems.

    PubMed

    Konopka, Andrzej K

    2007-03-01

    The theory of surrogacy is briefly outlined as one of the conceptual foundations of systems biology that has been developed over the last 30 years in the context of the Hertz-Rosen modeling relationship. Conceptual foundations of modeling convoluted (biologically complex) systems are briefly reviewed and discussed in terms of current and future research in systems biology. New as well as older results that pertain to the concepts of modeling relationship, sequence of surrogacies, cascade of representations, complementarity, analogy, metaphor, and epistemic time are presented together with a classification of models in a cascade. Examples of anticipated future applications of surrogacy theory in life sciences are briefly discussed.

  12. A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution

    SciTech Connect

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  13. Convolution Algebra for Fluid Modes with Finite Energy

    DTIC Science & Technology

    1992-04-01

    This technical report develops a convolution algebra for fluid modes with finite spatial and temporal extents, based on a full form of wavelet expansion developed at Boston University.

  14. Continuous speech recognition based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Qing-qing; Liu, Yong; Pan, Jie-lin; Yan, Yong-hong

    2015-07-01

    Convolutional Neural Networks (CNNs), which have shown success in achieving translation invariance for many image processing tasks, are investigated for continuous speech recognition in this paper. Compared to Deep Neural Networks (DNNs), which have been proven successful in many speech recognition tasks, CNNs can reduce the NN model sizes significantly, and at the same time achieve even better recognition accuracies. Experiments on the standard TIMIT speech corpus showed that CNNs outperformed DNNs in terms of accuracy even when the CNNs had a smaller model size.

  15. Convolution seal for transition duct in turbine system

    DOEpatents

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-05-26

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface feature for interfacing with an adjacent transition duct. The turbine system further includes a convolution seal contacting the interface feature to provide a seal between the interface feature and the adjacent transition duct.

  16. A digital model for streamflow routing by convolution methods

    USGS Publications Warehouse

    Doyle, W.H.; Shearman, H.O.; Stiltner, G.J.; Krug, W.O.

    1984-01-01

    A U.S. Geological Survey computer model, CONROUT, for routing streamflow by unit-response convolution flow-routing techniques from an upstream channel location to a downstream channel location has been developed and documented. Calibration and verification of the flow-routing model and subsequent use of the model for simulation are also documented. Three hypothetical examples and two field applications are presented to illustrate basic flow-routing concepts. Most of the discussion is limited to daily flow routing since, to date, all completed and current studies of this nature involve daily flow routing. However, the model is programmed to accept hourly flow-routing data. (USGS)
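
    The core of unit-response convolution routing can be sketched in a few lines (a generic illustration of the technique, not the CONROUT implementation; the unit-response values below are placeholders).

        import numpy as np

        def route(inflow, unit_response):
            """Downstream hydrograph as the upstream daily flow series convolved
            with the unit-response function of the reach."""
            return np.convolve(inflow, unit_response)[: len(inflow)]

        # example: a unit response that spreads each day's flow over three days
        q_down = route(np.array([0, 10, 40, 25, 10, 5, 0], float),
                       np.array([0.2, 0.5, 0.3]))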

  17. Tandem mass spectrometry data quality assessment by self-convolution

    PubMed Central

    Choo, Keng Wah; Tham, Wai Mun

    2007-01-01

    Background Many algorithms have been developed for deciphering tandem mass spectrometry (MS) data sets. They can essentially be clustered into two classes: the first performs searches on a theoretical mass spectrum database, while the second is based on de novo sequencing from raw mass spectrometry data. It was noted that the quality of mass spectra significantly affects the protein identification process in both instances. This prompted the authors to explore ways to measure the quality of MS data sets before subjecting them to the protein identification algorithms, thus allowing for more meaningful searches and increased confidence in the proteins identified. Results The proposed method measures the quality of MS data sets based on the symmetric property of the b- and y-ion peaks present in an MS spectrum. Self-convolution of the MS data with its time-reversed copy was employed. Due to the symmetric nature of the b-ion and y-ion peaks, the self-convolution result of a good spectrum produces its highest intensity peak at the mid point. To reduce processing time, self-convolution was achieved using the Fast Fourier Transform and its inverse transform, followed by the removal of the "DC" (Direct Current) component and the normalisation of the data set. The quality score was defined as the ratio of the intensity at the mid point to that of the remaining peaks of the convolution result. The method was validated using both theoretical mass spectra, with various permutations, and several real MS data sets. The results were encouraging, revealing a high percentage of positive prediction rates for spectra with good quality scores. Conclusion We have demonstrated in this work a method for determining the quality of tandem MS data sets. By pre-determining the quality of tandem MS data before subjecting them to protein identification algorithms, spurious protein predictions due to poor tandem MS data are avoided, giving scientists greater confidence in the predicted results. We conclude that
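
    A sketch of a quality score along the lines described in the abstract, assuming the spectrum has been binned from zero up to the precursor mass so that complementary b/y pairs pile up near the mid point of the self-convolution; the binning, DC removal and normalisation details here are assumptions rather than the paper's exact procedure.

        import numpy as np

        def spectrum_quality(spectrum):
            """Self-convolve the binned spectrum via FFT, remove the "DC" component,
            normalise, and return the ratio of the mid-point intensity to the rest."""
            s = np.asarray(spectrum, float)
            n = len(s)
            S = np.fft.rfft(s, 2 * n)
            conv = np.fft.irfft(S * S, 2 * n)[: 2 * n - 1]   # linear self-convolution
            conv = np.abs(conv - conv.mean())                # remove "DC", rectify
            conv /= conv.sum() + 1e-12                       # normalise
            mid = conv[n - 1]                                # bin near the precursor mass
            return mid / (conv.sum() - mid + 1e-12)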

  18. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
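
    A minimal sketch of the LIC step alone (nearest-neighbour sampling, box kernel, no dye advection or volume rendering), to show how the noise texture is averaged along streamlines of the vector field; parameters are illustrative.

        import numpy as np

        def lic(vx, vy, noise, length=20, step=0.5):
            """For every pixel, trace the streamline forward and backward through
            the 2-D vector field (vx, vy) and average the noise texture along it."""
            h, w = noise.shape
            out = np.zeros_like(noise)
            for y in range(h):
                for x in range(w):
                    acc, cnt = 0.0, 0
                    for sign in (+1.0, -1.0):
                        px, py = float(x), float(y)
                        for _ in range(length):
                            i, j = int(round(py)) % h, int(round(px)) % w
                            acc += noise[i, j]
                            cnt += 1
                            u, v = vx[i, j], vy[i, j]
                            norm = np.hypot(u, v) + 1e-8
                            px += sign * step * u / norm
                            py += sign * step * v / norm
                    out[y, x] = acc / cnt
            return out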

  19. Convolution seal for transition duct in turbine system

    DOEpatents

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-03-10

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface member for interfacing with a turbine section. The turbine system further includes a convolution seal contacting the interface member to provide a seal between the interface member and the turbine section.

  20. Aquifer response to stream-stage and recharge variations. II. Convolution method and applications

    USGS Publications Warehouse

    Barlow, P.M.; DeSimone, L.A.; Moench, A.F.

    2000-01-01

    In this second of two papers, analytical step-response functions, developed in the companion paper for several cases of transient hydraulic interaction between a fully penetrating stream and a confined, leaky, or water-table aquifer, are used in the convolution integral to calculate aquifer heads, streambank seepage rates, and bank storage that occur in response to streamstage fluctuations and basinwide recharge or evapotranspiration. Two computer programs developed on the basis of these step-response functions and the convolution integral are applied to the analysis of hydraulic interaction of two alluvial stream-aquifer systems in the northeastern and central United States. These applications demonstrate the utility of the analytical functions and computer programs for estimating aquifer and streambank hydraulic properties, recharge rates, streambank seepage rates, and bank storage. Analysis of the water-table aquifer adjacent to the Blackstone River in Massachusetts suggests that the very shallow depth of water table and associated thin unsaturated zone at the site cause the aquifer to behave like a confined aquifer (negligible specific yield). This finding is consistent with previous studies that have shown that the effective specific yield of an unconfined aquifer approaches zero when the capillary fringe, where sediment pores are saturated by tension, extends to land surface. Under this condition, the aquifer's response is determined by elastic storage only. Estimates of horizontal and vertical hydraulic conductivity, specific yield, specific storage, and recharge for a water-table aquifer adjacent to the Cedar River in eastern Iowa, determined by the use of analytical methods, are in close agreement with those estimated by use of a more complex, multilayer numerical model of the aquifer. Streambank leakance of the semipervious streambank materials also was estimated for the site. The streambank-leakance parameter may be considered to be a general (or lumped

  1. SU-E-T-607: An Experimental Validation of Gamma Knife Based Convolution Algorithm On Solid Acrylic Anthropomorphic Phantom

    SciTech Connect

    Gopishankar, N; Bisht, R K

    2014-06-01

    Purpose: To perform a dosimetric evaluation of the convolution algorithm in Gamma Knife (Perfexion model) using a solid acrylic anthropomorphic phantom. Methods: An in-house developed acrylic phantom with an ion chamber insert was used for this purpose. The middle insert was designed to accept the ion chamber from the top (head) as well as from the bottom (neck) of the phantom; hence measurements were made at two different positions simultaneously. A Leksell frame fixed to the phantom simulated patient treatment. Prior to the dosimetric study, the Hounsfield units and electron density of the acrylic material were incorporated into the calibration curve in the TPS for the convolution algorithm calculation. A CT scan of the phantom with the ion chamber (PTW Freiburg, 0.125 cc) was obtained with the following scanning parameters: tube voltage 110 kV, slice thickness 1 mm, and FOV 240 mm. Three separate single-shot plans were generated in the LGP TPS (version 10.1) with collimators of 16 mm, 8 mm, and 4 mm, respectively, for both ion chamber positions. Both TMR10 and convolution algorithm based planning (CABP) were used for dose calculation. A dose of 6 Gy at the 100% isodose was prescribed at the centre of the ion chamber visible in the CT scan. The phantom with the ion chamber was positioned on the treatment couch for dose delivery. Results: The ion chamber measured dose was 5.98 Gy for the 16 mm collimator shot plan, a deviation of less than 1% for the convolution algorithm, whereas with TMR10 the measured dose was 5.6 Gy. For the 8 mm and 4 mm collimator plans, doses of only 3.86 Gy and 2.18 Gy, respectively, were delivered in the TPS-calculated time for CABP. Conclusion: CABP is expected to predict the delivery time accurately for all collimators, but significant variation in measured dose was observed for the 8 mm and 4 mm collimators, which may be due to a collimator size effect. Metal artifacts caused by the pins and frame in the CT scan may also play a role in the misinterpretation of CABP. The study requires further investigation.

  2. Free-flow reabsorption of glucose, sodium, osmoles and water in rat proximal convoluted tubule.

    PubMed Central

    Bishop, J H; Green, R; Thomas, S

    1979-01-01

    1. Reabsorption of glucose, sodium, total solute (osmoles) and water in the rat proximal tubule (pars convoluta) was studied by free-flow micropuncture at normal (saline-infused), suppressed (saline with phlorizin) and elevated (glucose infusion) glucose reabsorption rates. 2. Phlorizin completely inhibited net glucose reabsorption, approximately halved reabsorption of sodium, total solutes and water, and reduced single nephron glomerular filtration rate (SNGFR). 3. In saline- and glucose-infused groups, there were no significant differences between SNGFR nor between reabsorptions (fractional and absolute) of either sodium, total solute or water, which were uniformly distributed along segments accessible to micropuncture. 4. Glucose reabsorptive capacity existed along the entire pars convoluta, with highest reabsorptive rates in convolutions closest to the glomerulus (in saline-infused rats, 90% fractional reabsorption at 2 mm, over 95% at the end of the pars convoluta; in glucose-infused rats, 55 and 90%, respectively). 5. In saline- and glucose-infused rats, a significant correlation existed between net glucose and sodium reabsorption, but the regression slopes differed and correlations became non-significant when the reabsorptive fluxes were factored by SNGFR. 6. For all groups, the majority of tubular fluid (TF) concentrations of osmoles and sodium were lower than those in plasma (over-all mean TFosm/Posm = 0.973 +/- 0.004, P less than 0.001; TFNa/PNa = 0.964 +/- 0.005, P less than 0.001). 7. Correspondingly, calculated osmolal and sodium concentrations in the reabsorbate were greater than those in plasma, and were significantly correlated with distance to puncture site, with maximal values in the most proximal convolutions (for osmolality, approximately +79 m-osmole kg-1 water at 1 mm). PMID:469722

  3. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.

  4. Nonlinear superposition of strong gravitational field of compact stars

    NASA Astrophysics Data System (ADS)

    Chen, Shao-Guang

    According to QFT it is deduced that gravitation likely originates from the polarization effect of Dirac vacuum fluctuation (Chen Shao-Guang, Nuovo Cimento B 104, 611, 1989). In the Dirac vacuum the lowest-energy virtual neutrinos ν are the most numerous and exert an isotropic colliding pressure on an isolated mass-point A (m), so the net force on A is zero. When another mass-point B (M) near A obstructs the ν flux reaching A, the ν number along the line connecting A and B decreases and the isotropic distribution of ν is destroyed, which leads not only to a change in momentum P (producing a net ν flux and a net force Fp) but also to a change in energy E or rest mass m (producing a net force Fm), because in QFT the rest mass is not the bare mass but the renormalized physical mass, which contains ν with energy. From the definition of force, F ≡ d(mv)/dt = m(dv/dt) + v(dm/dt) = Fp + Fm (1), the net force (quasi-Casimir pressure of the weak interaction) on A (or B) is F_Q = Fp + Fm = -K(mM/r^2)(r/r + v/c) (2). According to the change in masses caused by Bondi's inductive transfer of energy in GR (H. Bondi, Proc. R. Soc. London A 427, 249, 1990) and Eq. (1), a new gravitational formula is deduced: F_G = Fp + Fm = -G(mM/r^2)(r/r + v/c) (3). F_G is equivalent to Einstein's equation. Then we can solve multi-body gravitational problems. K calculated from the weak-electromagnetism unified theory (W-EUT) has the same order of magnitude as the experimental gravitational constant G. F_G and F_Q act as a bridge joining QFT and GR. If K ≡ G, gravitational theory would be merged into W-EUT. The gravitational laws predicted by F_G and F_Q are identical except that F_Q has quantum effects but F_G has not, and F_G has the Lense-Thirring effect but F_Q has not. The change in masses of A and B caused by the nonlinearity of Einstein's equation or by mass renormalization of QFT will influence their forces on a third object C (as self-shielding effect of gravities
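
    For readability, the abstract's plain-text equations (1)-(3) can be transcribed into LaTeX as follows, reading r/r as the unit vector along r; this is only a notational restatement of the formulas above.

      \mathbf{F} \equiv \frac{d(m\mathbf{v})}{dt} = m\,\frac{d\mathbf{v}}{dt} + \mathbf{v}\,\frac{dm}{dt} = \mathbf{F}_p + \mathbf{F}_m \qquad (1)

      \mathbf{F}_Q = \mathbf{F}_p + \mathbf{F}_m = -K\,\frac{mM}{r^2}\left(\frac{\mathbf{r}}{r} + \frac{\mathbf{v}}{c}\right) \qquad (2)

      \mathbf{F}_G = \mathbf{F}_p + \mathbf{F}_m = -G\,\frac{mM}{r^2}\left(\frac{\mathbf{r}}{r} + \frac{\mathbf{v}}{c}\right) \qquad (3)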

  5. An optimal nonorthogonal separation of the anisotropic Gaussian convolution filter.

    PubMed

    Lampert, Christoph H; Wirjadi, Oliver

    2006-11-01

    We give an analytical and geometrical treatment of what it means to separate a Gaussian kernel along arbitrary axes in R(n), and we present a separation scheme that allows us to efficiently implement anisotropic Gaussian convolution filters for data of arbitrary dimensionality. Based on our previous analysis we show that this scheme is optimal with regard to the number of memory accesses and interpolation operations needed. The proposed method relies on nonorthogonal convolution axes and works completely in image space. Thus, it avoids the need for a fast Fourier transform (FFT)-subroutine. Depending on the accuracy and speed requirements, different interpolation schemes and methods to implement the one-dimensional Gaussian (finite impulse response and infinite impulse response) can be integrated. Special emphasis is put on analyzing the performance and accuracy of the new method. In particular, we show that without any special optimization of the source code, it can perform anisotropic Gaussian filtering faster than methods relying on the FFT.
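
    As a point of reference for what the separation scheme avoids, the following numpy/scipy sketch builds a 2-D anisotropic Gaussian kernel from a covariance matrix and applies it by dense 2-D convolution; it is a brute-force baseline under illustrative parameter choices, not the paper's nonorthogonal separated filter.

      import numpy as np
      from scipy.signal import convolve2d

      def anisotropic_gaussian_kernel(cov, radius):
          """Sample a 2-D anisotropic Gaussian with covariance `cov` on a
          (2*radius+1)^2 grid and normalize it to unit sum."""
          yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          pts = np.stack([xx.ravel(), yy.ravel()])          # 2 x N grid coordinates
          prec = np.linalg.inv(cov)
          k = np.exp(-0.5 * np.einsum("in,ij,jn->n", pts, prec, pts))
          k = k.reshape(xx.shape)
          return k / k.sum()

      # Covariance of a filter elongated along a 30-degree axis (illustrative).
      theta = np.deg2rad(30.0)
      R = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
      cov = R @ np.diag([6.0**2, 2.0**2]) @ R.T

      image = np.random.rand(128, 128)
      kernel = anisotropic_gaussian_kernel(cov, radius=15)
      smoothed = convolve2d(image, kernel, mode="same", boundary="symm")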

  6. Thermalnet: a Deep Convolutional Network for Synthetic Thermal Image Generation

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Gorbatsevich, V. S.; Mizginov, V. A.

    2017-05-01

    Deep convolutional neural networks have dramatically changed the landscape of modern computer vision. Nowadays, methods based on deep neural networks show the best performance among image recognition and object detection algorithms. While the polishing of network architectures has received a lot of scholarly attention, from a practical point of view the preparation of a large image dataset for successful training of a neural network has become one of the major challenges. This challenge is particularly profound for image recognition in wavelengths lying outside the visible spectrum. For example, no infrared or radar image datasets large enough for successful training of a deep neural network are available to date in the public domain. Recent advances in deep neural networks prove that they are also capable of arbitrary image transformations such as super-resolution image generation, grayscale image colorisation and imitation of the style of a given artist. Thus a natural question arises: how can deep neural networks be used to augment existing large image datasets? This paper is focused on the development of the Thermalnet deep convolutional neural network for augmentation of existing large visible image datasets with synthetic thermal images. The Thermalnet network architecture is inspired by colorisation deep neural networks.

  7. Trainable Convolution Filters and Their Application to Face Recognition.

    PubMed

    Kumar, Ritwik; Banerjee, Arunava; Vemuri, Baba C; Pfister, Hanspeter

    2012-07-01

    In this paper, we present a novel image classification system that is built around a core of trainable filter ensembles that we call Volterra kernel classifiers. Our system treats images as a collection of possibly overlapping patches and is composed of three components: (1) A scheme for single-patch classification that seeks a smooth, possibly nonlinear, functional mapping of the patches into a range space, where patches of the same class are close to one another while patches from different classes are far apart, in the L_2 sense. This mapping is accomplished using trainable convolution filters (or Volterra kernels), where the convolution kernel can be of any shape or order. (2) Given a corpus of Volterra classifiers with various kernel orders and shapes for each patch, a boosting scheme for automatically selecting the best weighted combination of the classifiers to achieve a higher per-patch classification rate. (3) A scheme for aggregating, via voting, the classification information obtained for each patch in order to classify the parent image. We demonstrate the effectiveness of the proposed technique using face recognition as an application area and provide extensive experiments on the Yale, CMU PIE, Extended Yale B, Multi-PIE, and MERL Dome benchmark face data sets. We call the Volterra kernel classifiers applied to face recognition Volterrafaces. We show that our technique, which falls into the broad class of embedding-based face image discrimination methods, consistently outperforms various state-of-the-art methods in the same category.
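
    The key algebraic trick behind Volterra kernel classifiers is that a second-order (quadratic) filter response is linear in a lifted patch representation. Below is a toy numpy sketch of this lifting, with a least-squares fit standing in for the margin-based training used in the paper and with random patches and labels as placeholders.

      import numpy as np

      def volterra_lift(patch):
          """Lift a flattened patch p to [p, vec(p p^T)]: the second-order Volterra
          response w1.p + p^T W2 p is then a linear function of the lifted vector."""
          p = patch.ravel()
          return np.concatenate([p, np.outer(p, p).ravel()])

      rng = np.random.default_rng(0)
      patches = rng.standard_normal((200, 5, 5))   # toy 5x5 patches
      labels = rng.integers(0, 2, size=200)        # toy binary labels

      X = np.stack([volterra_lift(p) for p in patches])
      # Least-squares "training" of the combined first- and second-order weights.
      w, *_ = np.linalg.lstsq(X, 2.0 * labels - 1.0, rcond=None)
      scores = X @ w
      pred = (scores > 0).astype(int)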

  8. Enhancing Neutron Beam Production with a Convoluted Moderator

    SciTech Connect

    Iverson, Erik B; Baxter, David V; Muhrer, Guenter; Ansell, Stuart; Gallmeier, Franz X; Dalgliesh, Robert; Lu, Wei; Kaiser, Helmut

    2014-10-01

    We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally-enhanced neutron beam source, improving beam effectiveness over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.

  9. Generalized type II hybrid ARQ scheme using punctured convolutional coding

    NASA Astrophysics Data System (ADS)

    Kallel, Samir; Haccoun, David

    1990-11-01

    A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from the best rate-1/2 codes. The construction method is rather simple and straightforward, yet still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases; as the channel degrades, it tends to merge with the throughput of rate-1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradation.
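
    A minimal Python sketch of the puncturing idea follows: a rate-1/2 mother code (generators 7, 5 octal) is encoded once, and higher-rate codes are obtained by deleting coded bits according to puncturing patterns whose kept positions are nested, which is what makes the family rate-compatible. The generator choice and the patterns are illustrative, not those constructed in the paper.

      import numpy as np

      G = [0b111, 0b101]          # generators (7, 5) octal for a memory-2, rate-1/2 code

      def conv_encode(bits, gens=G, K=3):
          """Feed-forward rate-1/2 convolutional encoder."""
          state, out = 0, []
          for b in bits:
              state = ((state << 1) | int(b)) & ((1 << K) - 1)
              for g in gens:
                  out.append(bin(state & g).count("1") % 2)
          return np.array(out, dtype=np.uint8)

      def puncture(coded, pattern):
          """Delete coded bits where the puncturing pattern is 0. The pattern has one
          row per encoder output stream and one column per position in the period."""
          pat = np.asarray(pattern, dtype=bool)
          mask = pat.T.ravel()                    # transmit order: both streams per input bit
          reps = int(np.ceil(len(coded) / mask.size))
          return coded[np.tile(mask, reps)[:len(coded)]]

      msg = np.random.randint(0, 2, 30)
      mother   = conv_encode(msg)                                # rate 1/2
      rate_2_3 = puncture(mother, [[1, 1, 1, 1], [1, 0, 1, 0]])  # keep 6 of 8 bits per period
      rate_4_5 = puncture(mother, [[1, 1, 1, 1], [1, 0, 0, 0]])  # keep 5 of 8 bits per period
      # Rate compatibility: every position kept at rate 4/5 is also kept at rate 2/3.

    In the generalized type-II hybrid ARQ setting, a retransmission therefore only needs to send the bits that were punctured at the previous, higher rate.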

  10. Classifications of Multispectral Colorectal Cancer Tissues Using Convolution Neural Network

    PubMed Central

    Haj-Hassan, Hawraa; Chaddad, Ahmad; Harkouss, Youssef; Desrosiers, Christian; Toews, Matthew; Tanougast, Camel

    2017-01-01

    Background: Colorectal cancer (CRC) is the third most common cancer among men and women. Its diagnosis in early stages, typically done through the analysis of colon biopsy images, can greatly improve the chances of a successful treatment. This paper proposes to use convolution neural networks (CNNs) to predict three tissue types related to the progression of CRC: benign hyperplasia (BH), intraepithelial neoplasia (IN), and carcinoma (Ca). Methods: Multispectral biopsy images of thirty CRC patients were retrospectively analyzed. Images of tissue samples were divided into three groups, based on their type (10 BH, 10 IN, and 10 Ca). An active contour model was used to segment image regions containing pathological tissues. Tissue samples were classified using a CNN containing convolution, max-pooling, and fully-connected layers. Available tissue samples were split into a training set, for learning the CNN parameters, and a test set, for evaluating its performance. Results: An accuracy of 99.17% was obtained from segmented image regions, outperforming existing approaches based on traditional feature extraction and classification techniques. Conclusions: Experimental results demonstrate the effectiveness of CNNs for the classification of CRC tissue types, in particular when using presegmented regions of interest. PMID:28400990
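
    A minimal PyTorch sketch of a CNN with the three layer types named in the abstract (convolution, max-pooling, fully-connected) applied to presegmented multispectral regions; the channel counts, input size and layer sizes are illustrative assumptions, not the authors' configuration.

      import torch
      import torch.nn as nn

      class TissueCNN(nn.Module):
          """Conv / max-pool / fully-connected classifier for the three tissue classes
          (BH, IN, Ca); the 16-band multispectral input and layer widths are illustrative."""
          def __init__(self, in_channels=16, n_classes=3):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(),
                  nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
                  nn.Linear(128, n_classes),
              )

          def forward(self, x):            # x: (batch, 16, 64, 64) presegmented regions
              return self.classifier(self.features(x))

      model = TissueCNN()
      logits = model(torch.randn(4, 16, 64, 64))   # -> (4, 3) class scores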

  11. Selective Convolutional Descriptor Aggregation for Fine-Grained Image Retrieval.

    PubMed

    Wei, Xiu-Shen; Luo, Jian-Hao; Wu, Jianxin; Zhou, Zhi-Hua

    2017-03-27

    Deep convolutional neural network models pretrained for the ImageNet classification task have been successfully adopted for tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the purely unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone retrieve in the unsupervised setting. We propose the Selective Convolutional Descriptor Aggregation (SCDA) method. SCDA first localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and reduced in dimensionality into a short feature vector using the best practices we found. SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained datasets confirm the effectiveness of SCDA for fine-grained image retrieval. In addition, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval datasets, SCDA achieves retrieval results comparable with state-of-the-art general image retrieval approaches.
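
    A simplified numpy reading of the descriptor-selection and aggregation steps: sum a convolutional layer's activations over channels, threshold at the mean to obtain a rough object mask, keep only the descriptors at masked positions, and pool them into one L2-normalized vector. The layer size and the thresholding rule here are illustrative, not guaranteed to match the paper's exact procedure.

      import numpy as np

      def scda_descriptor(feature_maps):
          """Aggregate conv activations (C, H, W) into a short unsupervised descriptor:
          keep spatial positions whose summed activation exceeds the mean (a rough
          object mask), then average- and max-pool the selected C-dim descriptors."""
          C, H, W = feature_maps.shape
          aggregation = feature_maps.sum(axis=0)            # (H, W)
          mask = aggregation > aggregation.mean()           # discard background positions
          selected = feature_maps[:, mask]                  # (C, n_selected)
          if selected.size == 0:
              selected = feature_maps.reshape(C, -1)
          desc = np.concatenate([selected.mean(axis=1), selected.max(axis=1)])
          return desc / (np.linalg.norm(desc) + 1e-12)      # L2-normalize for retrieval

      fmap = np.random.rand(512, 14, 14)   # e.g. a VGG-like last conv/pool output
      d = scda_descriptor(fmap)            # 1024-D descriptor, no labels needed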

  12. Coronary artery calcification (CAC) classification with deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Xiuming; Wang, Shice; Deng, Yufeng; Chen, Kuan

    2017-03-01

    Coronary artery calcification (CAC) is a typical marker of coronary artery disease, which is one of the biggest causes of mortality in the U.S. This study evaluates the feasibility of using a deep convolutional neural network (DCNN) to automatically detect CAC in X-ray images. 1768 posteroanterior (PA) view chest X-ray images from Sichuan Province People's Hospital, China, were collected retrospectively. Each image is associated with a corresponding diagnostic report written by a trained radiologist (907 normal, 861 diagnosed with CAC). One quarter of the images were randomly selected as test samples; the rest were used as training samples. DCNN models consisting of 2, 4, 6, and 8 convolutional layers were designed using blocks of pre-designed CNN layers. Each block was implemented in Theano with Graphics Processing Units (GPUs). Human-in-the-loop learning was also performed on a subset of 165 images in which arteries were framed by trained physicians. The results from the DCNN models were compared to the diagnostic reports. The average diagnostic accuracies for models with 2, 4, 6, and 8 layers were 0.85, 0.87, 0.88, and 0.89, respectively. The areas under the curve (AUC) were 0.92, 0.95, 0.95, and 0.96. As the models grew deeper, neither the AUC nor the diagnostic accuracy changed in a statistically significant way. The results of this study indicate that DCNN models have promising potential in the field of intelligent medical image diagnosis practice.
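
    A hedged PyTorch sketch of the depth comparison described above: a helper builds otherwise identical DCNNs with 2, 4, 6 or 8 convolutional layers. Channel counts, pooling placement and input size are illustrative, not the study's exact blocks (which were implemented in Theano).

      import torch
      import torch.nn as nn

      def build_dcnn(n_conv_layers, in_channels=1, n_classes=2):
          """Stack n_conv_layers conv/ReLU blocks (pooling after every second block)
          followed by a small classifier head."""
          layers, ch = [], in_channels
          for i in range(n_conv_layers):
              out_ch = min(32 * 2 ** (i // 2), 256)
              layers += [nn.Conv2d(ch, out_ch, 3, padding=1), nn.ReLU()]
              if i % 2 == 1:
                  layers += [nn.MaxPool2d(2)]
              ch = out_ch
          layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, n_classes)]
          return nn.Sequential(*layers)

      for depth in (2, 4, 6, 8):
          model = build_dcnn(depth)
          out = model(torch.randn(1, 1, 256, 256))   # toy chest X-ray sized input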

  13. Transforming Musical Signals through a Genre Classifying Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Geng, S.; Ren, G.; Ogihara, M.

    2017-05-01

    Convolutional neural networks (CNNs) have been successfully applied on both discriminative and generative modeling for music-related tasks. For a particular task, the trained CNN contains information representing the decision making or the abstracting process. One can hope to manipulate existing music based on this 'informed' network and create music with new features corresponding to the knowledge obtained by the network. In this paper, we propose a method to utilize the stored information from a CNN trained on musical genre classification task. The network was composed of three convolutional layers, and was trained to classify five-second song clips into five different genres. After training, randomly selected clips were modified by maximizing the sum of outputs from the network layers. In addition to the potential of such CNNs to produce interesting audio transformation, more information about the network and the original music could be obtained from the analysis of the generated features since these features indicate how the network 'understands' the music.
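
    The transformation step can be sketched as gradient ascent on the input clip so as to maximize the summed outputs of the trained layers. The toy three-layer network, spectrogram shape and optimizer settings below are illustrative stand-ins for the genre classifier described in the abstract, which would normally be trained first.

      import torch
      import torch.nn as nn

      # Toy stand-in for a genre-classification CNN over spectrogram clips;
      # three conv layers as in the abstract, exact sizes are illustrative.
      net = nn.Sequential(
          nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
      )

      clip = torch.randn(1, 1, 128, 216, requires_grad=True)  # spectrogram of a 5 s clip
      optimizer = torch.optim.Adam([clip], lr=0.05)

      for _ in range(100):
          optimizer.zero_grad()
          activations = net(clip)
          loss = -activations.sum()      # minimize the negative = ascend the summed outputs
          loss.backward()
          optimizer.step()

      # `clip` is now a modified spectrogram that excites the network's layers more
      # strongly; inverting it back to audio would yield the transformed signal.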

  14. Multichannel Convolutional Neural Network for Biological Relation Extraction

    PubMed Central

    Quan, Chanqin; Sun, Xiao; Bai, Wenjun

    2016-01-01

    The plethora of biomedical relations embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical work was restricted to traditional machine learning techniques. However, these methods are susceptible to the issues of “vocabulary gap” and data sparseness, and they do not allow the feature extraction process to be automated. To address the aforementioned issues, in this work we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model makes the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) the need for manual feature engineering is obviated by automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall f-score of 70.2%, compared to 67.0% for a standard linear SVM based system, on the DDIExtraction 2013 challenge dataset. For the PPI task, we evaluated our system on the AIMed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in f-score. PMID:28053977
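
    A minimal PyTorch sketch of the multichannel idea: several word-embedding versions of the same sentence are stacked as input channels to a single convolutional layer, followed by max-over-time pooling and a classifier. Vocabulary size, embedding dimension, filter width and the number of classes are illustrative, not the paper's settings.

      import torch
      import torch.nn as nn

      class MCCNN(nn.Module):
          """Text CNN whose input channels are different word-embedding versions of
          the same sentence (e.g. five embeddings fused as channels)."""
          def __init__(self, n_channels=5, vocab=10000, dim=100, n_classes=2):
              super().__init__()
              self.embeddings = nn.ModuleList(
                  [nn.Embedding(vocab, dim) for _ in range(n_channels)])
              self.conv = nn.Conv2d(n_channels, 128, kernel_size=(3, dim))  # trigram filters
              self.fc = nn.Linear(128, n_classes)

          def forward(self, tokens):                      # tokens: (batch, seq_len)
              chans = torch.stack([emb(tokens) for emb in self.embeddings], dim=1)
              feats = torch.relu(self.conv(chans)).squeeze(3)   # (batch, 128, seq_len-2)
              pooled = feats.max(dim=2).values                  # max-over-time pooling
              return self.fc(pooled)

      model = MCCNN()
      logits = model(torch.randint(0, 10000, (8, 40)))   # 8 sentences of 40 tokens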

  15. Classification of Histology Sections via Multispectral Convolutional Sparse Coding.

    PubMed

    Zhou, Yin; Chang, Hang; Barner, Kenneth; Spellman, Paul; Parvin, Bahram

    2014-06-01

    Image-based classification of histology sections plays an important role in predicting clinical outcomes. However, this task is very challenging due to the presence of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state). In the field of biomedical imaging, for the purposes of visualization and/or quantification, different stains are typically used for different targets of interest (e.g., cellular/subcellular events), which generates multi-spectrum data (images) through various types of microscopes and, as a result, provides the possibility of learning biological-component-specific features by exploiting multispectral information. We propose a multispectral feature learning model that automatically learns a set of convolution filter banks from separate spectra to efficiently discover the intrinsic tissue morphometric signatures, based on convolutional sparse coding (CSC). The learned feature representations are then aggregated through the spatial pyramid matching framework (SPM) and finally classified using a linear SVM. The proposed system has been evaluated using two large-scale tumor cohorts, collected from The Cancer Genome Atlas (TCGA). Experimental results show that the proposed model (1) outperforms systems utilizing sparse coding for unsupervised feature learning (e.g., PSD-SPM [5]) and (2) is competitive with systems built upon features with biological prior knowledge (e.g., SMLSPM [4]).

  16. Deep Convolutional Neural Networks for large-scale speech tasks.

    PubMed

    Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana

    2015-04-01

    Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations which exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks. Specifically, we focus on how many convolutional layers are needed, what is an appropriate number of hidden units, and what is the best pooling strategy. Second, we investigate how to incorporate speaker-adapted features, which cannot directly be modeled by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allows for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results on these 3 tasks. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Convolutional Neural Network Based Fault Detection for Rotating Machinery

    NASA Astrophysics Data System (ADS)

    Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie

    2016-09-01

    Vibration analysis is a well-established technique for condition monitoring of rotating machines, as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually engineered features, such as the ball pass frequencies of the raceway, RMS, kurtosis and crest, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults such as outer-raceway faults and lubrication degradation are considered, but also healthy bearings and rotor imbalance are included. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.
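
    A minimal PyTorch sketch of feature learning from raw vibration samples with a 1-D convolutional network, in contrast to classifying hand-engineered statistics; the layer sizes, window length and the four example machine conditions are illustrative assumptions, not the authors' architecture.

      import torch
      import torch.nn as nn

      class BearingCNN(nn.Module):
          """1-D CNN that learns features directly from raw accelerometer windows
          instead of hand-engineered statistics (RMS, kurtosis, ball-pass frequencies);
          the four example conditions (healthy, outer-raceway fault, lubrication
          degradation, imbalance) are illustrative."""
          def __init__(self, n_conditions=4):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
                  nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.AdaptiveMaxPool1d(1), nn.Flatten(),
                  nn.Linear(32, n_conditions),
              )

          def forward(self, x):            # x: (batch, 1, n_samples) vibration window
              return self.net(x)

      model = BearingCNN()
      scores = model(torch.randn(4, 1, 4096))   # four 4096-sample vibration windows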

  18. Single-Cell Phenotype Classification Using Deep Convolutional Neural Networks.

    PubMed

    Dürr, Oliver; Sick, Beate

    2016-10-01

    Deep learning methods are currently outperforming traditional state-of-the-art computer vision algorithms in diverse applications and recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods for high-content screening-based phenotype classification. We trained a deep learning classifier in the form of convolutional neural networks with approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and carried out by the convolutional network, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%. © 2016 Society for Laboratory Automation and Screening.

  19. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks

    PubMed Central

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-01-01

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction. PMID:28672867
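
    A compact PyTorch sketch of the SRCN idea: a small CNN encodes each time step's grid image of network speeds, and an LSTM models the temporal sequence of those encodings. The grid size, hidden dimension and per-link output head are illustrative, not the paper's exact architecture for the 278-link Beijing network.

      import torch
      import torch.nn as nn

      class SRCN(nn.Module):
          """Spatiotemporal sketch: CNN per time step + LSTM over time."""
          def __init__(self, grid=32, hidden=128, n_links=278):
              super().__init__()
              self.cnn = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Flatten(),
              )
              feat = 32 * (grid // 4) ** 2
              self.lstm = nn.LSTM(feat, hidden, batch_first=True)
              self.head = nn.Linear(hidden, n_links)      # next-step speed per link

          def forward(self, x):                           # x: (batch, time, 1, grid, grid)
              b, t = x.shape[:2]
              feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
              out, _ = self.lstm(feats)
              return self.head(out[:, -1])                # predict from the last time step

      model = SRCN()
      pred = model(torch.randn(2, 10, 1, 32, 32))         # 2 sequences of 10 grid "frames"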

  20. Multiple deep convolutional neural networks averaging for face alignment

    NASA Astrophysics Data System (ADS)

    Zhang, Shaohua; Yang, Hua; Yin, Zhouping

    2015-05-01

    Face alignment is critical for face recognition, and the deep learning-based method shows promise for solving such issues, given that competitive results are achieved on benchmarks with additional benefits, such as dispensing with handcrafted features and initial shape. However, most existing deep learning-based approaches are complicated and quite time-consuming during training. We propose a compact face alignment method for fast training without decreasing its accuracy. Rectified linear units are employed, allowing all networks to converge approximately five times faster than with tanh neurons. A deep convolutional neural network (DCNN) with eight learnable layers, based on local response normalization and a padding convolutional layer (PCL), is designed to provide reliable initial values during prediction. A model combination scheme is presented to further reduce errors, while showing that only two network architectures and hyperparameter selection procedures are required in our approach. A three-level cascaded system is ultimately built based on the DCNNs and the model combination scheme. Extensive experiments validate the effectiveness of our method and demonstrate comparable accuracy with state-of-the-art methods on the BioID, labeled face parts in the wild, and Helen datasets.