Science.gov

Sample records for convolution superposition calculations

  1. A convolution-superposition dose calculation engine for GPUs

    SciTech Connect

    Hissoiny, Sami; Ozell, Benoit; Despres, Philippe

    2010-03-15

    Purpose: Graphics processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, and kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported, as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single-threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x, or 0.58 versus 16.6 s per beam in single-threaded mode. An acceleration factor of 46x was obtained for the total energy released per mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions were also obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical to obtaining significant acceleration factors. These results can potentially have a significant impact on complex dose delivery techniques requiring intensive dose calculations, such as intensity-modulated radiation therapy (IMRT) and arc therapy. They are also relevant for adaptive radiation therapy, where dose results must be obtained rapidly.
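
    The two-step structure described here, ray-traced TERMA followed by kernel superposition, is common to all convolution/superposition engines. The following minimal 1D NumPy sketch illustrates the idea only; the attenuation coefficient, kernel shape, and grids are toy stand-ins, not the authors' GPU implementation:

        import numpy as np

        # Illustrative 1D convolution/superposition: TERMA from ray-traced
        # attenuation, then dose by superposing an energy deposition kernel.
        mu = 0.005                                  # assumed attenuation coefficient, 1/mm
        depth = np.arange(0.0, 300.0, 1.0)          # depth grid, mm
        density = np.ones_like(depth)               # water phantom
        density[100:150] = 0.25                     # lung-like slab

        # Radiological (water-equivalent) depth accumulated along the ray.
        radiological_depth = np.cumsum(density) * 1.0
        terma = mu * density * np.exp(-mu * radiological_depth)

        # Toy point-spread kernel standing in for a Monte Carlo kernel.
        r = np.arange(-50.0, 51.0)
        kernel = np.exp(-np.abs(r) / 10.0)
        kernel /= kernel.sum()

        # Superposition: every TERMA voxel spreads its energy via the kernel.
        dose = np.convolve(terma, kernel, mode="same")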

  2. Using a photon phase-space source for convolution/superposition dose calculations in radiation therapy

    NASA Astrophysics Data System (ADS)

    Naqvi, Shahid A.; D'Souza, Warren D.; Earl, Matthew A.; Ye, Sung-Joon; Shih, Rompin; Li, X. Allen

    2005-09-01

    For a given linac design, the dosimetric characteristics of a photon beam are determined uniquely by the energy and radial distributions of the electron beam striking the x-ray target. However, in the usual commissioning of a beam from measured data, a large number of variables can be independently tuned, making it difficult to derive a unique and self-consistent beam model. For example, the measured dosimetric penumbra in water may be attributed in various proportions to the lateral secondary electron range, the focal spot size and the transmission through the tips of a non-divergent collimator; the head-scatter component in the tails of the transverse profiles may not be easy to resolve from phantom scatter and head leakage; and the head-scatter tails corresponding to a certain extra-focal source model may not agree self-consistently with in-air output factors measured on the central axis. To reduce the number of adjustable variables in beam modelling, we replace the focal and extra-focal sources with a single phase-space plane scored just above the highest adjustable collimator in an EGS/BEAM simulation of the linac. The phase-space plane is then used as the photon source in a stochastic convolution/superposition dose engine. A photon sampled from the uncollimated phase-space plane is first propagated through an arbitrary collimator arrangement and then interacted in the simulation phantom. Energy deposition kernel rays are then randomly issued from the interaction points and dose is deposited along these rays. The electrons in the phase-space file are used to account for electron contamination. 6 MV and 18 MV photon beams from an Elekta SL linac are used as representative examples. Except for small corrections for monitor backscatter and collimator forward scatter for large field sizes (<0.5% for <20 × 20 cm2 field sizes), we found that the use of a single phase-space photon source provides accurate and self-consistent results for both relative and absolute dose.

  3. A 3D photon superposition/convolution algorithm and its foundation on results of Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Ulmer, W.; Pyyry, J.; Kaissl, W.

    2005-04-01

    Based on previous publications on a triple Gaussian analytical pencil beam model and on Monte Carlo calculations using the codes GEANT-Fluka (versions 95, 98, 2002) and BEAMnrc/EGSnrc, a three-dimensional (3D) superposition/convolution algorithm for photon beams (6 MV, 18 MV) is presented. Tissue heterogeneity is taken into account by electron density information from CT images. A clinical beam consists of a superposition of divergent pencil beams. A slab geometry was used as a phantom model to test computed results against measurements. An essential result is the existence of further dose build-up and build-down effects in the domain of density discontinuities. These effects have increasing magnitude for field sizes ≤5.5 cm2 and densities ≤0.25 g cm-3, in particular with regard to field sizes considered in stereotaxy. They could be confirmed by measurements (mean standard deviation 2%). A practical impact is the dose distribution at transitions from bone to soft tissue, lung or cavities. This work has partially been presented at WC 2003, Sydney.

  4. Dose calculations using convolution and superposition principles: the orientation of dose spread kernels in divergent x-ray beams.

    PubMed

    Sharpe, M B; Battista, J J

    1993-01-01

    The convolution/superposition method of dose calculation has the potential to become the preferred technique for radiotherapy treatment planning. When this approach is used for therapeutic x-ray beams, the dose spread kernels are usually aligned parallel to the central axis of the incident beam. While this reduces the computational burden, it is more rigorous to tilt the kernel axis to align it with the diverging beam rays that define the incident direction of primary photons. We have assessed the validity of the parallel kernel approximation by computing dose distributions using parallel and tilted kernels for monoenergetic photons of 2, 6, and 10 MeV; source-to-surface distances (SSDs) of 50, 80, and 100 cm; and field sizes of 5 x 5, 15 x 15, and 30 x 30 cm2. Over most of the irradiated volume, the parallel kernel approximation yields results that differ from tilted kernel calculations by 3% or less for SSDs greater than 80 cm. Under extreme conditions of a short SSD, a large field size, and high incident photon energy, the parallel kernel approximation results in discrepancies that may be clinically unacceptable. For 10-MeV photons, we have observed that the parallel kernel approximation can overestimate the dose by up to 4.4% of the maximum on the central axis for a field size of 30 x 30 cm2 applied with an SSD of 50 cm. Very localized dose underestimations of up to 27% of the maximum dose occurred in the penumbral region of a 30 x 30-cm2 field of 10-MeV photons applied with an SSD of 50 cm. PMID:8309441

  5. Accuracy of patient dose calculation for lung IMRT: A comparison of Monte Carlo, convolution/superposition, and pencil beam computations

    SciTech Connect

    Vanderstraeten, Barbara; Reynaert, Nick; Paelinck, Leen; Madani, Indira; Wagter, Carlos de; Gersem, Werner de; Neve, Wilfried de; Thierens, Hubert

    2006-09-15

    The accuracy of dose computation within the lungs depends strongly on the performance of the calculation algorithm in regions of electronic disequilibrium that arise near tissue inhomogeneities with large density variations. There is a lack of data evaluating the performance of highly developed analytical dose calculation algorithms compared to Monte Carlo computations in a clinical setting. We compared full Monte Carlo calculations (performed by our Monte Carlo dose engine MCDE) with two different commercial convolution/superposition (CS) implementations (Pinnacle-CS and Helax-TMS's collapsed cone model Helax-CC) and one pencil beam algorithm (Helax-TMS's pencil beam model Helax-PB) for 10 intensity modulated radiation therapy (IMRT) lung cancer patients. Treatment plans were created for two photon beam qualities (6 and 18 MV). For each dose calculation algorithm, patient, and beam quality, the following set of clinically relevant dose-volume values was reported: (i) minimal, median, and maximal dose (Dmin, D50, and Dmax) for the gross tumor and planning target volumes (GTV and PTV); (ii) the volume of the lungs (excluding the GTV) receiving at least 20 and 30 Gy (V20 and V30) and the mean lung dose; (iii) the 33rd percentile dose (D33) and Dmax delivered to the heart and the expanded esophagus; and (iv) Dmax for the expanded spinal cord. Statistical analysis was performed by means of one-way analysis of variance for repeated measurements and Tukey pairwise comparison of means. Pinnacle-CS showed an excellent agreement with MCDE within the target structures, whereas the best correspondence for the organs at risk (OARs) was found between Helax-CC and MCDE. Results from Helax-PB were unsatisfying for both targets and OARs. Additionally, individual patient results were analyzed. Within the target structures, deviations above 5% were found in one patient for the comparison of MCDE and Helax-CC, while all differences between MCDE and Pinnacle-CS were below 5%.

  6. Accuracy of patient dose calculation for lung IMRT: A comparison of Monte Carlo, convolution/superposition, and pencil beam computations.

    PubMed

    Vanderstraeten, Barbara; Reynaert, Nick; Paelinck, Leen; Madani, Indira; De Wagter, Carlos; De Gersem, Werner; De Neve, Wilfried; Thierens, Hubert

    2006-09-01

    The accuracy of dose computation within the lungs depends strongly on the performance of the calculation algorithm in regions of electronic disequilibrium that arise near tissue inhomogeneities with large density variations. There is a lack of data evaluating the performance of highly developed analytical dose calculation algorithms compared to Monte Carlo computations in a clinical setting. We compared full Monte Carlo calculations (performed by our Monte Carlo dose engine MCDE) with two different commercial convolution/superposition (CS) implementations (Pinnacle-CS and Helax-TMS's collapsed cone model Helax-CC) and one pencil beam algorithm (Helax-TMS's pencil beam model Helax-PB) for 10 intensity modulated radiation therapy (IMRT) lung cancer patients. Treatment plans were created for two photon beam qualities (6 and 18 MV). For each dose calculation algorithm, patient, and beam quality, the following set of clinically relevant dose-volume values was reported: (i) minimal, median, and maximal dose (Dmin, D50, and Dmax) for the gross tumor and planning target volumes (GTV and PTV); (ii) the volume of the lungs (excluding the GTV) receiving at least 20 and 30 Gy (V20 and V30) and the mean lung dose; (iii) the 33rd percentile dose (D33) and Dmax delivered to the heart and the expanded esophagus; and (iv) Dmax for the expanded spinal cord. Statistical analysis was performed by means of one-way analysis of variance for repeated measurements and Tukey pairwise comparison of means. Pinnacle-CS showed an excellent agreement with MCDE within the target structures, whereas the best correspondence for the organs at risk (OARs) was found between Helax-CC and MCDE. Results from Helax-PB were unsatisfying for both targets and OARs. Additionally, individual patient results were analyzed. Within the target structures, deviations above 5% were found in one patient for the comparison of MCDE and Helax-CC, while all differences between MCDE and Pinnacle-CS were below 5%.
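
    The dose-volume values reported in this study (Dmin, D50, Dmax, V20, V30, mean lung dose) are simple reductions of the dose grid over a structure mask. A sketch in Python, with a hypothetical dose grid and mask standing in for real patient data:

        import numpy as np

        def dvh_metrics(dose_gy, mask):
            # Dose-volume reductions over one structure.
            d = dose_gy[mask]
            return {
                "Dmin": d.min(),
                "D50": np.percentile(d, 50),          # median dose
                "Dmax": d.max(),
                "Dmean": d.mean(),                    # e.g. mean lung dose
                "V20": 100.0 * np.mean(d >= 20.0),    # % volume receiving >= 20 Gy
                "V30": 100.0 * np.mean(d >= 30.0),
            }

        # Hypothetical dose grid and structure mask.
        rng = np.random.default_rng(0)
        dose = rng.gamma(2.0, 10.0, size=(50, 50, 50))
        lung = np.zeros(dose.shape, dtype=bool)
        lung[10:40, 10:40, 10:40] = True
        print(dvh_metrics(dose, lung))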

  7. Use of convolution/superposition-based treatment planning system for dose calculations in the kilovoltage energy range

    NASA Astrophysics Data System (ADS)

    Alaei, Parham

    2000-11-01

    A number of procedures in diagnostic radiology and cardiology make use of long exposures to x rays from fluoroscopy units. Adverse effects of these long exposure times on the patients' skin have been documented in recent years. These include epilation, erythema, and, in severe cases, moist desquamation and tissue necrosis. Potential biological effects from these exposures to other organs include radiation-induced cataracts and pneumonitis. Although there have been numerous studies to measure or calculate the dose to skin from these procedures, there have only been a handful of studies to determine the dose to other organs. Therefore, there is a need for accurate methods to measure the dose in tissues and organs other than the skin. This research concentrated on devising a method to accurately determine the radiation dose to these tissues and organs. The work was performed in several stages: First, a three-dimensional (3D) treatment planning system used in radiation oncology was modified and complemented to make it usable with the low energies of x rays used in diagnostic radiology. Using the system for low energies required generation of energy deposition kernels using Monte Carlo methods. These kernels were generated using the EGS4 Monte Carlo system of codes and added to the treatment planning system. Following modification, the treatment planning system was evaluated for its calculation accuracy at low energies within homogeneous and heterogeneous media. A study of the effects of lungs and bones on the dose distribution was also performed. The next step was the calculation of dose distributions in humanoid phantoms using this modified system. The system was used to calculate organ doses in these phantoms and the results were compared to those obtained from other methods. These dose distributions can subsequently be used to create dose-volume histograms (DVHs) for internal organs irradiated by these beams.

  8. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    SciTech Connect

    Jacques, Robert; Wong, John; Taylor, Russell; McNutt, Todd

    2011-01-15

    Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high-resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ~24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation.

  9. Fluence-convolution broad-beam (FCBB) dose calculation.

    PubMed

    Lu, Weiguo; Chen, Mingli

    2010-12-01

    IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into the central axis (CAX) component and lateral spread function (LSF) and taking the beam's eye view (BEV), we established a non-voxel and non-beamlet-based dose calculation formula. Both LSF and CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with the LSF followed by ray tracing based on the CAX lookup table with radiological distance and divergence correction, resulting in complexity of O(N³) both spatially and temporally. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited for calculating the iteration dose during IMRT optimization. PMID:21081826
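
    The FCBB pipeline described above, a 2D convolution of the fluence map with a lateral spread function followed by a central-axis lookup, can be sketched as follows; the Gaussian LSF, exponential CAX table, and field geometry are illustrative assumptions rather than the commissioned data the paper derives from CCCS:

        import numpy as np
        from scipy.signal import fftconvolve

        n = 101                                    # BEV grid, 1 mm pixels
        fluence = np.zeros((n, n))
        fluence[30:70, 30:70] = 1.0                # open square field

        # Assumed Gaussian lateral spread function and exponential CAX lookup.
        ax = np.arange(n) - n // 2
        xx, yy = np.meshgrid(ax, ax)
        lsf = np.exp(-(xx**2 + yy**2) / (2.0 * 5.0**2))
        lsf /= lsf.sum()
        cax = lambda rad_depth: np.exp(-0.005 * rad_depth)

        blurred = fftconvolve(fluence, lsf, mode="same")   # the 2D convolution
        dose_plane = blurred * cax(100.0)          # one BEV plane at 100 mm depth
                                                   # (divergence correction omitted)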

  10. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    SciTech Connect

    Huang, Jessie Y.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.; Eklund, David; Childress, Nathan L.

    2013-12-15

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV).
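
    A polyenergetic kernel of the kind investigated here is a spectrum-weighted combination of monoenergetic kernels, with each energy bin weighted by its contribution to TERMA. A sketch under assumed spectrum and kernel shapes (the mass attenuation coefficients are approximate water values):

        import numpy as np

        # Monoenergetic water kernels on a shared radial grid (toy shapes
        # standing in for EGSnrc-generated EDKs).
        r = np.linspace(0.5, 60.0, 120)                         # radius, mm
        energies = np.array([0.5, 1.0, 2.0, 4.0, 6.0])          # MeV bins
        kernels = np.exp(-r[None, :] / (10.0 * energies[:, None]))

        # Assumed relative fluence spectrum and approximate water mass
        # attenuation coefficients (cm^2/g); the weights follow the TERMA
        # contribution of each energy bin.
        spectrum = np.array([0.10, 0.25, 0.35, 0.20, 0.10])
        mu_over_rho = np.array([0.0969, 0.0707, 0.0494, 0.0340, 0.0277])
        w = spectrum * mu_over_rho * energies
        w /= w.sum()

        poly_kernel = w @ kernels      # spectrum-weighted polyenergetic kernel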

  11. Validation of the Pinnacle³ photon convolution-superposition algorithm applied to fast neutron beams.

    PubMed

    Kalet, Alan M; Sandison, George A; Phillips, Mark H; Parvathaneni, Upendra

    2013-01-01

    We evaluate a photon convolution-superposition algorithm used to model a fast neutron therapy beam in a commercial treatment planning system (TPS). The neutron beam modeled was the Clinical Neutron Therapy System (CNTS) fast neutron beam produced by 50 MeV protons on a Be target at our facility, and we implemented the Pinnacle3 dose calculation model for computing neutron doses. Measured neutron data were acquired by an IC30 ion chamber flowing 5 cc/min of tissue equivalent gas. Output factors and profile scans for open and wedged fields were measured according to the Pinnacle physics reference guide recommendations for photon beams in a Wellhofer water tank scanning system. Following the construction of a neutron beam model, computed doses were then generated using beams of 100 monitor units (MUs) incident on a water-equivalent phantom for open and wedged square fields, as well as multileaf collimator (MLC)-shaped irregular fields. We compared Pinnacle dose profiles, central axis doses, and off-axis doses (in irregular fields) with 1) doses computed using the Prism treatment planning system, and 2) doses measured in a water phantom with geometry matching the computation setup. We found that the Pinnacle photon model may be used to model most of the important dosimetric features of the CNTS fast neutron beam. Pinnacle-calculated dose points among open and wedged square fields exhibit dose differences within 3.9 cGy of both Prism and measured doses along the central axis, and within 5 cGy of measurement in the penumbra region. Pinnacle dose point calculations using irregular treatment type fields showed dose differences of up to 9 cGy from measured dose points, although most points of comparison were below 5 cGy. Comparisons of dose points that were chosen from cases planned in both Pinnacle and Prism show an average dose difference of less than 0.6%, except in certain fields which incorporate both wedges and heavy blocking of the central axis.

  12. Monte Carlo evaluation of the convolution/superposition algorithm of Hi-Art tomotherapy in heterogeneous phantoms and clinical cases

    SciTech Connect

    Sterpin, E.; Salvat, F.; Olivera, G.; Vynckier, S.

    2009-05-15

    The reliability of the convolution/superposition (C/S) algorithm of the Hi-Art tomotherapy system is evaluated by using the Monte Carlo model TomoPen, which has already been validated for homogeneous phantoms. The study was performed in three stages. First, measurements with EBT Gafchromic film for a 1.25 × 2.5 cm² field in a heterogeneous phantom consisting of two slabs of polystyrene separated with Styrofoam were compared to simulation results from TomoPen. The excellent agreement found in this comparison justifies the use of TomoPen as the reference for the remaining parts of this work. Second, to allow analysis and interpretation of the results in clinical cases, dose distributions calculated with TomoPen and C/S were compared for a similar phantom geometry, with multiple slabs of various densities. Even in conditions of lack of lateral electronic equilibrium, overall good agreement was obtained between C/S and TomoPen results, with deviations within 3%/2 mm, showing that the C/S algorithm accounts for modifications in secondary electron transport due to the presence of a low density medium. Finally, calculations were performed with TomoPen and C/S of dose distributions in various clinical cases, from large bilateral head and neck tumors to small lung tumors with diameters of <3 cm. To ensure a 'fair' comparison, identical dose calculation grid and dose-volume histogram calculator were used. Very good agreement was obtained for most of the cases, with no significant differences between the DVHs obtained from both calculations. However, deviations of up to 4% for the dose received by 95% of the target volume were found for the small lung tumors. Therefore, the approximations in the C/S algorithm slightly influence the accuracy in small lung tumors even though the C/S algorithm of the tomotherapy system shows very good overall behavior.

  13. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    SciTech Connect

    Neylon, J.; Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.

    2014-10-15

    Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogeneous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. Results: The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy, with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to their voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. Conclusions: The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU-computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.

  14. On the quantification of the dosimetric accuracy of collapsed cone convolution superposition (CCCS) algorithm for small lung volumes using IMRT.

    PubMed

    Calvo, Oscar I; Gutiérrez, Alonso N; Stathakis, Sotirios; Esquivel, Carlos; Papanikolaou, Nikos

    2012-01-01

    Specialized techniques that make use of small field dosimetry are common practice in today's clinics. These new techniques represent a major challenge for treatment planning systems due to the lack of lateral electronic equilibrium. Because of this, it is important that planning systems overcome these difficulties and provide an accurate representation of the true dose. Pinnacle3 is one such planning system. During the IMRT optimization process, the Pinnacle3 treatment planning system allows the user to specify a minimum segment size, which results in multiple beams composed of several subsets of different widths. In this study, the accuracy of Pinnacle3's dose calculation engine, the collapsed cone convolution superposition (CCCS) algorithm, was quantified by Monte Carlo simulations, ionization chamber, and Kodak extended dose range (EDR2) film measurements for 11 SBRT lung patients. Lesions were <3.0 cm in maximal diameter and <27.0 cm3 in volume. The Monte Carlo codes EGSnrc/BEAMnrc and EGS4/MCSIM were used in the comparison. The minimum segment size allowable during optimization had a direct impact on the number of monitor units calculated for each beam. Plans with the smallest minimum segment sizes (0.1 cm2 to 2.0 cm2) had the largest number of MUs. Although PTV coverage remained unaffected, the segment size did have an effect on the dose to the organs at risk. Pinnacle3-calculated PTV mean doses were in agreement with Monte Carlo-calculated mean doses to within 5.6% for all plans. On average, the mean dose difference between Monte Carlo and Pinnacle3 for all 88 plans was 1.38%. The largest discrepancy in maximum dose was 5.8%, noted for one of the plans using a minimum segment size of 1.0 cm2. For minimum dose to the PTV, a maximum discrepancy of 12.5% between Monte Carlo and Pinnacle3 was noted for a plan using a 6.0 cm2 minimum segment size. Agreement between point dose measurements and Pinnacle3-calculated doses was also evaluated.

  15. A 3D pencil-beam-based superposition algorithm for photon dose calculation in heterogeneous media

    NASA Astrophysics Data System (ADS)

    Tillikainen, L.; Helminen, H.; Torsti, T.; Siljamäki, S.; Alakuijala, J.; Pyyry, J.; Ulmer, W.

    2008-07-01

    In this work, a novel three-dimensional superposition algorithm for photon dose calculation is presented. The dose calculation is performed as a superposition of pencil beams, which are modified based on tissue electron densities. The pencil beams have been derived from Monte Carlo simulations, and are separated into lateral and depth-directed components. The lateral component is modeled using exponential functions, which allows accurate modeling of lateral scatter in heterogeneous tissues. The depth-directed component represents the total energy deposited on each plane, which is spread out using the lateral scatter functions. Finally, convolution in the depth direction is applied to account for tissue interface effects. The method can be used with the previously introduced multiple-source model for clinical settings. The method was compared against Monte Carlo simulations in several phantoms including lung- and bone-type heterogeneities. Comparisons were made for several field sizes for 6 and 18 MV energies. The deviations were generally within (2%, 2 mm) of the field central axis dmax. Significantly larger deviations (up to 8%) were found only for the smallest field in the lung slab phantom for 18 MV. The presented method was found to be accurate in a wide range of conditions making it suitable for clinical planning purposes.

  16. FAST-PT: Convolution integrals in cosmological perturbation theory calculator

    NASA Astrophysics Data System (ADS)

    McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A.

    2016-03-01

    FAST-PT calculates 1-loop corrections to the matter power spectrum in cosmology. The code utilizes Fourier methods combined with analytic expressions to reduce the computation time to O(N log N) scaling, where N is the number of grid points in the input linear power spectrum. FAST-PT is extremely fast, enabling mode-coupling integral computations fast enough to embed in Markov chain Monte Carlo parameter estimation.
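
    The N log N scaling comes from evaluating convolutions with the FFT instead of direct summation. A generic demonstration (not FAST-PT's actual kernels) that zero-padded FFT multiplication reproduces direct convolution:

        import numpy as np

        N = 4096
        f = np.random.default_rng(1).normal(size=N)
        g = np.exp(-np.linspace(0.0, 8.0, N))

        direct = np.convolve(f, g)             # O(N^2), length 2N-1
        M = 2 * N - 1                          # zero-pad to avoid circular wrap
        fast = np.fft.irfft(np.fft.rfft(f, M) * np.fft.rfft(g, M), M)  # O(N log N)

        assert np.allclose(direct, fast)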

  17. NOTE: Verification of lung dose in an anthropomorphic phantom calculated by the collapsed cone convolution method

    NASA Astrophysics Data System (ADS)

    Butson, Martin J.; Elferink, Rebecca; Cheung, Tsang; Yu, Peter K. N.; Stokes, Michael; You Quach, Kim; Metcalfe, Peter

    2000-11-01

    Verification of calculated lung dose in an anthropomorphic phantom is performed using two dosimetry media. Dosimetry is complicated by factors such as density variations at slice interfaces and the choice of position on the CT scanning slice to accommodate these factors. Dose in lung for 6 MV and 10 MV anterior-posterior fields was calculated with a collapsed cone convolution method using the ADAC Pinnacle 3D planning system. Variations of up to 5% between doses calculated at the centre and near the edge of the 2 cm phantom slice positioned at the beam central axis were seen, due to the composition of each phantom slice. Validation of dose was performed with LiF thermoluminescent dosimeters (TLDs) and X-Omat V radiographic film. Both dosimetry media produced dose results which agreed closely with calculated results nearest their physical positioning in the phantom. The collapsed cone convolution method accurately calculates dose within inhomogeneous lung regions at 6 MV and 10 MV x-ray energies.

  18. Essentially exact ground-state calculations by superpositions of nonorthogonal Slater determinants

    NASA Astrophysics Data System (ADS)

    Goto, Hidekazu; Kojo, Masashi; Sasaki, Akira; Hirose, Kikuji

    2013-05-01

    An essentially exact ground-state calculation algorithm for few-electron systems based on the superposition of nonorthogonal Slater determinants (SDs) is described, and its convergence properties to ground states are examined. A linear combination of SDs is adopted as the many-electron wave function, and all one-electron wave functions are updated by employing linearly independent multiple correction vectors on the basis of the variational principle. The improvement in convergence performance to the ground state given by the multi-direction search is shown through comparisons with the conventional steepest descent method. The accuracy and applicability of the proposed scheme are also demonstrated by calculations of the potential energy curves of few-electron molecular systems, compared with conventional quantum chemistry calculation techniques.
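
    Minimizing the energy of a linear combination over a nonorthogonal basis leads to a generalized eigenvalue problem involving the overlap matrix. A toy 3-state illustration of that core step (the matrices are made up; the paper's actual update scheme for the one-electron orbitals is more elaborate):

        import numpy as np
        from scipy.linalg import eigh

        # Variational ansatz Psi = sum_i c_i phi_i over nonorthogonal states
        # leads to H c = E S c, with S the overlap matrix (toy numbers).
        H = np.array([[-1.0, -0.4, -0.1],
                      [-0.4, -0.5, -0.3],
                      [-0.1, -0.3,  0.2]])
        S = np.array([[1.0, 0.3, 0.1],
                      [0.3, 1.0, 0.2],
                      [0.1, 0.2, 1.0]])
        E, C = eigh(H, S)              # generalized symmetric eigenproblem
        print("variational ground-state energy:", E[0])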

  19. Final Aperture Superposition Technique applied to fast calculation of electron output factors and depth dose curves

    SciTech Connect

    Faddegon, B.A.; Villarreal-Barajas, J.E.

    2005-11-15

    The Final Aperture Superposition Technique (FAST) is described and applied to accurate, near instantaneous calculation of the relative output factor (ROF) and central axis percentage depth dose curve (PDD) for clinical electron beams used in radiotherapy. FAST is based on precalculation of dose at select points for the two extreme situations of a fully open final aperture and a final aperture with no opening (fully shielded). This technique differs from conventional superposition of dose deposition kernels: the precalculated dose is differential in the position of the electron or photon at the downstream surface of the insert. The calculation for a particular aperture (x-ray jaws or MLC, insert in electron applicator) is done with superposition of the precalculated dose data, using the open field data over the open part of the aperture and the fully shielded data over the remainder. The calculation takes explicit account of all interactions in the shielded region of the aperture except the collimator effect: particles that pass from the open part into the shielded part, or vice versa. For the clinical demonstration, FAST was compared to full Monte Carlo simulation of 10 x 10, 2.5 x 2.5, and 2 x 8 cm² inserts. Dose was calculated to 0.5% precision in 0.4 x 0.4 x 0.2 cm³ voxels, spaced at 0.2 cm depth intervals along the central axis, using detailed Monte Carlo simulation of the treatment head of a commercial linear accelerator for six different electron beams with energies of 6-21 MeV. Each simulation took several hours on a personal computer with a 1.7 GHz processor. The calculation for the individual inserts, done with superposition, was completed in under a second on the same PC. Since simulations for the precalculation are only performed once, higher precision and resolution can be obtained without increasing the calculation time for individual inserts. Fully shielded contributions were largest for small fields and high beam energy, at the surface.
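
    The FAST idea, precalculate differential dose for the fully open and fully shielded extremes once, then superpose them per aperture, reduces each new insert to a masked sum. A cartoon sketch with stand-in arrays:

        import numpy as np

        rng = np.random.default_rng(2)
        n = 64                                    # aperture plane, n x n pixels

        # Precalculated once per beam/point of interest (stand-in values):
        # differential dose from particles crossing each aperture pixel, for
        # the fully open and fully shielded extremes.
        d_open = rng.random((n, n)) * 1e-4
        d_shielded = 0.02 * d_open

        aperture = np.zeros((n, n), dtype=bool)
        aperture[20:44, 16:48] = True             # an insert opening

        # Near-instant per-insert calculation: superpose the two data sets.
        dose_point = d_open[aperture].sum() + d_shielded[~aperture].sum()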

  20. GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation

    PubMed Central

    Zhou, Bo; Yu, Cedric X.; Chen, Danny Z.; Hu, X. Sharon

    2010-01-01

    Purpose: Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans as emerging advanced radiation therapy technologies are exerting ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation so as to make the method practical for day-to-day usage. Methods: Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm is modified to match the GPU hardware architecture. The performance of the authors’ GPU-based implementation on an Nvidia GTX260 card is compared to a multithreaded software implementation on a quad-core system. Results: A speedup in the range of 6.7–11.4× is observed for the clinical cases used. The less than 2% statistical fluctuation also indicates that the accuracy of the authors’ GPU-based implementation is in good agreement with the results from the quad-core CPU implementation. Conclusions: This work shows that GPU is a feasible and cost-efficient solution compared to other alternatives such as using cluster machines or field-programmable gate arrays for satisfying the increasing demands on computation speed and accuracy of dose calculation. But there are also inherent limitations of using GPU for accelerating MC-type applications, which are also analyzed in detail in this article. PMID:21158271

  1. Analyzing astrophysical neutrino signals using realistic nuclear structure calculations and the convolution procedure

    NASA Astrophysics Data System (ADS)

    Tsakstara, V.; Kosmas, T. S.

    2011-12-01

    Convoluted differential and total cross sections of inelastic ν scattering on 128,130Te isotopes are computed from the original cross sections calculated previously using the quasiparticle random-phase approximation. We adopt various spectral distributions for the neutrino energy spectra, such as the common two-parameter Fermi-Dirac and power-law distributions, appropriate for exploring nuclear detector responses to supernova neutrino spectra. We also concentrate on the use of low-energy β-beam neutrinos, originating from boosted β−-radioactive 6He ions, to decompose original supernova (anti)neutrino spectra that are subsequently employed to simulate total cross sections of the reaction 130Te(ν̃,ν̃′)130Te*. The selected nuclei, 128,130Te, are constituents of the multipurpose CUORE and COBRA rare-event detectors. Our present investigation may provide useful information about the efficiency of the Te detector medium of the above experiments in their potential use in supernova neutrino searches.
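
    The convolution (folding) step described here weights the energy-dependent cross section with a normalized source spectrum. A sketch with a two-parameter Fermi-Dirac spectrum and a toy cross section (T, η, and σ(E) are illustrative assumptions):

        import numpy as np

        def fermi_dirac(E, T=4.0, eta=0.0):
            # Two-parameter Fermi-Dirac supernova-neutrino spectrum (T in MeV).
            return E**2 / (1.0 + np.exp(E / T - eta))

        E = np.linspace(0.1, 60.0, 600)            # neutrino energy grid, MeV
        dE = E[1] - E[0]
        f = fermi_dirac(E)
        f /= (f * dE).sum()                        # normalize the spectrum

        sigma = 1e-42 * (E / 10.0) ** 2            # toy cross section, cm^2
        folded = (sigma * f * dE).sum()            # spectrum-averaged <sigma>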

  2. A new approach for dose calculation in targeted radionuclide therapy (TRT) based on collapsed cone superposition: validation with 90Y.

    PubMed

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2014-09-01

    To speed up the absorbed dose (AD) computation while accounting for tissue heterogeneities, a Collapsed Cone (CC) superposition algorithm was developed and validated for 90Y. The superposition was implemented with an Energy Deposition Kernel scaled with the radiological distance, along with CC acceleration. The validation relative to Monte Carlo simulations was performed on 6 phantoms involving soft tissue, lung and bone, a radioembolisation treatment and a simulated bone metastasis treatment. As figures of merit, the relative AD difference (ΔAD) in low gradient regions (LGR), the distance to agreement (DTA) in high gradient regions, and the γ(1%,1 mm) criterion were used for the phantoms. Mean organ doses and γ(3%,3 mm) were used for the patient data. For the semi-infinite sources, ΔAD in LGR was below 1%. DTA was below 0.6 mm. All profiles verified the γ(1%,1 mm) criterion. For both clinical cases, mean doses differed by less than 1% for the considered organs and all profiles verified the γ(3%,3 mm) criterion. The calculation time was below 4 min on a single processor for CC superposition and 40 h on a 40-node cluster for MCNP (10⁸ histories). Our results show that CC superposition is a very promising alternative to MC for 90Y dosimetry, while significantly reducing computation time. PMID:25097006
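
    The γ(1%,1 mm) figure of merit used here combines a dose tolerance and a distance-to-agreement tolerance into a single pass/fail index. A brute-force 1D version, with toy profiles, might look like:

        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.01, dta=1.0):
            # Global gamma: dd = fractional dose tolerance, dta in mm.
            g = np.empty_like(d_ref)
            d_norm = dd * d_ref.max()
            for i, (xi, di) in enumerate(zip(x_ref, d_ref)):
                term = ((x_eval - xi) / dta) ** 2 + ((d_eval - di) / d_norm) ** 2
                g[i] = np.sqrt(term.min())
            return g

        x = np.arange(0.0, 100.0, 0.5)            # 0.5 mm grid, toy profiles
        ref = np.exp(-((x - 50.0) / 15.0) ** 2)
        evl = np.exp(-((x - 50.3) / 15.2) ** 2)
        passing = gamma_1d(x, ref, x, evl) <= 1.0
        print(f"gamma(1%,1mm) pass rate: {100.0 * passing.mean():.1f}%")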

  3. A new approach for dose calculation in targeted radionuclide therapy (TRT) based on collapsed cone superposition: validation with 90Y

    NASA Astrophysics Data System (ADS)

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2014-09-01

    To speed up the absorbed dose (AD) computation while accounting for tissue heterogeneities, a Collapsed Cone (CC) superposition algorithm was developed and validated for 90Y. The superposition was implemented with an Energy Deposition Kernel scaled with the radiological distance, along with CC acceleration. The validation relative to Monte Carlo simulations was performed on 6 phantoms involving soft tissue, lung and bone, a radioembolisation treatment and a simulated bone metastasis treatment. As figures of merit, the relative AD difference (ΔAD) in low gradient regions (LGR), the distance to agreement (DTA) in high gradient regions, and the γ(1%,1 mm) criterion were used for the phantoms. Mean organ doses and γ(3%,3 mm) were used for the patient data. For the semi-infinite sources, ΔAD in LGR was below 1%. DTA was below 0.6 mm. All profiles verified the γ(1%,1 mm) criterion. For both clinical cases, mean doses differed by less than 1% for the considered organs and all profiles verified the γ(3%,3 mm) criterion. The calculation time was below 4 min on a single processor for CC superposition and 40 h on a 40-node cluster for MCNP (10⁸ histories). Our results show that CC superposition is a very promising alternative to MC for 90Y dosimetry, while significantly reducing computation time.

  4. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning.

    PubMed

    Wu, Vincent W C; Tse, Teddy K H; Ho, Cola L M; Yeung, Eric C Y

    2013-01-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) in the Eclipse treatment planning system and multigrid superposition (MGS) in the XiO treatment planning system are two commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6% ± 2.1%) were significantly lower than those of AAA (3.7% ± 2.5%) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
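
    The MAPE figure used throughout this comparison is simply the mean of the absolute percentage dose deviations from the MC reference; a small sketch with hypothetical reference-point doses:

        import numpy as np

        def mape(algo_doses, mc_doses):
            # Mean absolute percentage error relative to Monte Carlo.
            return np.mean(np.abs(100.0 * (algo_doses - mc_doses) / mc_doses))

        mc  = np.array([2.00, 1.95, 1.70, 1.40, 0.90, 0.60])   # hypothetical Gy
        aaa = np.array([2.04, 1.99, 1.78, 1.35, 0.86, 0.63])
        print(f"MAPE vs MC: {mape(aaa, mc):.1f}%")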

  5. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning

    SciTech Connect

    Wu, Vincent W.C.; Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y.

    2013-07-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) in the Eclipse treatment planning system and multigrid superposition (MGS) in the XiO treatment planning system are two commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6% ± 2.1%) were significantly lower than those of AAA (3.7% ± 2.5%) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.

  6. Influence of the superposition approximation on calculated effective dose rates from galactic cosmic rays at aerospace-related altitudes

    NASA Astrophysics Data System (ADS)

    Copeland, Kyle

    2015-07-01

    The superposition approximation was commonly employed in atmospheric nuclear transport modeling until recent years and is incorporated into flight dose calculation codes such as CARI-6 and EPCARD. The useful altitude range for this approximation is investigated using Monte Carlo transport techniques. CARI-7A simulates atmospheric radiation transport of elements H-Fe using a database of precalculated galactic cosmic radiation showers calculated with MCNPX 2.7.0 and is employed here to investigate the influence of the superposition approximation on effective dose rates, relative to full nuclear transport of galactic cosmic ray primary ions. Superposition is found to produce results less than 10% different from nuclear transport at current commercial and business aviation altitudes while underestimating dose rates at higher altitudes. The underestimate sometimes exceeds 20% at approximately 23 km and exceeds 40% at 50 km. Thus, programs employing this approximation should not be used to estimate doses or dose rates for high-altitude portions of the commercial space and near-space manned flights that are expected to begin soon.
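
    The superposition approximation itself is easy to state: a primary ion with mass number A, charge Z, and total energy E is transported as A independent nucleons of energy E/A. A minimal sketch (function name and return format are illustrative):

        # A galactic cosmic ray ion (mass number A, charge Z, total energy E)
        # is replaced by A independent nucleons of energy E/A each.
        def superposition_split(A, Z, total_energy_mev):
            e_per_nucleon = total_energy_mev / A
            return ([("proton", e_per_nucleon)] * Z
                    + [("neutron", e_per_nucleon)] * (A - Z))

        # An iron primary (A=56, Z=26) at 56 GeV becomes 56 nucleons at 1 GeV.
        shower_primaries = superposition_split(56, 26, 56000.0)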

  7. Superposition model calculation of zero-field splitting of Fe3+ in LiTaO3 crystal

    NASA Astrophysics Data System (ADS)

    Yeom, T. H.

    2001-11-01

    The second-order zero-field splitting (ZFS) parameter b20 of the Fe3+ ion centre is calculated at the Li site, the Ta site and the structural vacancy site in the LiTaO3 crystal using the empirical superposition model. The fourth-order ZFS parameters b40, b43 and b4-3 are also calculated at the Li and Ta sites, respectively. The calculated b20 of the Fe3+ ion at the Li site agrees well with the experimental value. It is concluded that Fe3+ replaces the Li+ ion rather than the Ta5+ ion in the LiTaO3 crystal. This conclusion confirms the site assignment from electron nuclear double resonance experiments.
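
    In the Newman superposition model, each ZFS parameter is a sum of single-ligand contributions with a power-law distance dependence. A sketch of the rank-2 axial term with hypothetical ligand coordinates and intrinsic parameters:

        import numpy as np

        def b20(ligands, b2bar, R0, t2):
            # Newman superposition model, rank-2 axial term:
            #   b20 = sum_i b2bar * (R0 / R_i)**t2 * (3 cos^2(theta_i) - 1) / 2
            return sum(b2bar * (R0 / R) ** t2 * 0.5 * (3.0 * np.cos(th) ** 2 - 1.0)
                       for R, th in ligands)

        # Hypothetical six-oxygen coordination of Fe3+ (distances in angstrom,
        # polar angles in degrees); intrinsic parameters b2bar, t2 assumed.
        ligands = [(2.05, np.radians(58.0))] * 3 + [(2.11, np.radians(123.0))] * 3
        print(b20(ligands, b2bar=-0.41, R0=2.10, t2=8.0))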

  8. Dose calculation of megavoltage IMRT using convolution kernels extracted from GafChromic EBT film-measured pencil beam profiles

    NASA Astrophysics Data System (ADS)

    Naik, Mehul S.

    Intensity-modulated radiation therapy (IMRT) is a 3D conformal radiation therapy technique that utilizes either a multileaf intensity-modulating collimator (MIMiC, used with the NOMOS Peacock system) or a multileaf collimator (MLC) on a conventional linear accelerator for beam intensity modulation to afford increased conformity in dose distributions. Due to the high-dose gradient regions that are effectively created, particular emphasis should be placed on the accurate determination of the pencil beam kernels utilized by the pencil beam convolution algorithms employed by a number of commercial IMRT treatment planning systems (TPS). These kernels are determined from relatively large field dose profiles that are typically collected using an ion chamber during commissioning of the TPS, while recent studies have demonstrated improvements in dose calculation accuracy when incorporating film data into the commissioning measurements. For this study, it has been proposed that the shape of high-resolution dose kernels can be extracted directly from single pencil beam (beamlet) profile measurements acquired using high-precision dosimetric film in order to accurately compute dose distributions, specifically for small fields and the penumbra regions of larger fields. The effectiveness of GafChromic EBT film as an appropriate dosimeter to acquire the necessary measurements was evaluated and compared to the conventional silver-halide Kodak EDR2 film. Using the NOMOS Peacock system, similar dose kernels were extracted through deconvolution of the elementary pencil beam profiles using the two different types of film. Independent convolution-based calculations were performed using these kernels, resulting in better agreement with the measured relative dose profiles as compared to those determined by the CORVUS TPS's finite-size pencil beam (FSPB) algorithm. A preliminary evaluation of the proposed method for kernel extraction in an MLC-based IMRT system was also performed.
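
    Kernel extraction by deconvolution can be illustrated with regularized Fourier division: if the measured profile is the convolution of the beamlet fluence with the kernel, dividing their spectra approximately recovers the kernel. All shapes and widths below are hypothetical, not the film data of the study:

        import numpy as np

        n = 400
        x = np.linspace(-20.0, 20.0, n)
        fluence = (np.abs(x) < 0.5).astype(float)          # one narrow beamlet
        true_kernel = np.exp(-np.abs(x) / 2.0)
        true_kernel /= true_kernel.sum()

        # Forward model: measured profile = fluence (*) kernel (circular here).
        measured = np.fft.irfft(np.fft.rfft(fluence) * np.fft.rfft(true_kernel), n)

        # Regularized Fourier deconvolution: K ~ M conj(F) / (|F|^2 + eps).
        F = np.fft.rfft(fluence)
        M = np.fft.rfft(measured)
        recovered = np.fft.irfft(M * np.conj(F) / (np.abs(F) ** 2 + 1e-6), n)
        print("max kernel error:", np.abs(recovered - true_kernel).max())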

  9. Geometrical correction for the inter- and intramolecular basis set superposition error in periodic density functional theory calculations.

    PubMed

    Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan

    2013-09-26

    We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface. PMID:23947824

  10. A novel algorithm for the calculation of physical and biological irradiation quantities in scanned ion beam therapy: the beamlet superposition approach

    NASA Astrophysics Data System (ADS)

    Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.

    2016-01-01

    The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for computing ion irradiation outcomes, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line-specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable to the evaluation of dose, RBE and LET distributions. The weight function for the beamlet superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has successfully been tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmarking plans in simple geometries and clinical plans are shown to demonstrate the model's capabilities.

  11. A 3D superposition pencil beam dose calculation algorithm for a 60Co therapy unit and its verification by MC simulation

    NASA Astrophysics Data System (ADS)

    Koncek, O.; Krivonoska, J.

    2014-11-01

    The MCNP Monte Carlo code was used to simulate the collimating system of the 60Co therapy unit to calculate the primary and scattered photon fluences, as well as the electron contamination incident on the isocentric plane, as functions of the irradiation field size. Furthermore, a Monte Carlo simulation for the generation of polyenergetic Pencil Beam Kernels (PBKs) was performed using the calculated photon and electron spectra. The PBK was analytically fitted to speed up the dose calculation using the convolution technique in homogeneous media. The quality of the PBK fit was verified by comparing the calculated and simulated 60Co broad beam profiles and depth dose curves in a homogeneous water medium. The inhomogeneity correction coefficients were derived from the PBK simulation of an inhomogeneous slab phantom consisting of various materials. The inhomogeneity calculation model is based on changes in the PBK radial displacement and on changes in the forward and backward electron scattering. The inhomogeneity correction is derived from the electron density values gained from a complete 3D CT array and considers the different electron densities through which the pencil beam is propagated, as well as the electron density values located between the interaction point and the point of dose deposition. Important aspects and details of the algorithm implementation are also described in this study.

  12. Do a bit more with convolution.

    PubMed

    Olsthoorn, Theo N

    2008-01-01

    Convolution is a form of superposition that efficiently deals with input varying arbitrarily in time or space. It works whenever superposition is applicable, that is, for linear systems. Even though convolution has been well known since the 19th century, this valuable method is still missing in most textbooks on ground water hydrology. This limits widespread application in this field. Perhaps most papers are too complex mathematically, as they tend to focus on the derivation of analytical expressions rather than solving practical problems. However, convolution is straightforward with standard mathematical software or even a spreadsheet, as is demonstrated in the paper. The necessary system responses are not limited to analytic solutions; they may also be obtained by running an already existing ground water model for a single stress period until equilibrium is reached. With these responses, high-resolution time series of head or discharge may then be computed by convolution for arbitrary points and arbitrarily varying input, without further use of the model. There are probably thousands of applications in the field of ground water hydrology that may benefit from convolution. Therefore, its inclusion in ground water textbooks and courses is strongly needed. PMID:18181860
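
    The workflow the author describes, derive a unit response once and then superpose it over an arbitrary input series, is a one-line convolution in practice. A sketch with an assumed exponential head response and a synthetic recharge series:

        import numpy as np

        dt = 1.0                                   # time step, days
        t = np.arange(0.0, 200.0, dt)
        unit_response = 0.8 * np.exp(-t / 30.0)    # assumed head rise per unit recharge

        rng = np.random.default_rng(3)             # synthetic recharge series
        recharge = rng.exponential(1.0, t.size) * (rng.random(t.size) < 0.2)

        # Superposition over the whole input history in one call.
        head = np.convolve(recharge, unit_response)[: t.size] * dt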

  13. The Convolution Method in Neutrino Physics Searches

    SciTech Connect

    Tsakstara, V.; Kosmas, T. S.; Chasioti, V. C.; Divari, P. C.; Sinatkas, J.

    2007-12-26

    We concentrate on the convolution method used in nuclear and astro-nuclear physics studies and, in particular, in the investigation of the nuclear response of various neutrino detection targets to the energy spectra of specific neutrino sources. Since the cross sections for neutrino reactions with the nuclear detectors employed in experiments are extremely small, very fine and fast convolution techniques are required. Furthermore, sophisticated deconvolution methods are also needed whenever a comparison between calculated unfolded cross sections and existing convoluted results is necessary.
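
    The folding itself amounts to weighting the energy-dependent cross section by a normalized source spectrum; a toy numerical version (both functions hypothetical):

      # Spectrum folding: flux-averaged cross section <sigma> = integral of
      # sigma(E) * f(E) dE, with f normalized to unit integral. Toy shapes only.
      import numpy as np

      E = np.linspace(0.1, 50.0, 500)       # neutrino energy grid, MeV
      dE = E[1] - E[0]
      sigma = 1e-42 * E**2                  # toy cross section, cm^2
      f = E**2 * np.exp(-E / 4.0)           # toy source spectrum
      f /= f.sum() * dE                     # normalize the spectrum
      sigma_avg = (sigma * f).sum() * dE    # folded cross section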

  14. SU-E-T-08: A Convolution Model for Head Scatter Fluence in the Intensity Modulated Field

    SciTech Connect

    Chen, M; Mo, X; Chen, Y; Parnell, D; Key, S; Olivera, G; Galmarini, W; Lu, W

    2014-06-01

    Purpose: To efficiently calculate the head scatter fluence for an arbitrary intensity-modulated field with any source distribution using the source occlusion model. Method: The source occlusion model with focal and extra-focal radiation (Jaffray et al, 1993) can be used to account for LINAC head scatter. In the model, the fluence map of any field shape at any point can be calculated via integration of the source distribution within the visible range, as confined by each segment, using the detector's eye view. A 2D integration would be required for each segment and each fluence plane point, which is time-consuming, as an intensity-modulated field typically contains tens to hundreds of segments. In this work, we prove that the superposition of the segmental integrations is equivalent to a simple convolution, regardless of the source distribution. In fact, for each point, the detector's eye view of the field shape can be represented as a function with the origin defined at the point's pinhole reflection through the center of the collimator plane. We were thus able to reduce hundreds of source plane integrations to one convolution. We calculated the fluence map for various 3D and IMRT beams and various extra-focal source distributions using both the segmental integration approach and the convolution approach, and compared the computation time and fluence map results of both approaches. Results: The fluence maps calculated using the convolution approach were the same as those calculated using the segmental approach, except for rounding errors (<0.1%). While it took considerably longer to calculate all the segmental integrations, the fluence map calculation using the convolution approach took only ∼1/3 of the time for typical IMRT fields with ∼100 segments. Conclusions: The convolution approach for head scatter fluence calculation is fast and accurate and can be used to enhance the online process.
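
    A schematic of the equivalence, with a toy extra-focal source map and aperture mask (geometric magnification factors omitted; not the authors' code):

      # One convolution replaces per-point source integration: convolve the
      # extra-focal source distribution with the aperture mask. Toy inputs.
      import numpy as np
      from scipy.signal import fftconvolve

      n = 129
      src = np.zeros((n, n)); src[n // 2, n // 2] = 1.0
      src = fftconvolve(src, np.ones((9, 9)) / 81.0, mode="same")  # toy source
      aperture = np.zeros((n, n))
      aperture[40:90, 50:80] = 1.0                    # toy rectangular opening
      headscatter = fftconvolve(src, aperture, mode="same")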

  15. Commissioning and initial acceptance tests for a commercial convolution dose calculation algorithm for radiotherapy treatment planning in comparison with Monte Carlo simulation and measurement

    PubMed Central

    Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen

    2012-01-01

    In this study the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two methods available in the treatment planning system, i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom. The dose difference (DD) between experimental results and the two calculation methods was obtained. Results indicate a maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between the two methods, with the CCC algorithm showing more accurate depth dose curves in tissue heterogeneities. Simulation results show accurate dose estimation by MCNP4C in the soft-tissue region of the phantom, and better agreement than the ETAR method in bone and lung tissues. PMID:22973081

  16. Commissioning and initial acceptance tests for a commercial convolution dose calculation algorithm for radiotherapy treatment planning in comparison with Monte Carlo simulation and measurement.

    PubMed

    Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen

    2012-07-01

    In this study the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two methods available in the treatment planning system, i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom. The dose difference (DD) between experimental results and the two calculation methods was obtained. Results indicate a maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between the two methods, with the CCC algorithm showing more accurate depth dose curves in tissue heterogeneities. Simulation results show accurate dose estimation by MCNP4C in the soft-tissue region of the phantom, and better agreement than the ETAR method in bone and lung tissues. PMID:22973081

  17. Distal Convoluted Tubule

    PubMed Central

    Ellison, David H.

    2014-01-01

    The distal convoluted tubule is the nephron segment that lies immediately downstream of the macula densa. Although short in length, the distal convoluted tubule plays a critical role in sodium, potassium, and divalent cation homeostasis. Recent genetic and physiologic studies have greatly expanded our understanding of how the distal convoluted tubule regulates these processes at the molecular level. This article provides an update on the distal convoluted tubule, highlighting concepts and pathophysiology relevant to clinical practice. PMID:24855283

  18. Investigation of the Fe3+ centers in perovskite KMgF3 through a combination of ab initio (density functional theory) and semi-empirical (superposition model) calculations

    NASA Astrophysics Data System (ADS)

    Emül, Y.; Erbahar, D.; Açıkgöz, M.

    2015-08-01

    Analyses of the local crystal and electronic structure in the vicinity of Fe3+ centers in perovskite KMgF3 crystal have been carried out in a comprehensive manner. A combination of density functional theory (DFT) and a semi-empirical superposition model (SPM) is used for a complete analysis of all Fe3+ centers in this study for the first time. Quantitative information has been derived from the DFT calculations on both the electronic structure and the local geometry around the Fe3+ centers. All of the trigonal (K-vacancy case, K-Li substitution case, and normal trigonal Fe3+ center case), FeF5O cluster, and tetragonal (Mg-vacancy and Mg-Li substitution cases) centers have been taken into account, based on previously suggested experimental and theoretical inferences. Combining the experimental data with the results of both the DFT and SPM calculations allows us to identify the most probable structural model for Fe3+ centers in KMgF3.

  19. Some easily analyzable convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R.; Dolinar, S.; Pollara, F.; Vantilborg, H.

    1989-01-01

    Convolutional codes have played and will play a key role in the downlink telemetry systems on many NASA deep-space probes, including Voyager, Magellan, and Galileo. One of the chief difficulties associated with the use of convolutional codes, however, is the notorious difficulty of analyzing them. Given a convolutional code as specified, say, by its generator polynomials, it is no easy matter to say how well that code will perform on a given noisy channel. The usual first step in such an analysis is to compute the code's free distance; this can be done with an algorithm whose complexity is exponential in the code's constraint length. The second step is often to calculate the transfer function in one, two, or three variables, or at least a few terms in its power series expansion. This step is quite hard, and for many codes of relatively short constraint length it can be intractable. However, a large class of convolutional codes was discovered for which the free distance can be computed by inspection, and for which there is a closed-form expression for the three-variable transfer function. Although these codes have relatively low rates for large constraint lengths, they are nevertheless interesting and potentially useful. Furthermore, the ideas developed here to analyze these specialized codes may well extend to a much larger class.
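
    For small codes the free distance can in fact be found quickly; a hedged sketch using a Dijkstra-style search for the lightest path that leaves and re-merges with the all-zero state, for the textbook rate-1/2 (7,5) code:

      # Free distance by shortest-path search on the code trellis. States are
      # the 2 memory bits; edge weight = Hamming weight of the 2 output bits.
      import heapq

      G = (0b111, 0b101)                   # generator taps (7,5 octal), K = 3

      def step(state, bit):
          reg = (bit << 2) | state         # register [b_t, b_{t-1}, b_{t-2}]
          out = sum(bin(reg & g).count("1") & 1 for g in G)
          return reg >> 1, out             # next state, output weight

      def free_distance():
          s0, w0 = step(0, 1)              # must leave the zero state with a 1
          pq, best = [(w0, s0)], {}
          while pq:
              w, s = heapq.heappop(pq)
              if s == 0:
                  return w                 # first re-merge has minimum weight
              if best.get(s, 1 << 30) <= w:
                  continue
              best[s] = w
              for b in (0, 1):
                  s2, dw = step(s, b)
                  heapq.heappush(pq, (w + dw, s2))

      print(free_distance())               # prints 5 for the (7,5) code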

  20. Multipartite entanglement of superpositions

    SciTech Connect

    Cavalcanti, D.; Terra Cunha, M. O.; Acin, A.

    2007-10-15

    The entanglement of superpositions [Linden et al., Phys. Rev. Lett. 97, 100502 (2006)] is generalized to the multipartite scenario: an upper bound to the multipartite entanglement of a superposition is given in terms of the entanglement of the superposed states and the superposition coefficients. This bound is proven to be tight for a class of states composed of an arbitrary number of qubits. We also extend the result to a large family of quantifiers, which includes the negativity, the robustness of entanglement, and the best separable approximation measure.

  1. Multipartite entanglement of superpositions

    NASA Astrophysics Data System (ADS)

    Cavalcanti, D.; Terra Cunha, M. O.; Acín, A.

    2007-10-01

    The entanglement of superpositions [Linden et al., Phys. Rev. Lett. 97, 100502 (2006)] is generalized to the multipartite scenario: an upper bound to the multipartite entanglement of a superposition is given in terms of the entanglement of the superposed states and the superposition coefficients. This bound is proven to be tight for a class of states composed of an arbitrary number of qubits. We also extend the result to a large family of quantifiers, which includes the negativity, the robustness of entanglement, and the best separable approximation measure.

  2. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems.

    PubMed

    Kruse, Holger; Grimme, Stefan

    2012-04-21

    A semi-empirical counterpoise-type correction for the basis set superposition error (BSSE) in molecular systems is presented. An atom-pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This scheme, denoted geometrical counterpoise (gCP), depends only on the molecular geometry, i.e., no input from the electronic wave function is required, and hence it is applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method's targets are small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30%, which proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability to biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme in estimating the intramolecular BSSE of the phenylalanine-glycine-phenylalanine tripeptide, for which a relaxed rotational energy profile is also presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. The gCP-corrected B3LYP-D3/6-31G* model
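
    Schematically, the correction is an atom-pair-wise sum of the form below, where e_A^miss measures the basis incompleteness at atom A, f_dec is a fitted distance-decay function and sigma a global scaling; the precise decay function and the four fitted parameters are those given in the paper:

      \[
      E_{\mathrm{gCP}} \;=\; \sigma \sum_{A}^{N_{\mathrm{atoms}}} e_A^{\mathrm{miss}} \sum_{B \neq A} f_{\mathrm{dec}}(R_{AB})
      \]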

  3. Network Class Superposition Analyses

    PubMed Central

    Pearson, Carl A. B.; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., for the yeast cell cycle process [1]), considering dynamics beyond this primary function means picking a single network or a suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix, a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for this matrix derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, applying it to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with the ensemble matrix. We show how to generate Derrida plots based on it. We show that a Shannon entropy based on the matrix outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on the matrix. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141
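
    A toy version of such a transition-by-transition superposition, for a two-node Boolean system with two hypothetical update rules:

      # Ensemble matrix: average the 0/1 transition matrices of each network
      # in a (tiny, enumerable) class; the result stays column-stochastic.
      import numpy as np

      def transition_matrix(update, n_nodes):
          n = 1 << n_nodes                 # number of Boolean states
          T = np.zeros((n, n))
          for s in range(n):
              T[update(s), s] = 1.0        # deterministic dynamics
          return T

      nets = [lambda s: (s >> 1) | ((s & 1) << 1),   # swap the two bits
              lambda s: s]                           # identity dynamics
      T = sum(transition_matrix(u, 2) for u in nets) / len(nets)
      assert np.allclose(T.sum(axis=0), 1.0)         # still stochastic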

  4. Asymmetric quantum convolutional codes

    NASA Astrophysics Data System (ADS)

    La Guardia, Giuliano G.

    2016-01-01

    In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices and great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform them, it is possible to derive several families of such codes, not only codes with specific parameters. Additionally, several different types of such codes are obtained.

  5. Engineering mesoscopic superpositions of superfluid flow

    SciTech Connect

    Hallwood, D. W.; Brand, J.

    2011-10-15

    Modeling strongly correlated atoms demonstrates the possibility of preparing quantum superpositions that are robust against experimental imperfections and temperature. Such superpositions of vortex states are formed by adiabatic manipulation of interacting ultracold atoms confined to a one-dimensional ring trapping potential when stirred by a barrier. Here, we discuss the influence of nonideal experimental procedures and finite temperature. Adiabaticity conditions for changing the stirring rate reveal that superpositions of many atoms are most easily accessed in the strongly interacting, Tonks-Girardeau, regime, which is also the most robust at finite temperature. NOON-type superpositions of weakly interacting atoms are most easily created by adiabatically decreasing the interaction strength by means of a Feshbach resonance. The quantum dynamics of small numbers of particles is simulated and the size of the superpositions is calculated based on their ability to make precision measurements. The experimental creation of strongly correlated and NOON-type superpositions with about 100 atoms seems feasible in the near future.

  6. Superposition State Molecular Dynamics.

    PubMed

    Venkatnathan, Arun; Voth, Gregory A

    2005-01-01

    The ergodic sampling of rough energy landscapes is crucial for understanding phenomena like protein folding, peptide aggregation, polymer dynamics, and the glass transition. These rough energy landscapes are characterized by the presence of many local minima separated by high energy barriers, where Molecular Dynamics (MD) fails to satisfy ergodicity. To enhance ergodic behavior, we have developed the Superposition State Molecular Dynamics (SSMD) method, which uses a superposition of energy states to obtain an effective potential for the MD simulation. In turn, the dynamics on this effective potential can be used to sample the configurational free energy of the real potential. The effectiveness of the SSMD method for a one-dimensional rough potential energy landscape is presented as a test case. PMID:26641113

  7. Artificial neural superposition eye.

    PubMed

    Brückner, Andreas; Duparré, Jacques; Dannberg, Peter; Bräuer, Andreas; Tünnermann, Andreas

    2007-09-17

    We propose an ultra-thin imaging system which is based on the neural superposition compound eye of insects. Multiple light-sensitive pixels in the footprint of each lenslet of this multi-channel configuration enable the parallel imaging of the individual object points. Together with the digital superposition of related signals, this multiple sampling enables advanced functionalities for artificial compound eyes. Using this technique, color imaging and a way to circumvent the trade-off between resolution and sensitivity of ultra-compact camera devices are demonstrated in this article. The optical design and layout of such a system are discussed in detail. Experimental results are shown which indicate the attractiveness of microoptical artificial compound eyes for applications in the field of machine vision, surveillance or automotive imaging. PMID:19547555

  8. Concurrence of superpositions

    SciTech Connect

    Yu, Chang-shui; Yi, X. X.; Song, He-shan

    2007-02-15

    Bounds on the concurrence of the superposition state in terms of the concurrences of the states being superposed are found in this paper. The bounds on concurrence are quite different from those on the entanglement measured by von Neumann entropy [Linden et al., Phys. Rev. Lett. 97, 100502 (2006)]. In particular, a nonzero lower bound can be provided if the states being superposed are properly constrained.

  9. Understanding deep convolutional networks.

    PubMed

    Mallat, Stéphane

    2016-04-13

    Deep convolutional networks provide state-of-the-art classification and regression results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and nonlinearities. A mathematical framework is introduced to analyse their properties. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries and sparse separations. Applications are discussed. PMID:26953183
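
    The cascade structure can be sketched in a few lines; the filters below are random stand-ins, not trained weights:

      # Two stages of convolution + pointwise nonlinearity (ReLU), the basic
      # cascade reviewed above, on a random image with random 3x3 filters.
      import numpy as np
      from scipy.signal import convolve2d

      def stage(x, kernels):
          return [np.maximum(convolve2d(x, k, mode="same"), 0.0) for k in kernels]

      rng = np.random.default_rng(0)
      image = rng.random((32, 32))
      filters = [rng.standard_normal((3, 3)) for _ in range(4)]
      feats = stage(image, filters)                              # first stage
      feats = [g for f in feats for g in stage(f, filters[:2])]  # second stage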

  10. Stereotactic Body Radiotherapy for Primary Lung Cancer at a Dose of 50 Gy Total in Five Fractions to the Periphery of the Planning Target Volume Calculated Using a Superposition Algorithm

    SciTech Connect

    Takeda, Atsuya; Sanuki, Naoko; Kunieda, Etsuo Ohashi, Toshio; Oku, Yohei; Takeda, Toshiaki; Shigematsu, Naoyuki; Kubo, Atsushi

    2009-02-01

    Purpose: To retrospectively analyze the clinical outcomes of stereotactic body radiotherapy (SBRT) for patients with Stages 1A and 1B non-small-cell lung cancer. Methods and Materials: We reviewed the records of patients with non-small-cell lung cancer treated with curative intent between Dec 2001 and May 2007. All patients had histopathologically or cytologically confirmed disease, increased levels of tumor markers, and/or positive findings on fluorodeoxyglucose positron emission tomography. Staging studies identified their disease as Stage 1A or 1B. Performance status was 2 or less according to World Health Organization guidelines in all cases. The prescribed dose of 50 Gy total in five fractions, calculated by using a superposition algorithm, was defined for the periphery of the planning target volume. Results: One hundred twenty-one patients underwent SBRT during the study period, and 63 were eligible for this analysis. Thirty-eight patients had Stage 1A (T1N0M0) and 25 had Stage 1B (T2N0M0). Forty-nine patients were not appropriate candidates for surgery because of chronic pulmonary disease. Median follow-up of these 49 patients was 31 months (range, 10-72 months). The 3-year local control, disease-free, and overall survival rates in patients with Stages 1A and 1B were 93% and 96% (p = 0.86), 76% and 77% (p = 0.83), and 90% and 63% (p = 0.09), respectively. No acute toxicity was observed. Grade 2 or higher radiation pneumonitis was experienced by 3 patients, and 1 of them had fatal bacterial pneumonia. Conclusions: The SBRT at 50 Gy total in five fractions to the periphery of the planning target volume calculated by using a superposition algorithm is feasible. High local control rates were achieved for both T2 and T1 tumors.

  11. Comparison of commercially available three-dimensional treatment planning algorithms for monitor unit calculations in the presence of heterogeneities.

    PubMed

    Butts, J R; Foster, A E

    2001-01-01

    This study uses an anthropomorphic phantom and its computed tomography (CT) data set to evaluate monitor unit (MU) calculations using the CMS Focus Clarkson, the CMS Focus Multigrid Superposition Model, the CMS Focus FFT Convolution Model, and the ADAC Pinnacle Collapsed Cone Convolution Superposition Algorithms. Using heterogeneity corrections, a treatment plan and corresponding MU calculations were generated for several typical clinical situations. A diode detector, placed in an anthropomorphic phantom, was used to compare the treatment planning algorithms' predicted doses with measured data. Differences between diode measurements and the algorithms' calculations were within reasonable levels of acceptability as recommended by Van Dyk et al. [Int. J. Rad. Onc. Biol. Phys. 26, 261-273 (1993)], except for the CMS Clarkson algorithm, which predicted too few MU for delivery of the intended dose to chest wall fields. PMID:11674836
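
    For orientation, a hand-check MU calculation has the generic textbook form below; this is not the formula of any of the systems named above, and all numbers are hypothetical:

      # Generic monitor-unit hand check:
      # MU = dose / (calibration dose per MU x Sc x Sp x TPR x other factors).
      def monitor_units(dose_cGy, cal_cGy_per_MU, Sc, Sp, TPR, other=1.0):
          return dose_cGy / (cal_cGy_per_MU * Sc * Sp * TPR * other)

      # e.g. 200 cGy with toy factors
      print(round(monitor_units(200.0, 1.0, 0.98, 0.99, 0.85), 1))  # 242.5 MU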

  12. Calculations of the ionization potentials of the halogens by the relativistic Hartree-Fock-Dirac method taking account of superposition of configurations

    SciTech Connect

    Tupitsyn, I.I.

    1988-03-01

    The ionization potentials of the halogen group have been calculated. The calculations were carried out using the relativistic Hartree-Fock method taking into account correlation effects. Comparison of theoretical results with experimental data for the elements F, Cl, Br, and I allows an estimation of the accuracy and reliability of the method. The theoretical values of the ionization potential of astatine obtained here may be of definite interest for the chemistry of astatine.

  13. Reexamination of entanglement of superpositions

    NASA Astrophysics Data System (ADS)

    Gour, Gilad

    2007-11-01

    We find tight lower and upper bounds on the entanglement of a superposition of two bipartite states in terms of the entanglement of the two states constituting the superposition. Our upper bound is dramatically tighter than the one presented by Linden et al. [Phys. Rev. Lett. 97, 100502 (2006)], and our lower bound can be used to provide lower bounds on different measures of entanglement, such as the entanglement of formation and the entanglement of subspaces. We also find that in the case in which the two states are one-sided orthogonal, the entanglement of the superposition state can be expressed explicitly in terms of the entanglement of the two states in the superposition.

  14. Superposition Enhanced Nested Sampling

    NASA Astrophysics Data System (ADS)

    Martiniani, Stefano; Stevenson, Jacob D.; Wales, David J.; Frenkel, Daan

    2014-07-01

    The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.

  15. Investigation of the Fe3+ centers in perovskite KMgF3 through a combination of ab initio (density functional theory) and semi-empirical (superposition model) calculations

    SciTech Connect

    Emül, Y.; Erbahar, D.; Açıkgöz, M.

    2015-08-14

    Analyses of the local crystal and electronic structure in the vicinity of Fe3+ centers in perovskite KMgF3 crystal have been carried out in a comprehensive manner. A combination of density functional theory (DFT) and a semi-empirical superposition model (SPM) is used for a complete analysis of all Fe3+ centers in this study for the first time. Quantitative information has been derived from the DFT calculations on both the electronic structure and the local geometry around the Fe3+ centers. All of the trigonal (K-vacancy case, K-Li substitution case, and normal trigonal Fe3+ center case), FeF5O cluster, and tetragonal (Mg-vacancy and Mg-Li substitution cases) centers have been taken into account, based on previously suggested experimental and theoretical inferences. Combining the experimental data with the results of both the DFT and SPM calculations allows us to identify the most probable structural model for Fe3+ centers in KMgF3.

  16. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  17. Determinate-state convolutional codes

    NASA Technical Reports Server (NTRS)

    Collins, O.; Hizlan, M.

    1991-01-01

    A determinate-state convolutional code is formed from a conventional convolutional code by pruning away some of the possible state transitions in the decoding trellis. The type of staged power transfer used in determinate-state convolutional codes proves to be an extremely efficient way of enhancing the performance of a concatenated coding system. The decoder complexity is analyzed along with the free distances of these new codes, and extensive simulation results are provided for their performance at the low signal-to-noise ratios where a real communication system would operate. Concise, practical examples are provided.

  18. Quantum superpositions of crystalline structures

    SciTech Connect

    Baltrusch, Jens D.; Morigi, Giovanna; Cormick, Cecilia; De Chiara, Gabriele; Calarco, Tommaso

    2011-12-15

    A procedure is discussed for creating coherent superpositions of motional states of ion strings. The motional states lie across the linear-zigzag structural transition, and their coherent superposition is achieved by means of spin-dependent forces, such that a coherent superposition of the electronic states of one ion evolves into an entangled state between the chain's internal and external degrees of freedom. It is shown that the creation of such an entangled state can be revealed by performing Ramsey interferometry with one ion of the chain.

  19. Reexamination of entanglement of superpositions

    SciTech Connect

    Gour, Gilad

    2007-11-15

    We find tight lower and upper bounds on the entanglement of a superposition of two bipartite states in terms of the entanglement of the two states constituting the superposition. Our upper bound is dramatically tighter than the one presented by Linden et al. [Phys. Rev. Lett. 97, 100502 (2006)] and our lower bound can be used to provide lower bounds on different measures of entanglement such as the entanglement of formation and the entanglement of subspaces. We also find that in the case in which the two states are one-sided orthogonal, the entanglement of the superposition state can be expressed explicitly in terms of the entanglement of the two states in the superposition.

  20. Underdosage of the upper-airway mucosa for small fields as used in intensity-modulated radiation therapy: a comparison between radiochromic film measurements, Monte Carlo simulations, and collapsed cone convolution calculations.

    PubMed

    Martens, C; Reynaert, N; De Wagter, C; Nilsson, P; Coghe, M; Palmans, H; Thierens, H; De Neve, W

    2002-07-01

    Head-and-neck tumors are often situated at an air-tissue interface, which may result in an underdosage of part of the tumor in radiotherapy treatments using megavoltage photons, especially for small fields. In addition to effects of transient electronic disequilibrium, for these small fields an increased lateral electron range in air will result in an important extra reduction of the central axis dose beyond the cavity. Dose calculation algorithms therefore need to model electron transport accurately. We simulated the trachea by a 2 cm diameter cylindrical air cavity with the rim situated 2 cm beneath the phantom surface. A 6 MV photon beam from an Elekta SLiplus linear accelerator, equipped with the standard multileaf collimator (MLC), was assessed. A 10 x 2 cm2 and a 10 x 1 cm2 field, both widthwise collimated by the MLC, were applied with their long side parallel to the cylinder axis. Central axis dose rebuild-up was studied. Radiochromic film measurements were performed in an in-house manufactured polystyrene phantom with the films oriented either along or perpendicular to the beam axis. Monte Carlo simulations were performed with BEAM and EGSnrc. Calculations were also performed using the pencil beam (PB) algorithm and the collapsed cone convolution (CCC) algorithm of Helax-TMS (MDS Nordion, Kanata, Canada) version 6.0.2 and using the CCC algorithm of Pinnacle (ADAC Laboratories, Milpitas, CA, USA) version 4.2. A very good agreement between the film measurements and the Monte Carlo simulations was found. The CCC algorithms were not able to predict the interface dose accurately when lateral electronic disequilibrium occurs, but were shown to be a considerable improvement over the PB algorithm. The CCC algorithms overestimate the dose in the rebuild-up region. The interface dose was overestimated by a maximum of 31% or 54%, depending on the implementation of the CCC algorithm. At a depth of 1 mm, the maximum dose overestimation was 14% or 24%. PMID

  1. Entanglement-assisted quantum convolutional coding

    SciTech Connect

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  2. Simplified Convolution Codes

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.

    1985-01-01

    A simple recursive algorithm efficiently calculates minimum-weight error vectors using Diophantine equations. The algorithm uses the general solution of a polynomial linear Diophantine equation to determine the minimum-weight error polynomial vector in polynomial space.

  3. A quantum algorithm for Viterbi decoding of classical convolutional codes

    NASA Astrophysics Data System (ADS)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better-than-classical performance under certain conditions. In this paper the proposed algorithm is applied to decoding classical convolutional codes, for instance those with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm where the state space is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (the number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the total number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given the received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.

  4. QCDNUM: Fast QCD evolution and convolution

    NASA Astrophysics Data System (ADS)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme.

    Program summary
    Program title: QCDNUM version: 17.00
    Catalogue identifier: AEHV_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Public Licence
    No. of lines in distributed program, including test data, etc.: 45 736
    No. of bytes in distributed program, including test data, etc.: 911 569
    Distribution format: tar.gz
    Programming language: Fortran-77
    Computer: All
    Operating system: All
    RAM: Typically 3 Mbytes
    Classification: 11.5
    Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
    Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
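
    The convolutions computed by such an engine have the standard Mellin form, with f an evolved parton density and C a partonic coefficient function:

      \[
      (f \otimes C)(x) \;=\; \int_x^1 \frac{dz}{z}\, f(z)\, C\!\left(\frac{x}{z}\right)
      \]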

  5. Two dimensional convolute integers for machine vision and image recognition

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two dimensional data sets. The operators are regression-generated, integer-valued, zero-phase-shifting, convoluting, frequency-sensitive, two dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface-fitted partial derivatives. These operators are applied non-recursively, either as classical convolutions (replacement point values), as interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). The operators exhibit frequency-sensitive, scale-invariant feature-selection properties. Tasks such as boundary/edge enhancement and removal of noise or small pixel disturbances can readily be accomplished. For feature selection, tight band pass operators are essential. Results from test cases are given.

  6. Linear superposition in nonlinear equations.

    PubMed

    Khare, Avinash; Sukhatme, Uday

    2002-06-17

    Several nonlinear systems such as the Korteweg-de Vries (KdV) and modified KdV equations and λφ^4 theory possess periodic traveling wave solutions involving Jacobi elliptic functions. We show that suitable linear combinations of these known periodic solutions yield many additional solutions with different periods and velocities. This linear superposition procedure works by virtue of some remarkable new identities involving elliptic functions. PMID:12059300

  7. Student ability to distinguish between superposition states and mixed states in quantum mechanics

    NASA Astrophysics Data System (ADS)

    Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.

    2015-12-01

    Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the experimental implications of a superposition state. In particular, they fail to recognize how a superposition state and a mixed state (sometimes called a "lack of knowledge" state) can produce different experimental results. We present data that suggest that superposition in quantum mechanics is a difficult concept for students enrolled in sophomore-, junior-, and graduate-level quantum mechanics courses. We illustrate how an interactive lecture tutorial can improve student understanding of quantum mechanical superposition. A longitudinal study suggests that the impact persists after an additional quarter of quantum mechanics instruction that does not specifically address these ideas.
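
    The distinction admits a concrete check: the superposition |+> = (|0> + |1>)/sqrt(2) and the 50/50 mixture of |0> and |1> give identical z-basis statistics but different x-basis statistics:

      # Superposition vs. mixture: same diagonal (z-basis probabilities),
      # different expectation for the projector onto |+> (an x-basis outcome).
      import numpy as np

      plus = np.array([1.0, 1.0]) / np.sqrt(2)
      rho_sup = np.outer(plus, plus)        # pure superposition state
      rho_mix = 0.5 * np.eye(2)             # "lack of knowledge" state
      P_plus = np.outer(plus, plus)         # projector onto |+>
      for rho in (rho_sup, rho_mix):
          # z-basis probabilities (0.5, 0.5) for both; |+> probability 1.0 vs 0.5
          print(np.diag(rho).real, np.trace(rho @ P_plus).real)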

  8. The Paraconsistent Logic of Quantum Superpositions

    NASA Astrophysics Data System (ADS)

    da Costa, N.; de Ronde, C.

    2013-07-01

    Physical superpositions exist both in classical and in quantum physics. However, what exactly is meant by `superposition' in each case is extremely different. In this paper we discuss some of the multiple interpretations which exist in the literature regarding superpositions in quantum mechanics. We argue that all these interpretations have something in common: they all attempt to avoid `contradiction'. We argue in favor of the importance of developing a new interpretation of superpositions which takes contradiction into account, as a key element of the formal structure of the theory, "right from the start". In order to show the feasibility of our interpretational project, we present an outline of a paraconsistent approach to quantum superpositions which attempts to account for the contradictory properties present in general within quantum superpositions. This approach must not be understood as a closed formal and conceptual scheme, but rather as a first step towards a different type of understanding regarding quantum superpositions.

  9. Approximating large convolutions in digital images.

    PubMed

    Mount, D M; Kanungo, T; Netanyahu, N S; Piatko, C; Silverman, R; Wu, A Y

    2001-01-01

    Computing discrete two-dimensional (2-D) convolutions is an important problem in image processing. In mathematical morphology, an important variant is that of computing binary convolutions, where the kernel of the convolution is a 0-1 valued function. This operation can be quite costly, especially when large kernels are involved. We present an algorithm for computing convolutions of this form, where the kernel of the binary convolution is derived from a convex polygon. Because the kernel is a geometric object, we allow the algorithm some flexibility in how it elects to digitize the convex kernel at each placement, as long as the digitization satisfies certain reasonable requirements. We say that such a convolution is valid. Given this flexibility we show that it is possible to compute binary convolutions more efficiently than would normally be possible for large kernels. Our main result is an algorithm which, given an m x n image and a k-sided convex polygonal kernel K, computes a valid convolution in O(kmn) time. Unlike standard algorithms for computing correlations and convolutions, the running time is independent of the area or perimeter of K, and our techniques do not rely on computing fast Fourier transforms. Our algorithm is based on a novel use of Bresenham's (1965) line-drawing algorithm and prefix-sums to update the convolution incrementally as the kernel is moved from one position to another across the image. PMID:18255522
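
    The prefix-sum idea in its simplest form gives kernel-size-independent cost per placement for a box kernel (a summed-area table); this captures the spirit of the incremental updates, though it is not the authors' polygon algorithm:

      # Box convolution via a summed-area table: one O(mn) prefix pass, then
      # O(1) per placement regardless of the k x k kernel area.
      import numpy as np

      def box_convolve(img, k):
          S = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
          h, w = img.shape
          out = np.zeros((h - k + 1, w - k + 1))
          for i in range(h - k + 1):
              for j in range(w - k + 1):
                  out[i, j] = S[i+k, j+k] - S[i, j+k] - S[i+k, j] + S[i, j]
          return out

      img = (np.random.default_rng(2).random((64, 64)) > 0.5).astype(float)
      counts = box_convolve(img, 9)         # ones under each 9x9 placement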

  10. Ultrasonic field modeling for immersed components using Gaussian beam superposition.

    PubMed

    Spies, Martin

    2007-05-01

    The Gaussian beam (GB) superposition approach can be applied to model ultrasound propagation in complex-structured materials and components. In this article, progress made in extending and applying the Gaussian beam superposition technique to model the beam fields generated by transducers with flat and focused rectangular apertures as well as with circular focused apertures is addressed. The refraction of transducer beam fields through curved surfaces is illustrated by calculation results for beam fields generated in curved components during immersion testing. In particular, the following developments are put forward: (i) the use of individually determined sets of GBs to model transducer beam fields with a number of less than ten beams; (ii) the application of the GB representation of rectangular transducers to focusing probes, as well as to the problem of transmission through interfaces; and (iii) computationally efficient transient modeling by superposition of 'temporally limited' GBs. PMID:17335863

  11. Student Ability to Distinguish between Superposition States and Mixed States in Quantum Mechanics

    ERIC Educational Resources Information Center

    Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.

    2015-01-01

    Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the…

  12. Creating a Superposition of Unknown Quantum States.

    PubMed

    Oszmaniec, Michał; Grudka, Andrzej; Horodecki, Michał; Wójcik, Antoni

    2016-03-18

    The superposition principle is one of the landmarks of quantum mechanics. The importance of quantum superpositions provokes questions about the limitations that quantum mechanics itself imposes on the possibility of their generation. In this work, we systematically study the problem of the creation of superpositions of unknown quantum states. First, we prove a no-go theorem that forbids the existence of a universal probabilistic quantum protocol producing a superposition of two unknown quantum states. Second, we provide an explicit probabilistic protocol generating a superposition of two unknown states, each having a fixed overlap with the known referential pure state. The protocol can be applied to generate coherent superposition of results of independent runs of subroutines in a quantum computer. Moreover, in the context of quantum optics it can be used to efficiently generate highly nonclassical states or non-Gaussian states. PMID:27035290

  13. Creating a Superposition of Unknown Quantum States

    NASA Astrophysics Data System (ADS)

    Oszmaniec, Michał; Grudka, Andrzej; Horodecki, Michał; Wójcik, Antoni

    2016-03-01

    The superposition principle is one of the landmarks of quantum mechanics. The importance of quantum superpositions provokes questions about the limitations that quantum mechanics itself imposes on the possibility of their generation. In this work, we systematically study the problem of the creation of superpositions of unknown quantum states. First, we prove a no-go theorem that forbids the existence of a universal probabilistic quantum protocol producing a superposition of two unknown quantum states. Second, we provide an explicit probabilistic protocol generating a superposition of two unknown states, each having a fixed overlap with the known referential pure state. The protocol can be applied to generate coherent superposition of results of independent runs of subroutines in a quantum computer. Moreover, in the context of quantum optics it can be used to efficiently generate highly nonclassical states or non-Gaussian states.

  14. Rotational superposition: a review of methods.

    PubMed

    Flower, D R

    1999-01-01

    Rotational superposition is one of the most commonly used algorithms in molecular modelling. Many different methods of solving the superposition problem have been suggested. Of these, methods based on the quaternion parameterization of rotation are fast, accurate, and robust. Quaternion-parameterization-based methods cannot result in rotation inversion and do not have special cases such as co-linearity or co-planarity of points. Thus, quaternion-parameterization-based methods are the best choice for rotational superposition applications. PMID:10736782
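
    A compact quaternion-based superposition in the style reviewed here (the Horn/Kearsley eigenvalue construction); the coordinates below are hypothetical:

      # Optimal rotation of centered X onto centered Y via the largest
      # eigenvector of the 4x4 quaternion key matrix (Horn 1987).
      import numpy as np

      def quaternion_superpose(X, Y):
          X = X - X.mean(0); Y = Y - Y.mean(0)
          Sxx, Sxy, Sxz, Syx, Syy, Syz, Szx, Szy, Szz = (X.T @ Y).ravel()
          K = np.array([
              [Sxx+Syy+Szz, Syz-Szy,      Szx-Sxz,      Sxy-Syx     ],
              [Syz-Szy,     Sxx-Syy-Szz,  Sxy+Syx,      Szx+Sxz     ],
              [Szx-Sxz,     Sxy+Syx,     -Sxx+Syy-Szz,  Syz+Szy     ],
              [Sxy-Syx,     Szx+Sxz,      Syz+Szy,     -Sxx-Syy+Szz ]])
          a, b, c, d = np.linalg.eigh(K)[1][:, -1]   # top eigenvector = quaternion
          return np.array([                          # quaternion -> rotation matrix
              [a*a+b*b-c*c-d*d, 2*(b*c-a*d),     2*(b*d+a*c)],
              [2*(b*c+a*d),     a*a-b*b+c*c-d*d, 2*(c*d-a*b)],
              [2*(b*d-a*c),     2*(c*d+a*b),     a*a-b*b-c*c+d*d]])

      rng = np.random.default_rng(3)
      A = rng.random((10, 3))
      th = 0.7
      Rz = np.array([[np.cos(th), -np.sin(th), 0],
                     [np.sin(th),  np.cos(th), 0],
                     [0, 0, 1]])
      R = quaternion_superpose(A, A @ Rz.T)          # recovers the rotation
      print(np.allclose(R, Rz, atol=1e-8))           # True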

  15. Superposition flows of entangled polymeric solutions

    NASA Astrophysics Data System (ADS)

    Ianniruberto, Giovanni; Unidad, Herwin Jerome

    2015-12-01

    Parallel and orthogonal superposition experiments by Vermant et al. (1998) on a polydisperse, entangled polymeric solution are here analyzed by using a simple, multi-mode differential constitutive equation based on the tube model, and also accounting for convective constraint release effects. Model predictions are in very good qualitative and quantitative agreement with parallel superposition data, while some discrepancies are found with orthogonal data, thus suggesting that orthogonal superposition experiments represent a more severe test for molecularly-based constitutive equations.

  16. Mesoscopic Superposition States in Relativistic Landau Levels

    SciTech Connect

    Bermudez, A.; Martin-Delgado, M. A.; Solano, E.

    2007-09-21

    We show that a linear superposition of mesoscopic states in relativistic Landau levels can be built when an external magnetic field couples to a relativistic spin-1/2 charged particle. Under suitable initial conditions, the associated Dirac equation unitarily produces superpositions of coherent states involving the particle orbital quanta in a well-defined mesoscopic regime. We demonstrate that these mesoscopic superpositions have a purely relativistic origin and disappear in the nonrelativistic limit.

  17. Convolution-deconvolution in DIGES

    SciTech Connect

    Philippacopoulos, A.J.; Simos, N.

    1995-05-01

    Convolution and deconvolution operations are an important aspect of SSI analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses, since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention, since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, the corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities.
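
    A generic frequency-domain deconvolution with water-level regularization, shown as an illustrative stand-in for the closed-form transfer-function operations described (not the DIGES implementation):

      # Deconvolve a surface record by a layer response: divide in the
      # frequency domain, clipping small spectral amplitudes (water level).
      import numpy as np

      def deconvolve(record, impulse, eps=1e-3):
          R = np.fft.rfft(record)
          H = np.fft.rfft(impulse, n=record.size)
          H = np.where(np.abs(H) < eps, eps, H)   # regularize near-zeros
          return np.fft.irfft(R / H, n=record.size)

      t = np.linspace(0, 10, 1024)
      dt = t[1] - t[0]
      h = np.exp(-t) * np.sin(5 * t)              # toy layer response
      x = np.sin(2 * np.pi * 0.8 * t)             # toy input (bedrock) motion
      y = np.convolve(x, h)[:t.size] * dt         # convolved surface record
      x_rec = deconvolve(y, h * dt)               # approximately recovers x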

  18. Macroscopic optomechanical superposition via periodic qubit flipping

    NASA Astrophysics Data System (ADS)

    Ge, Wenchao; Zubairy, M. Suhail

    2015-01-01

    We propose a scheme to generate macroscopic superpositions of well-distinguishable coherent states in an optomechanical system via periodic qubit flipping. Our scheme does not require the single-photon strong-coupling regime of an optomechanical system. The generated mechanical superposition state can be reconstructed using mechanical quantum-state reconstruction. The proposed scheme relies on recycling of an atom, fast atomic qubit flipping, and coherent-state mapping between a single-photon superposition state and an atomic superposition state. We discuss the experimental feasibility of our proposal under current technology.

  19. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

  20. Runge-Kutta based generalized convolution quadrature

    NASA Astrophysics Data System (ADS)

    Lopez-Fernandez, Maria; Sauter, Stefan

    2016-06-01

    We present the Runge-Kutta generalized convolution quadrature (gCQ) with variable time steps for the numerical solution of convolution equations for time and space-time problems. We present the main properties of the method and a convergence result.

  1. Symbol synchronization in convolutionally coded systems

    NASA Technical Reports Server (NTRS)

    Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.

    1979-01-01

    Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transition for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition free runs occur.

  2. Rolling-Convolute Joint For Pressurized Glove

    NASA Technical Reports Server (NTRS)

    Kosmo, Joseph J.; Bassick, John W.

    1994-01-01

    Rolling-convolute metacarpal/finger joint enhances mobility and flexibility of pressurized glove. Intended for use in space suit to increase dexterity and decrease wearer's fatigue. Also useful in diving suits and other pressurized protective garments. Two ring elements plus bladder constitute rolling-convolute joint balancing torques caused by internal pressurization of glove. Provides comfortable grasp of various pieces of equipment.

  3. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  4. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  5. Search for optimal distance spectrum convolutional codes

    NASA Technical Reports Server (NTRS)

    Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.

    1993-01-01

    In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of its deep space satellites and probes (e.g., Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes, so finding good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.

  6. Adaptive decoding of convolutional codes

    NASA Astrophysics Data System (ADS)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi decoder. On the one hand, the Viterbi decoder is an optimum maximum likelihood decoder, i.e., the most probable transmitted code sequence is obtained. On the other hand, the mathematical complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction of complexity is realized by two different approaches, syndrome zero sequence deactivation and path metric equalization. The two approaches enable an easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
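
    The syndrome idea can be sketched in a few lines of Python. For a rate-1/2 encoder with generator polynomials g1 and g2, an error-free pair of output streams satisfies v1*g2 + v2*g1 = 0 (mod 2), where * is binary polynomial convolution, so a nonzero syndrome flags the need to activate the full decoder. The code below is a simplified illustration, not the paper's decoder.

        import numpy as np

        def conv_mod2(a, b):
            return np.convolve(a, b) % 2

        def syndrome(v1, v2, g1, g2):
            return (conv_mod2(v1, g2) + conv_mod2(v2, g1)) % 2

        g1, g2 = [1, 1, 1], [1, 0, 1]             # the (7, 5) octal code
        u = np.array([1, 0, 1, 1])                # information bits
        v1, v2 = conv_mod2(u, g1), conv_mod2(u, g2)
        print(syndrome(v1, v2, g1, g2).any())     # False: nothing to correct
        v1[2] ^= 1                                # inject a channel error
        print(syndrome(v1, v2, g1, g2).any())     # True: activate the decoder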

  7. The M&M Superposition Principle.

    ERIC Educational Resources Information Center

    Miller, John B.

    2000-01-01

    Describes a physical system for demonstrating operators, eigenvalues, and superposition of states for a set of unusual wave functions. Uses candy to provide students with a visual and concrete picture of a superposition of states rather than an abstract plot of several overlaid mathematical states. (WRM)

  8. Many-Body Basis Set Superposition Effect.

    PubMed

    Ouyang, John F; Bettens, Ryan P A

    2015-11-10

    The basis set superposition effect (BSSE) arises in electronic structure calculations of molecular clusters when questions relating to interactions between monomers within the larger cluster are asked. The binding energy, or total energy, of the cluster may be broken down into many smaller subcluster calculations and the energies of these subsystems linearly combined to, hopefully, produce the desired quantity of interest. Unfortunately, BSSE can plague these smaller fragment calculations. In this work, we carefully examine the major sources of error associated with reproducing the binding energy and total energy of a molecular cluster. In order to do so, we decompose these energies in terms of a many-body expansion (MBE), where a "body" here refers to the monomers that make up the cluster. In our analysis, we found it necessary to introduce something we designate here as a many-ghost many-body expansion (MGMBE). The work presented here produces some surprising results, but perhaps the most significant of all is that BSSE effects up to the order of truncation in an MBE of the total energy cancel exactly. In the case of the binding energy, the only BSSE correction terms remaining arise from the removal of the one-body monomer total energies. Nevertheless, our earlier work indicated that BSSE effects continued to remain in the total energy of the cluster up to very high truncation order in the MBE. We show in this work that the vast majority of these high-order many-body effects arise from BSSE associated with the one-body monomer total energies. Also, we found that, remarkably, the complete basis set limit values for the three-body and four-body interactions differed very little from those at the MP2/aug-cc-pVDZ level for the respective subclusters embedded within a larger cluster. PMID:26574311
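
    The expansion itself is compact. A minimal Python sketch of a two-body-truncated MBE is given below; E is a hypothetical callable returning the electronic energy of a tuple of monomers, and the ghost-basis (MGMBE) corrections discussed above are deliberately omitted.

        from itertools import combinations

        # Two-body truncation of the many-body expansion:
        # E_total ≈ Σ E(i) + Σ [E(i,j) - E(i) - E(j)]
        def mbe_two_body(monomers, E):
            one_body = sum(E((m,)) for m in monomers)
            two_body = sum(E((i, j)) - E((i,)) - E((j,))
                           for i, j in combinations(monomers, 2))
            return one_body + two_body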

  9. Bernoulli convolutions and 1D dynamics

    NASA Astrophysics Data System (ADS)

    Kempton, Tom; Persson, Tomas

    2015-10-01

    We describe a family {φλ} of dynamical systems on the unit interval which preserve Bernoulli convolutions. We show that if there are parameter ranges for which these systems are piecewise convex, then the corresponding Bernoulli convolution will be absolutely continuous with bounded density. We study the systems {φλ} and give some numerical evidence to suggest values of λ for which {φλ} may be piecewise convex.

  10. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  11. Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity

    SciTech Connect

    Sales, J. S.; Silva, L. F. da; Almeida, N. G. de

    2011-03-15

    We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.

  12. Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity

    NASA Astrophysics Data System (ADS)

    Sales, J. S.; da Silva, L. F.; de Almeida, N. G.

    2011-03-01

    We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.

  13. On the Use of Material-Dependent Damping in ANSYS for Mode Superposition Transient Analysis

    SciTech Connect

    Nie, J.; Wei, X.

    2011-07-17

    The mode superposition method is often used for dynamic analysis of complex structures, such as the seismic Category I structures in nuclear power plants, in place of the less efficient full method, which uses the full system matrices for calculation of the transient responses. In such applications, specification of material-dependent damping is usually desirable because complex structures can consist of multiple types of materials that may have different energy dissipation capabilities. A recent review of the ANSYS manual for several releases found that the use of material-dependent damping is not clearly explained for performing a mode superposition transient dynamic analysis. This paper includes several mode superposition transient dynamic analyses using different ways to specify damping in ANSYS, in order to determine how material-dependent damping can be specified conveniently in a mode superposition transient dynamic analysis.
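
    Independently of any particular ANSYS damping input, the underlying computation can be sketched as follows (Python with NumPy/SciPy; the mode shapes, frequencies, and per-mode damping ratios are illustrative assumptions). Material-dependent damping ultimately enters as the modal damping ratios zeta_i.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Mode superposition transient analysis: integrate each decoupled
        # modal equation q'' + 2*zeta*omega*q' + omega^2*q = f(t), then
        # superpose. Phi holds mass-normalized mode shapes (dof x modes).
        def modal_transient(Phi, omegas, zetas, load, t_eval):
            responses = []
            for i in range(len(omegas)):
                rhs = lambda t, y, i=i: [y[1],
                                         Phi[:, i] @ load(t)
                                         - 2 * zetas[i] * omegas[i] * y[1]
                                         - omegas[i] ** 2 * y[0]]
                sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [0.0, 0.0],
                                t_eval=t_eval, rtol=1e-8)
                responses.append(sol.y[0])
            return Phi @ np.array(responses)   # physical displacements

        t = np.linspace(0.0, 1.0, 201)
        Phi = np.array([[1.0, 0.4], [0.5, -0.8]])      # 2 DOF, 2 modes
        u = modal_transient(Phi, [10.0, 25.0], [0.02, 0.05],
                            lambda t: np.array([np.sin(5 * t), 0.0]), t)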

  14. Experimental superposition of orders of quantum gates.

    PubMed

    Procopio, Lorenzo M; Moqanaki, Amir; Araújo, Mateus; Costa, Fabio; Alonso Calafell, Irati; Dowd, Emma G; Hamel, Deny R; Rozema, Lee A; Brukner, Časlav; Walther, Philip

    2015-01-01

    Quantum computers achieve a speed-up by placing quantum bits (qubits) in superpositions of different states. However, it has recently been appreciated that quantum mechanics also allows one to 'superimpose different operations'. Furthermore, it has been shown that using a qubit to coherently control the gate order allows one to accomplish a task (determining if two gates commute or anti-commute) with fewer gate uses than any known quantum algorithm. Here we experimentally demonstrate this advantage, in a photonic context, using a second qubit to control the order in which two gates are applied to a first qubit. We create the required superposition of gate orders by using additional degrees of freedom of the photons encoding our qubits. The new resource we exploit can be interpreted as a superposition of causal orders, and could allow quantum algorithms to be implemented with an efficiency unlikely to be achieved on a fixed-gate-order quantum computer. PMID:26250107
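
    The commute-versus-anticommute task is easy to verify numerically. The sketch below (Python/NumPy, illustrative only) simulates the two causal branches: with the control in an equal superposition, the branch with control 0 applies U then V, the branch with control 1 applies V then U, and measuring the control in the |+>/|-> basis reveals the (anti)commutation with a single use of each gate.

        import numpy as np

        def commute_or_anticommute(U, V, psi):
            branch0 = V @ U @ psi              # control = 0: U first
            branch1 = U @ V @ psi              # control = 1: V first
            plus = (branch0 + branch1) / 2.0   # amplitude for control in |+>
            p_plus = np.linalg.norm(plus) ** 2
            return "commute" if p_plus > 0.5 else "anticommute"

        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)
        psi = np.array([1, 0], dtype=complex)
        print(commute_or_anticommute(X, X, psi))   # commute
        print(commute_or_anticommute(X, Z, psi))   # anticommute: XZ = -ZX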

  15. An approximate CPHD filter for superpositional sensors

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald; El-Fallah, Adel

    2012-06-01

    Most multitarget tracking algorithms, such as JPDA, MHT, and the PHD and CPHD filters, presume the following measurement model: (a) targets are point targets, (b) every target generates at most a single measurement, and (c) any measurement is generated by at most a single target. However, the most familiar sensors, such as surveillance and imaging radars, violate assumption (c). This is because they are actually superpositional: any measurement is a sum of signals generated by all of the targets in the scene. At this conference in 2009, the first author derived exact formulas for PHD and CPHD filters that presume general superpositional measurement models. Unfortunately, these formulas are computationally intractable. In this paper, we modify and generalize a Gaussian approximation technique due to Thouin, Nannuru, and Coates to derive a computationally tractable superpositional-CPHD filter. Implementation requires sequential Monte Carlo (particle filter) techniques.

  16. Macroscopic Quantum Superposition in Cavity Optomechanics.

    PubMed

    Liao, Jie-Qiao; Tian, Lin

    2016-04-22

    Quantum superposition in mechanical systems is not only key evidence for macroscopic quantum coherence, but can also be utilized in modern quantum technology. Here we propose an efficient approach for creating macroscopically distinct mechanical superposition states in a two-mode optomechanical system. Photon hopping between the two cavity modes is modulated sinusoidally. The modulated photon tunneling enables an ultrastrong radiation-pressure force acting on the mechanical resonator, and hence significantly increases the mechanical displacement induced by a single photon. We study systematically the generation of the Yurke-Stoler-like states in the presence of system dissipations. We also discuss the experimental implementation of this scheme. PMID:27152802

  17. Macroscopic Quantum Superposition in Cavity Optomechanics

    NASA Astrophysics Data System (ADS)

    Liao, Jie-Qiao; Tian, Lin

    2016-04-01

    Quantum superposition in mechanical systems is not only key evidence for macroscopic quantum coherence, but can also be utilized in modern quantum technology. Here we propose an efficient approach for creating macroscopically distinct mechanical superposition states in a two-mode optomechanical system. Photon hopping between the two cavity modes is modulated sinusoidally. The modulated photon tunneling enables an ultrastrong radiation-pressure force acting on the mechanical resonator, and hence significantly increases the mechanical displacement induced by a single photon. We study systematically the generation of the Yurke-Stoler-like states in the presence of system dissipations. We also discuss the experimental implementation of this scheme.

  18. Large energy superpositions via Rydberg dressing

    NASA Astrophysics Data System (ADS)

    Khazali, Mohammadsadegh; Lau, Hon Wai; Humeniuk, Adam; Simon, Christoph

    2016-08-01

    We propose to create superposition states of over 100 strontium atoms in a ground state or metastable optical clock state using the Kerr-type interaction due to Rydberg state dressing in an optical lattice. The two components of the superposition can differ by an order of 300 eV in energy, allowing tests of energy decoherence models with greatly improved sensitivity. We take into account the effects of higher-order nonlinearities, spatial inhomogeneity of the interaction, decay from the Rydberg state, collective many-body decoherence, atomic motion, molecular formation, and diminishing Rydberg level separation for increasing principal number.

  19. Superposition of Polytropes in the Inner Heliosheath

    NASA Astrophysics Data System (ADS)

    Livadiotis, G.

    2016-03-01

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ≈ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.

  20. The Evolution and Development of Neural Superposition

    PubMed Central

    Agi, Egemen; Langen, Marion; Altschuler, Steven J.; Wu, Lani F.; Zimmermann, Timo

    2014-01-01

    Visual systems have a rich history as model systems for the discovery and understanding of basic principles underlying neuronal connectivity. The compound eyes of insects consist of up to thousands of small unit eyes that are connected by photoreceptor axons to set up a visual map in the brain. The photoreceptor axon terminals thereby represent neighboring points seen in the environment in neighboring synaptic units in the brain. Neural superposition is a special case of such a wiring principle, where photoreceptors from different unit eyes that receive the same input converge upon the same synaptic units in the brain. This wiring principle is remarkable, because each photoreceptor in a single unit eye receives different input and each individual axon, among thousands others in the brain, must be sorted together with those few axons that have the same input. Key aspects of neural superposition have been described as early as 1907. Since then neuroscientists, evolutionary and developmental biologists have been fascinated by how such a complicated wiring principle could evolve, how it is genetically encoded, and how it is developmentally realized. In this review article, we will discuss current ideas about the evolutionary origin and developmental program of neural superposition. Our goal is to identify in what way the special case of neural superposition can help us answer more general questions about the evolution and development of genetically “hard-wired” synaptic connectivity in the brain. PMID:24912630

  1. The principle of superposition in human prehension

    PubMed Central

    Zatsiorsky, Vladimir M.; Latash, Mark L.; Gao, Fan; Shim, Jae Kun

    2010-01-01

    The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: “Grasp the object stronger/weaker to prevent slipping” and “Maintain the rotational equilibrium of the object”. The effects of the two commands are summed up. PMID:20186284

  2. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to those codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.

  3. Number-Theoretic Functions via Convolution Rings.

    ERIC Educational Resources Information Center

    Berberian, S. K.

    1992-01-01

    Demonstrates the number-theoretic identity that the Dirichlet convolution of the number of divisors of an integer n with the number of positive integers k less than or equal to and relatively prime to n (Euler's totient) equals the sum of the divisors of n, using theory developed about multiplicative functions, the units of a convolution ring, and the Mobius function. (MDH)
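
    The identity is easy to check computationally. A minimal Python sketch of the convolution-ring product (Dirichlet convolution) verifying tau * phi = sigma:

        from math import gcd

        def divisors(n):
            return [d for d in range(1, n + 1) if n % d == 0]

        def tau(n):   return len(divisors(n))     # number of divisors
        def sigma(n): return sum(divisors(n))     # sum of divisors
        def phi(n):   return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

        def dirichlet(f, g, n):                   # (f * g)(n) in the ring
            return sum(f(d) * g(n // d) for d in divisors(n))

        assert all(dirichlet(tau, phi, n) == sigma(n) for n in range(1, 200))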

  4. About closedness by convolution of the Tsallis maximizers

    NASA Astrophysics Data System (ADS)

    Vignat, C.; Hero, A. O., III; Costa, J. A.

    2004-09-01

    In this paper, we study the stability under convolution of the maximizing distributions of the Tsallis entropy under energy constraint (called hereafter Tsallis distributions). These distributions are shown to obey three important properties: a stochastic representation property, an orthogonal invariance property and a duality property. As a consequence of these properties, the behavior of Tsallis distributions under convolution is characterized. Finally, a special random convolution, called the Kingman convolution, is shown to ensure the stability of Tsallis distributions.

  5. Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution

    SciTech Connect

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K.

    1999-01-01

    The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 in soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations, which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
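
    The convolution step at the heart of such a method can be sketched in a few lines (Python/NumPy; the point source and 1/r^2 kernel are toy placeholders for a cumulated-activity map and a radionuclide dose point kernel, and wrap-around from the circular FFT convolution is ignored here rather than zero-padded away).

        import numpy as np

        # Dose = activity convolved with a dose point kernel, via the 3D DFT.
        def dose_by_fft(activity, kernel_centered):
            A = np.fft.fftn(activity)
            K = np.fft.fftn(np.fft.ifftshift(kernel_centered))
            return np.real(np.fft.ifftn(A * K))

        shape = (64, 64, 64)
        activity = np.zeros(shape); activity[32, 32, 32] = 1.0    # point source
        r = np.linalg.norm(np.indices(shape) - 32, axis=0) + 0.5  # voxel radii
        dose = dose_by_fft(activity, 1.0 / r**2)                  # toy kernel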

  6. Macroscopic Quantum Superposition in Cavity Optomechanics

    NASA Astrophysics Data System (ADS)

    Liao, Jie-Qiao; Tian, Lin

    Quantum superposition in mechanical systems is not only key evidence of macroscopic quantum coherence, but can also be utilized in modern quantum technology. Here we propose an efficient approach for creating macroscopically distinct mechanical superposition states in a two-mode optomechanical system. Photon hopping between the two cavity modes is modulated sinusoidally. The modulated photon tunneling enables an ultrastrong radiation-pressure force acting on the mechanical resonator, and hence significantly increases the mechanical displacement induced by a single photon. We present systematic studies on the generation of the Yurke-Stoler-like states in the presence of system dissipations. The state generation method is general and it can be implemented with either optomechanical or electromechanical systems.

  7. Concurrence of superpositions of many states

    SciTech Connect

    Akhtarshenas, Seyed Javad

    2011-04-15

    In this paper, we use the concurrence vector as a measure of entanglement, and investigate lower and upper bounds on the concurrence of a superposition of bipartite states as a function of the concurrence of the superposed states. We show that the amount of entanglement quantified by the concurrence vector is exactly the same as that quantified by I concurrence, so that our results can be compared to those given in Phys. Rev. A 76, 042328 (2007). We obtain a tighter lower bound in the case in which the two superposed states are orthogonal. We also show that when the two superposed states are not necessarily orthogonal, both lower and upper bounds are, in general, tighter than the bounds given in terms of the I concurrence. An extension of the results to the case with more than two states in the superpositions is also given.

  8. A convolutional neural network neutrino event classifier

    DOE PAGES Beta

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  9. Deep Learning with Hierarchical Convolutional Factor Analysis

    PubMed Central

    Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence

    2013-01-01

    Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342

  10. A Mathematical Motivation for Complex-Valued Convolutional Networks.

    PubMed

    Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur

    2016-05-01

    A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets. PMID:26890348
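
    The three-step composition can be demonstrated directly (Python/NumPy sketch; the window length, frequency set, and pooling width are illustrative). With windowed complex exponentials as filters, the layer output is precisely a windowed absolute spectrum.

        import numpy as np

        # (1) convolve with windowed complex exponentials, (2) take absolute
        # values, (3) locally average: a spectrogram-like feature map.
        def complex_convnet_layer(x, window_len=32, n_freqs=16, pool=8):
            window = np.hanning(window_len)
            maps = []
            for m in range(n_freqs):
                filt = window * np.exp(2j * np.pi * m / window_len
                                       * np.arange(window_len))
                resp = np.abs(np.convolve(x, filt, mode="valid"))
                resp = resp[: len(resp) // pool * pool]
                maps.append(resp.reshape(-1, pool).mean(axis=1))
            return np.array(maps)       # rows: frequency, columns: time

        x = np.sin(2 * np.pi * 0.1 * np.arange(512))
        spec = complex_convnet_layer(x)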

  11. Quantum convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng

    2014-12-01

    In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.

  12. Satellite image classification using convolutional learning

    NASA Astrophysics Data System (ADS)

    Nguyen, Thao; Han, Jiho; Park, Dong-Chul

    2013-10-01

    A satellite image classification method using a Convolutional Neural Network (CNN) architecture is proposed in this paper. As a special case of deep learning, a CNN classifies images without a separate feature extraction step, whereas other existing classification methods rely on rather complex feature extraction processes. Preliminary experiments on a set of satellite image data show that the proposed classification method can be a promising alternative to existing feature extraction-based schemes in terms of classification accuracy and classification speed.

  13. Spatio-spectral concentration of convolutions

    NASA Astrophysics Data System (ADS)

    Hanasoge, Shravan M.

    2016-05-01

    Differential equations may possess coefficients that vary on a spectrum of scales. Because coefficients are typically multiplicative in real space, they turn into convolution operators in spectral space, mixing all wavenumbers. However, in many applications, only the largest scales of the solution are of interest and so the question turns to whether it is possible to build effective coarse-scale models of the coefficients in such a manner that the large scales of the solution are left intact. Here we apply the method of numerical homogenisation to deterministic linear equations to generate sub-grid-scale models of coefficients at desired frequency cutoffs. We use the Fourier basis to project, filter and compute correctors for the coefficients. The method is tested in 1D and 2D scenarios and found to reproduce the coarse scales of the solution to varying degrees of accuracy depending on the cutoff. We relate this method to mode-elimination Renormalisation Group (RG) and discuss the connection between accuracy and the cutoff wavenumber. The tradeoff is governed by a form of the uncertainty principle for convolutions, which states that as the convolution operator is squeezed in the spectral domain, it broadens in real space. As a consequence, basis sparsity is a high virtue and the choice of the basis can be critical.

  14. Convolutional Neural Network Based dem Super Resolution

    NASA Astrophysics Data System (ADS)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of learning examples, and a nonlocal algorithm was introduced to solve it; many experiments showed that the strategy is feasible. In that work, the learning examples were defined as parts of the original DEM together with their corresponding high-resolution measurements, which avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this strategy, the learning examples should be diverse and easy to obtain, yet this can introduce incompatibility and a lack of robustness. To overcome this, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low-resolution DEM and the output is its high-resolution counterpart. A three-layer model is adopted: the first layer detects features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. The network is trained on a set of learning DEMs by minimizing the error between its output and the expected high-resolution DEM. In practical applications, a test DEM is input to the convolutional neural network and a super-resolved DEM is obtained. Many experiments show that the CNN-based method obtains better reconstructions than many classic interpolation methods.
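
    A sketch of such a three-layer network is given below (Python/PyTorch; the layer widths and kernel sizes are illustrative assumptions in the spirit of SRCNN-style models, not the authors' exact architecture). Training would minimize the mean squared error between the output and the reference high-resolution DEM.

        import torch
        import torch.nn as nn

        class DEMSuperResolution(nn.Module):
            def __init__(self):
                super().__init__()
                self.detect = nn.Conv2d(1, 64, kernel_size=9, padding=4)       # feature detection
                self.compress = nn.Conv2d(64, 32, kernel_size=1)               # feature compression
                self.reconstruct = nn.Conv2d(32, 1, kernel_size=5, padding=2)  # DEM reconstruction

            def forward(self, dem):
                h = torch.relu(self.detect(dem))
                h = torch.relu(self.compress(h))
                return self.reconstruct(h)

        model = DEMSuperResolution()
        dem = torch.randn(1, 1, 64, 64)     # one single-channel DEM tile
        out = model(dem)                    # same spatial size as the input
        loss = nn.MSELoss()(out, torch.randn_like(out))   # training criterion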

  15. Blind Identification of Convolutional Encoder Parameters

    PubMed Central

    Su, Shaojing; Zhou, Jing; Huang, Zhiping; Liu, Chunwu; Zhang, Yimeng

    2014-01-01

    This paper gives a solution to the blind parameter identification of a convolutional encoder. The problem can be addressed in the context of noncooperative communications or adaptive coding and modulation (ACM) for cognitive radio networks. We consider an intelligent communication receiver which can blindly recognize the coding parameters of the received data stream. The only knowledge is that the stream is encoded using binary convolutional codes, while the coding parameters are unknown. Previous literature has made significant contributions to the recognition of convolutional encoder parameters in hard-decision situations. However, soft-decision systems are applied more and more as signal processing techniques improve. In this paper we propose a method that utilizes the soft information to improve the recognition performance in soft-decision communication systems. In addition, we propose a new recognition method based on a correlation attack to cope with low signal-to-noise ratio situations. Finally, we give simulation results to show the efficiency of the proposed methods. PMID:24982997

  16. Toward quantum superposition of living organisms

    NASA Astrophysics Data System (ADS)

    Romero-Isart, Oriol; Juan, Mathieu L.; Quidant, Romain; Cirac, J. Ignacio

    2010-03-01

    The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deléglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6; Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schrödinger's cat 'gedanken' paradigm (Schrödinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.

  17. Profile of CT scan output dose in axial and helical modes using convolution

    NASA Astrophysics Data System (ADS)

    Anam, C.; Haryanto, F.; Widita, R.; Arif, I.; Dougherty, G.

    2016-03-01

    The profile of the CT scan output dose is crucial for establishing the patient dose profile. The purpose of this study is to investigate the profile of the CT scan output dose in both axial and helical modes using convolution. A single scan output dose profile (SSDP) in the center of a head phantom was measured using a solid-state detector. The multiple scan output dose profile (MSDP) in the axial mode was calculated using convolution between the SSDP and a delta function, whereas for the helical mode the MSDP was calculated using convolution between the SSDP and a rectangular function. MSDPs were calculated for a number of scans (5, 10, 15, 20 and 25). The multiple scan average dose (MSAD) for differing numbers of scans was compared to the value of the CT dose index (CTDI). Finally, the edge values of the MSDP for every scan number were compared to the corresponding MSAD values. MSDPs were successfully generated by using convolution between an SSDP and the appropriate function. We found that the CTDI accurately estimates the MSAD only when the number of scans is more than 10. We also found that the edge values of the profiles were 42% to 93% lower than the corresponding MSADs.
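
    Both convolutions are one-liners in practice. The sketch below (Python/NumPy; the Gaussian-plus-tails SSDP and the scan geometry are illustrative) builds an axial MSDP from a comb of delta functions and a helical MSDP from a rectangular function.

        import numpy as np

        z = np.arange(-200, 201, dtype=float)          # position (mm)
        ssdp = np.exp(-(z / 5.0) ** 2) + 0.02 * np.exp(-np.abs(z) / 50.0)

        n_scans, spacing = 15, 10                      # table increment (mm)
        comb = np.zeros_like(z)                        # axial: impulse train
        for c in (np.arange(n_scans) - n_scans // 2) * spacing:
            comb[np.argmin(np.abs(z - c))] = 1.0
        msdp_axial = np.convolve(ssdp, comb, mode="same")

        rect = (np.abs(z) <= n_scans * spacing / 2).astype(float)  # helical
        msdp_helical = np.convolve(ssdp, rect, mode="same") / spacing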

  18. X-ray optics simulation using Gaussian superposition technique.

    PubMed

    Idir, Mourad; Cywiak, Moisés; Morales, Arquímedes; Modi, Mohammed H

    2011-09-26

    We present an efficient method to perform x-ray optics simulation with high or partially coherent x-ray sources using the Gaussian superposition technique. In a previous paper, we demonstrated that full characterization of optical systems, diffractive and geometric, is possible by using the Fresnel Gaussian Shape Invariant (FGSI) previously reported in the literature. The complex amplitude distribution in the object plane is represented by a linear superposition of complex Gaussian wavelets and then propagated through the optical system by means of the referred Gaussian invariant. This allows ray tracing through the optical system and at the same time allows calculating with high precision the complex wave-amplitude distribution at any plane of observation. This technique can be applied in a wide spectral range where the Fresnel diffraction integral applies, including visible light, x-rays, acoustic waves, etc. We describe the technique and include some computer simulations as illustrative examples for x-ray optical components. We show also that this method can be used to study partial or total coherence illumination problems. PMID:21996845

  19. X-ray optics simulation using Gaussian superposition technique

    SciTech Connect

    Idir, M.; Cywiak, M.; Morales, A.; Modi, M. H.

    2011-09-15

    We present an efficient method to perform x-ray optics simulation with high or partially coherent x-ray sources using the Gaussian superposition technique. In a previous paper, we demonstrated that full characterization of optical systems, diffractive and geometric, is possible by using the Fresnel Gaussian Shape Invariant (FGSI) previously reported in the literature. The complex amplitude distribution in the object plane is represented by a linear superposition of complex Gaussian wavelets and then propagated through the optical system by means of the referred Gaussian invariant. This allows ray tracing through the optical system and at the same time allows calculating with high precision the complex wave-amplitude distribution at any plane of observation. This technique can be applied in a wide spectral range where the Fresnel diffraction integral applies, including visible light, x-rays, acoustic waves, etc. We describe the technique and include some computer simulations as illustrative examples for x-ray optical components. We show also that this method can be used to study partial or total coherence illumination problems.

  20. About simple nonlinear and linear superpositions of special exact solutions of Veselov-Novikov equation

    SciTech Connect

    Dubrovsky, V. G.; Topovsky, A. V.

    2013-03-15

    New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, …, N are constructed via the Zakharov and Manakov ∂̄-dressing method. Simple nonlinear superpositions are represented up to a constant by the sums of solutions u^(n) and calculated by ∂̄-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. It is remarkable that in the zero-energy limit simple nonlinear superpositions convert to linear ones in the form of the sums of special solutions u^(n). It is shown that the sums u = u^(k_1) + … + u^(k_m), 1 ≤ k_1 < k_2 < … < k_m ≤ N, of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons and also superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions represent also new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  1. About simple nonlinear and linear superpositions of special exact solutions of Veselov-Novikov equation

    NASA Astrophysics Data System (ADS)

    Dubrovsky, V. G.; Topovsky, A. V.

    2013-03-01

    New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, …, N are constructed via the Zakharov and Manakov ∂̄-dressing method. Simple nonlinear superpositions are represented up to a constant by the sums of solutions u^(n) and calculated by ∂̄-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. It is remarkable that in the zero-energy limit simple nonlinear superpositions convert to linear ones in the form of the sums of special solutions u^(n). It is shown that the sums u = u^(k_1) + … + u^(k_m), 1 ≤ k_1 < k_2 < … < k_m ≤ N, of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons and also superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions represent also new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  2. Transient Response of Shells of Revolution by Direct Integration and Modal Superposition Methods

    NASA Technical Reports Server (NTRS)

    Stephens, W. B.; Adelman, H. M.

    1974-01-01

    The results of an analytical effort to obtain and evaluate transient response data for a cylindrical and a conical shell by use of two different approaches, direct integration and modal superposition, are described. The inclusion of nonlinear terms is more important than the inclusion of secondary linear effects (transverse shear deformation and rotary inertia), although there are thin-shell structures where these secondary effects are important. The advantages of the direct integration approach are that geometrically nonlinear and secondary effects are easy to include and high-frequency response may be calculated. In comparison to the modal superposition technique, the computer storage requirements are smaller. The advantages of the modal superposition approach are that the solution is independent of the previous time history and that once the modal data are obtained, the response for repeated cases may be efficiently computed. Also, any admissible set of initial conditions can be applied.

  3. Atom Microscopy via Dual Resonant Superposition

    NASA Astrophysics Data System (ADS)

    Abdul Jabar, M. S.; Bakht, Amin Bacha; Jalaluddin, M.; Iftikhar, Ahmad

    2015-12-01

    An M-type Rb87 atomic system is proposed for one-dimensional atom microscopy under conditions of electromagnetically induced transparency. Super-localization of the atom is observed in the absorption spectrum, while delocalization is observed in the dispersion spectrum, due to the dual superposition effect of the resonant fields. The observed minimum-uncertainty peaks will find important applications in laser cooling, creating focused atom beams, atom nanolithography, and the measurement of the center-of-mass wave function of moving atoms.

  4. Design of artificial spherical superposition compound eye

    NASA Astrophysics Data System (ADS)

    Cao, Zhaolou; Zhai, Chunjie; Wang, Keyi

    2015-12-01

    In this research, design of artificial spherical superposition compound eye is presented. The imaging system consists of three layers of lens arrays. In each channel, two lenses are designed to control the angular magnification and a field lens is added to improve the image quality and extend the field of view. Aspherical surfaces are introduced to improve the image quality. Ray tracing results demonstrate that the light from the same object point is focused at the same imaging point through different channels. Therefore the system has much higher energy efficiency than conventional spherical apposition compound eye.

  5. Maximum predictive power and the superposition principle

    NASA Technical Reports Server (NTRS)

    Summhammer, Johann

    1994-01-01

    In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept the accuracy of a prediction must only depend on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that for a probability which is a function of two other probabilities maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.

  6. On Kolmogorov's superpositions and Boolean functions

    SciTech Connect

    Beiu, V.

    1998-12-31

    The paper overviews results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on an explicit numerical (i.e., constructive) algorithm for Kolmogorov's superpositions, it is shown that for obtaining minimum-size neural networks implementing any Boolean function, the activation function of the neurons is the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size in the worst case, it follows that size-optimal solutions for implementing arbitrary Boolean functions require analog circuitry. Conclusions and several comments on the required precision end the paper.

  7. A convolution model of rock bed thermal storage units

    NASA Astrophysics Data System (ADS)

    Sowell, E. F.; Curry, R. L.

    1980-01-01

    A method is presented whereby a packed-bed thermal storage unit is dynamically modeled for bi-directional flow and arbitrary input flow stream temperature variations. The method is based on the principle of calculating the output temperature as the sum of earlier input temperatures, each multiplied by a predetermined 'response factor', i.e., discrete convolution. A computer implementation of the scheme, in the form of a subroutine for a widely used solar simulation program (TRNSYS), is described and numerical results are compared with other models. Also, a method for efficient computation of the required response factors is described; this solution, for a triangular input pulse, was previously unreported, although the solution method is also applicable to other input functions. The solution requires a single integration of a known function which is easily carried out numerically to the required precision.
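
    The response-factor model itself is a short loop (Python sketch; the geometric response-factor sequence is an illustrative placeholder for the triangular-pulse solution derived in the paper).

        import numpy as np

        # Outlet temperature as a discrete convolution of past inlet
        # temperatures with precomputed response factors r[k].
        def outlet_temperature(inlet_history, r):
            k = min(len(inlet_history), len(r))
            return sum(r[j] * inlet_history[-1 - j] for j in range(k))

        r = 0.3 * 0.7 ** np.arange(24)           # illustrative response factors
        inlet = [20.0] * 12 + [60.0] * 12        # step change in inlet temperature
        print(outlet_temperature(inlet, r))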

  8. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  9. Resampling of data between arbitrary grids using convolution interpolation.

    PubMed

    Rasche, V; Proksa, R; Sinkus, R; Börnert, P; Eggers, H

    1999-05-01

    For certain medical applications resampling of data is required. In magnetic resonance tomography (MRT) or computed tomography (CT), e.g., data may be sampled on nonrectilinear grids in the Fourier domain. For the image reconstruction a convolution-interpolation algorithm, often called gridding, can be applied for resampling of the data onto a rectilinear grid. Resampling of data from a rectilinear onto a nonrectilinear grid is needed, e.g., if projections of a given rectilinear data set are to be obtained. In this paper we introduce the application of convolution interpolation for resampling of data from one arbitrary grid onto another. The basic algorithm can be split into two steps. First, the data are resampled from the arbitrary input grid onto a rectilinear grid and second, the rectilinear data are resampled onto the arbitrary output grid. Furthermore, we introduce a new technique to derive the sampling density function needed for the first step of our algorithm. For fast, sampling-pattern-independent determination of the sampling density function, the Voronoi diagram of the sample distribution is calculated. The volume of the Voronoi cell around each sample is used as a measure for the sampling density. It is shown that the introduced resampling technique allows fast resampling of data between arbitrary grids. Furthermore, it is shown that the suggested approach to derive the sampling density function is suitable even for arbitrary sampling patterns. Examples are given in which the proposed technique has been applied for the reconstruction of data acquired along spiral, radial, and arbitrary trajectories and for the fast calculation of projections of a given rectilinearly sampled image. PMID:10416800
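
    The first step of the algorithm reduces to a weighted kernel sum. A 1D Python sketch follows; the Gaussian kernel and the np.gradient stand-in for the Voronoi-based density weights are simplifying assumptions, not the paper's exact choices.

        import numpy as np

        # Convolution interpolation (gridding) from arbitrary sample
        # positions onto a rectilinear grid, with density compensation w.
        def grid_1d(positions, values, w, grid, sigma=0.7):
            out = np.zeros(len(grid))
            for p, v, wi in zip(positions, values, w):
                out += wi * v * np.exp(-((grid - p) / sigma) ** 2)
            return out

        pos = np.sort(np.random.rand(50)) * 10.0
        w = np.gradient(pos)              # crude 1D proxy for Voronoi cell size
        resampled = grid_1d(pos, np.sin(pos), w, np.linspace(0, 10, 128))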

  10. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.

  11. Convolution neural networks for ship type recognition

    NASA Astrophysics Data System (ADS)

    Rainey, Katie; Reeder, John D.; Corelli, Alexander G.

    2016-05-01

    Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.

  12. Geometric multi-resolution analysis and data-driven convolutions

    NASA Astrophysics Data System (ADS)

    Strawn, Nate

    2015-09-01

    We introduce a procedure for learning discrete convolutional operators for generic datasets which recovers the standard block convolutional operators when applied to sets of natural images. The key observation is that the standard block convolutional operators on images are intuitive because humans naturally understand the grid structure of the self-evident functions over image spaces (pixels). This procedure first constructs a Geometric Multi-Resolution Analysis (GMRA) on the set of variables giving rise to a dataset, and then leverages the details of this data structure to identify subsets of variables upon which convolutional operators are supported, as well as a space of functions that can be shared coherently amongst these supports.

  13. Convolution Inequalities for the Boltzmann Collision Operator

    NASA Astrophysics Data System (ADS)

    Alonso, Ricardo J.; Carneiro, Emanuel; Gamba, Irene M.

    2010-09-01

    We study integrability properties of a general version of the Boltzmann collision operator for hard and soft potentials in n dimensions. A reformulation of the collisional integrals allows us to write the weak form of the collision operator as a weighted convolution, where the weight is given by an operator invariant under rotations. Using a symmetrization technique in L^p we prove a Young's inequality for hard potentials, which is sharp for Maxwell molecules in the L^2 case. Further, we find a new Hardy-Littlewood-Sobolev type of inequality for Boltzmann collision integrals with soft potentials. The same method extends to radially symmetric, non-increasing potentials that lie in some weak L^s space or in L^s. The method we use resembles a Brascamp, Lieb and Luttinger approach for multilinear weighted convolution inequalities and follows a weak formulation setting. Consequently, it is closely connected to the classical analysis of Young and Hardy-Littlewood-Sobolev inequalities. In all cases, the inequality constants are explicitly given by formulas depending on integrability conditions of the angular cross section (in the spirit of Grad cut-off). As an additional application of the technique we also obtain estimates with exponential weights for hard potentials in both conservative and dissipative interactions.
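
    For reference, the classical unweighted form of the inequality in question reads as follows (in LaTeX; the paper's hard-potential version carries an additional rotation-invariant weight and explicit constants):

        \| f * g \|_{L^r} \le C_{p,q} \, \| f \|_{L^p} \, \| g \|_{L^q},
        \qquad \frac{1}{p} + \frac{1}{q} = 1 + \frac{1}{r},
        \quad 1 \le p, q, r \le \infty.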

  14. Convolutional fountain distribution over fading wireless channels

    NASA Astrophysics Data System (ADS)

    Usman, Mohammed

    2012-08-01

    Mobile broadband has opened the possibility of a rich variety of services to end users. Broadcast/multicast of multimedia data is one such service which can be used to deliver multimedia to multiple users economically. However, the radio channel poses serious challenges due to its time-varying properties, resulting in each user experiencing different channel characteristics, independent of other users. Conventional methods of achieving reliability in communication, such as automatic repeat request and forward error correction do not scale well in a broadcast/multicast scenario over radio channels. Fountain codes, being rateless and information additive, overcome these problems. Although the design of fountain codes makes it possible to generate an infinite sequence of encoded symbols, the erroneous nature of radio channels mandates the need for protecting the fountain-encoded symbols, so that the transmission is feasible. In this article, the performance of fountain codes in combination with convolutional codes, when used over radio channels, is presented. An investigation of various parameters, such as goodput, delay and buffer size requirements, pertaining to the performance of fountain codes in a multimedia broadcast/multicast environment is presented. Finally, a strategy for the use of 'convolutional fountain' over radio channels is also presented.

  15. Convolution formulations for non-negative intensity.

    PubMed

    Williams, Earl G

    2013-08-01

    Previously unknown spatial convolution formulas for a variant of the active normal intensity in planar coordinates have been derived that use measured pressure or normal velocity near-field holograms to construct a positive-only (outward) intensity distribution in the plane, quantifying the areas of the vibrating structure that produce radiation to the far-field. This is an extension of the outgoing-only (unipolar) intensity technique recently developed for arbitrary geometries by Steffen Marburg. The method is applied independently to pressure and velocity data measured in a plane close to the surface of a point-driven, unbaffled rectangular plate in the laboratory. It is demonstrated that the sound producing regions of the structure are clearly revealed using the derived formulas and that the spatial resolution is limited to a half-wavelength. A second set of formulas called the hybrid-intensity formulas are also derived which yield a bipolar intensity using a different spatial convolution operator, again using either the measured pressure or velocity. It is demonstrated from the experiment results that the velocity formula yields the classical active intensity and the pressure formula an interesting hybrid intensity that may be useful for source localization. Computations are fast and carried out in real space without Fourier transforms into wavenumber space. PMID:23927105

  16. Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method

    SciTech Connect

    Li, Haisen S.; Chetty, Indrin J.; Solberg, Timothy D.

    2008-05-15

    The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method (''average-based convolution''), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) was used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared against the planned dose evaluated by using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (>30 fractions) regimen, the discrepancy in total dose due to interplay effect was negligible.
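
    A minimal 1D illustration of the convolution step, with hypothetical numbers: the static dose of one segment is convolved with the motion PDF active during that segment, whereas the conventional method would use the fraction-averaged PDF for every segment:

    ```python
    import numpy as np

    # Hypothetical static segment dose on a 0.1 mm grid and an assumed
    # Gaussian motion PDF for that segment (sigma = 2 mm).
    x = np.linspace(-30, 30, 601)                      # position (mm)
    static_dose = np.where(np.abs(x) < 10, 1.0, 0.0)   # idealized profile

    def motion_blurred(dose, pdf):
        pdf = pdf / pdf.sum()                          # normalize the PDF
        return np.convolve(dose, pdf, mode="same")     # dose seen by target

    pdf_segment = np.exp(-x**2 / (2 * 2.0**2))
    blurred = motion_blurred(static_dose, pdf_segment)
    ```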

  17. Phase properties of multicomponent superposition states in various amplifiers

    NASA Technical Reports Server (NTRS)

    Lee, Kang-Soo; Kim, M. S.

    1994-01-01

    There have been theoretical studies of the generation of optical coherent superposition states. Once such a superposition state is generated, it is natural to ask whether it can be amplified without losing the nonclassical properties of the field state. We consider amplification of the superposition state in various amplifiers, such as a sub-Poissonian amplifier, a phase-sensitive amplifier and a classical amplifier. We show the evolution of the phase probability distribution functions in the amplifier.

  18. On the superposition principle in interference experiments

    PubMed Central

    Sinha, Aninda; H. Vijay, Aravind; Sinha, Urbasi

    2015-01-01

    The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter, which can be used to determine the deviation from the usual application of the principle. We have found excellent agreement between the analytic distribution and those estimated earlier by numerical integration as well as by resource-intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations and to the non-relativistic Schrödinger equation. PMID:25973948
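
    For reference, the (unnormalized) Sorkin parameter for a triple-slit experiment has the standard definition below, where P with subscripts denotes the detection probability with the indicated slits open; the Born rule predicts kappa = 0, and the analytic formula mentioned above quantifies the deviation:

    ```latex
    \kappa = P_{ABC} - P_{AB} - P_{BC} - P_{AC} + P_{A} + P_{B} + P_{C}
    ```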

  19. Authentication Protocol using Quantum Superposition States

    SciTech Connect

    Kanamori, Yoshito; Yoo, Seong-Moo; Gregory, Don A.; Sheldon, Frederick T

    2009-01-01

    When it became known that quantum computers could break the RSA (named for its creators - Rivest, Shamir, and Adleman) encryption algorithm within a polynomial-time, quantum cryptography began to be actively studied. Other classical cryptographic algorithms are only secure when malicious users do not have sufficient computational power to break security within a practical amount of time. Recently, many quantum authentication protocols sharing quantum entangled particles between communicators have been proposed, providing unconditional security. An issue caused by sharing quantum entangled particles is that it may not be simple to apply these protocols to authenticate a specific user in a group of many users. An authentication protocol using quantum superposition states instead of quantum entangled particles is proposed. The random number shared between a sender and a receiver can be used for classical encryption after the authentication has succeeded. The proposed protocol can be implemented with the current technologies we introduce in this paper.

  20. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.

  1. Image statistics decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Pitt, G. H., III; Swanson, L.; Yuen, J. H.

    1987-01-01

    It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without much expenditure, so its feasibility and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.

  2. Bacterial colony counting by Convolutional Neural Networks.

    PubMed

    Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto

    2015-08-01

    Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, nevertheless fundamental task in microbiology. Computer vision based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging, due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNN) for counting the number of colonies contained in confluent agglomerates, which scored an overall accuracy of 92.8% on a large, challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches, making it a promising approach for many related applications. PMID:26738016
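
    A minimal sketch of the kind of CNN classifier such an approach relies on; the architecture, patch size, and maximum count below are illustrative assumptions, not the authors' network:

    ```python
    import torch
    import torch.nn as nn

    # Toy counter: map a 64x64 grayscale patch of an agglomerate to a
    # count class (1..6 colonies), i.e. classify aggregate cardinality.
    class ColonyCounter(nn.Module):
        def __init__(self, max_count=6):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(64 * 8 * 8, max_count)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    logits = ColonyCounter()(torch.randn(4, 1, 64, 64))  # (4, 6) class scores
    ```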

  3. Nonclassical Properties of Q-Deformed Superposition Light Field State

    NASA Technical Reports Server (NTRS)

    Ren, Min; Shenggui, Wang; Ma, Aiqun; Jiang, Zhuohong

    1996-01-01

    In this paper, the squeezing effect, the bunching effect and the anti-bunching effect of a superposition light field state involving the q-deformed vacuum state and the q-Glauber coherent state are studied, and the dependence of these effects on the controllable q-parameter is obtained.

  4. Dose discrepancies in the buildup region and their impact on dose calculations for IMRT fields

    SciTech Connect

    Hsu, Shu-Hui; Moran, Jean M.; Chen Yu; Kulasekere, Ravi; Roberson, Peter L.

    2010-05-15

    Purpose: Dose accuracy in the buildup region for radiotherapy treatment planning suffers from challenges in both measurement and calculation. This study investigates the dosimetry in the buildup region at normal and oblique incidences for open and IMRT fields and assesses the quality of the treatment planning calculations. Methods: This study was divided into three parts. First, percent depth doses and profiles (for 5x5, 10x10, 20x20, and 30x30 cm{sup 2} field sizes at 0 deg., 45 deg., and 70 deg. incidences) were measured in the buildup region in Solid Water using an Attix parallel plate chamber and Kodak XV film, respectively. Second, the parameters in the empirical contamination (EC) term of the convolution/superposition (CVSP) calculation algorithm were fitted based on open field measurements. Finally, seven segmental head-and-neck IMRT fields were measured on a flat phantom geometry and compared to calculations using {gamma} and dose-gradient compensation (C) indices to evaluate the impact of residual discrepancies and to assess the adequacy of the contamination term for IMRT fields. Results: Local deviations between measurements and calculations for open fields were within 1% and 4% in the buildup region for normal and oblique incidences, respectively. The C index with 5%/1 mm criteria for IMRT fields ranged from 89% to 99% and from 96% to 98% at 2 mm and 10 cm depths, respectively. The quality of agreement in the buildup region for open and IMRT fields is comparable to that in nonbuildup regions. Conclusions: The added EC term in CVSP was determined to be adequate for both open and IMRT fields. Due to the dependence of calculation accuracy on (1) EC modeling, (2) internal convolution and density grid sizes, (3) implementation details in the algorithm, and (4) the accuracy of measurements used for treatment planning system commissioning, the authors recommend an evaluation of the accuracy of near-surface dose calculations as a part of treatment planning
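
    A minimal 1D sketch of the gamma evaluation used in such comparisons, with the 5%/1 mm criteria quoted above and hypothetical dose values (the dose-gradient compensation index C is not reproduced here):

    ```python
    import numpy as np

    # For each measurement point, gamma is the minimum over calculation
    # points of the combined dose-difference / distance-to-agreement
    # metric; a point passes when gamma <= 1.
    def gamma_1d(x, d_meas, d_calc, dd=0.05, dta=1.0):
        g = np.empty(len(x))
        for i, (xi, di) in enumerate(zip(x, d_meas)):
            dist = (x - xi) / dta                        # normalized distance
            diff = (d_calc - di) / (dd * d_meas.max())   # normalized dose diff
            g[i] = np.sqrt(dist**2 + diff**2).min()
        return g

    x = np.linspace(0, 50, 251)          # depth (mm), hypothetical profile
    d_meas = np.exp(-x / 100.0)
    d_calc = 1.02 * d_meas               # 2% systematic offset
    pass_rate = (gamma_1d(x, d_meas, d_calc) <= 1).mean()
    ```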

  5. Quantitative Analysis of Isotope Distributions In Proteomic Mass Spectrometry Using Least-Squares Fourier Transform Convolution

    PubMed Central

    Sperling, Edit; Bunner, Anne E.; Sykes, Michael T.; Williamson, James R.

    2008-01-01

    Quantitative proteomic mass spectrometry involves comparison of the amplitudes of peaks resulting from different isotope labeling patterns, including fractional atomic labeling and fractional residue labeling. We have developed a general and flexible analytical treatment of the complex isotope distributions that arise in these experiments, using Fourier transform convolution to calculate labeled isotope distributions and least-squares for quantitative comparison with experimental peaks. The degree of fractional atomic and fractional residue labeling can be determined from experimental peaks at the same time as the integrated intensity of all of the isotopomers in the isotope distribution. The approach is illustrated using data with fractional 15N-labeling and fractional 13C-isoleucine labeling. The least-squares Fourier transform convolution approach can be applied to many types of quantitative proteomic data, including data from stable isotope labeling by amino acids in cell culture and pulse labeling experiments. PMID:18522437
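
    A minimal sketch of the underlying idea: an isotope distribution is the repeated convolution of per-atom isotope patterns, and fractional labeling simply alters the abundances being convolved (the paper carries out the equivalent product in Fourier space and fits amplitudes by least-squares). Masses here are collapsed onto a nominal-mass grid, and the fragment composition is hypothetical:

    ```python
    import numpy as np

    carbon = np.array([0.9893, 0.0107])     # natural 12C/13C abundances
    nitrogen = np.array([0.9964, 0.0036])   # natural 14N/15N abundances
    f = 0.30                                # assumed fractional 15N labeling
    nitrogen_labeled = np.array([1 - f, f])

    def isotope_pattern(atom_counts):
        dist = np.array([1.0])
        for pattern, n in atom_counts:
            for _ in range(n):
                dist = np.convolve(dist, pattern)   # add one atom at a time
        return dist / dist.sum()

    # Hypothetical fragment with 20 C and 5 partially labeled N:
    pattern = isotope_pattern([(carbon, 20), (nitrogen_labeled, 5)])
    ```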

  6. Blind source separation of convolutive mixtures

    NASA Astrophysics Data System (ADS)

    Makino, Shoji

    2006-04-01

    This paper introduces the blind source separation (BSS) of convolutive mixtures of acoustic signals, especially speech. A statistical and computational technique, called independent component analysis (ICA), is examined. By achieving nonlinear decorrelation, nonstationary decorrelation, or time-delayed decorrelation, we can find source signals only from observed mixed signals. Particular attention is paid to the physical interpretation of BSS from the acoustical signal processing point of view. Frequency-domain BSS is shown to be equivalent to two sets of frequency domain adaptive microphone arrays, i.e., adaptive beamformers (ABFs). Although BSS can reduce reverberant sounds to some extent in the same way as ABF, it mainly removes the sounds from the jammer direction. This is why BSS has difficulties with long reverberation in the real world. If sources are not "independent," the dependence results in bias noise when obtaining the correct separation filter coefficients. Therefore, the performance of BSS is limited by that of ABF. Although BSS is upper bounded by ABF, BSS has a strong advantage over ABF. BSS can be regarded as an intelligent version of ABF in the sense that it can adapt without any information on the array manifold or the target direction, and sources can be simultaneously active in BSS.

  7. Accelerated unsteady flow line integral convolution.

    PubMed

    Liu, Zhanping; Moorhead, Robert J

    2005-01-01

    Unsteady flow line integral convolution (UFLIC) is a texture synthesis technique for visualizing unsteady flows with high temporal-spatial coherence. Unfortunately, UFLIC requires considerable time to generate each frame due to the huge amount of pathline integration that is computed for particle value scattering. This paper presents Accelerated UFLIC (AUFLIC) for near interactive (1 frame/second) visualization with 160,000 particles per frame. AUFLIC reuses pathlines in the value scattering process to reduce computationally expensive pathline integration. A flow-driven seeding strategy is employed to distribute seeds such that only a few of them need pathline integration while most seeds are placed along the pathlines advected at earlier times by other seeds upstream and, therefore, the known pathlines can be reused for fast value scattering. To maintain a dense scattering coverage to convey high temporal-spatial coherence while keeping the expense of pathline integration low, a dynamic seeding controller is designed to decide whether to advect, copy, or reuse a pathline. At a negligible memory cost, AUFLIC is 9 times faster than UFLIC with comparable image quality. PMID:15747635

  8. Metaheuristic Algorithms for Convolution Neural Network.

    PubMed

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in the research areas of science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement of convolution neural networks (CNN), a famous deep learning method, are still rarely investigated. Deep learning relates to a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence: creating a machine that could successfully perform any intellectual task that can be carried out by a human. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on the MNIST and CIFAR classification datasets was evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy is also improved (by up to 7.14 percent). PMID:27375738

  9. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in the research areas of science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement of convolution neural networks (CNN), a famous deep learning method, are still rarely investigated. Deep learning relates to a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence: creating a machine that could successfully perform any intellectual task that can be carried out by a human. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on the MNIST and CIFAR classification datasets was evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy is also improved (by up to 7.14 percent). PMID:27375738

  10. Superposition rules for higher order systems and their applications

    NASA Astrophysics Data System (ADS)

    Cariñena, J. F.; Grabowski, J.; de Lucas, J.

    2012-05-01

    Superposition rules form a class of functions that describe general solutions of systems of first-order ordinary differential equations in terms of generic families of particular solutions and certain constants. In this work, we extend this notion and other related ones to systems of higher order differential equations and analyse their properties. Several results concerning the existence of various types of superposition rules for higher order systems are proved and illustrated with examples extracted from the physics and mathematics literature. In particular, two new superposition rules for the second- and third-order Kummer-Schwarz equations are derived.

  11. Nonclassical properties and quantum resources of hierarchical photonic superposition states

    SciTech Connect

    Volkoff, T. J.

    2015-11-15

    We motivate and introduce a class of “hierarchical” quantum superposition states of N coupled quantum oscillators. Unlike other well-known multimode photonic Schrödinger-cat states such as entangled coherent states, the hierarchical superposition states are characterized as two-branch superpositions of tensor products of single-mode Schrödinger-cat states. In addition to analyzing the photon statistics and quasiprobability distributions of prominent examples of these nonclassical states, we consider their usefulness for high-precision quantum metrology of nonlinear optical Hamiltonians and quantify their mode entanglement. We propose two methods for generating hierarchical superpositions in N = 2 coupled microwave cavities, exploiting currently existing quantum optical technology for generating entanglement between spatially separated electromagnetic field modes.

  12. A Galois connection approach to superposition and inaccessibility

    NASA Astrophysics Data System (ADS)

    Butterfield, Jeremy; Melia, Joseph

    1993-12-01

    Working in a quantum logic framework and using the idea of Galois connections, we give a natural sufficient condition for superposition and inaccessibility to give the same closure map on sets of states.

  13. Quantum State Engineering Via Coherent-State Superpositions

    NASA Technical Reports Server (NTRS)

    Janszky, Jozsef; Adam, P.; Szabo, S.; Domokos, P.

    1996-01-01

    The quantum interference between the two parts of the optical Schrodinger-cat state makes it possible to construct a wide class of quantum states via discrete superpositions of coherent states. Even a small number of coherent states can approximate a given quantum state to high accuracy when the distance between the coherent states is optimized; e.g., a nearly perfect Fock state can be constructed from discrete superpositions of n + 1 coherent states lying in the vicinity of the vacuum state.

  14. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. PMID:26700535

  15. A Superposition Technique for Deriving Photon Scattering Statistics in Plane-Parallel Cloudy Atmospheres

    NASA Technical Reports Server (NTRS)

    Platnick, S.

    1999-01-01

    Photon transport in a multiple scattering medium is critically dependent on scattering statistics, in particular the average number of scatterings. A superposition technique is derived to accurately determine the average number of scatterings encountered by reflected and transmitted photons within arbitrary layers in plane-parallel, vertically inhomogeneous clouds. As expected, the resulting scattering number profiles are highly dependent on cloud particle absorption and solar/viewing geometry. The technique uses efficient adding and doubling radiative transfer procedures, avoiding traditional time-intensive Monte Carlo methods. Derived superposition formulae are applied to a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Cloud remote sensing techniques that use solar reflectance or transmittance measurements generally assume a homogeneous plane-parallel cloud structure. The scales over which this assumption is relevant, in both the vertical and horizontal, can be obtained from the superposition calculations. Though the emphasis is on photon transport in clouds, the derived technique is applicable to any scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers in the atmosphere.

  16. Hardware efficient implementation of DFT using an improved first-order moments based cyclic convolution structure

    NASA Astrophysics Data System (ADS)

    Xiong, Jun; Liu, J. G.; Cao, Li

    2015-12-01

    This paper presents hardware-efficient designs for implementing the one-dimensional (1D) discrete Fourier transform (DFT). Once the DFT is formulated in cyclic convolution form, the improved first-order moments-based cyclic convolution structure can be used as the basic computing unit for the DFT computation, which only contains a control module, a barrel shifter and (N-1)/2 accumulation units. After decomposing and reordering the twiddle factors, all that remains is to shift the input data sequence and accumulate the results under the control of the statistical results on the twiddle factors. The whole calculation process contains only shift operations and additions, with no need for multipliers or large memory. Compared with the previous first-order moments-based structure for the DFT, the proposed designs have the advantages of less hardware consumption, lower power consumption and the flexibility to achieve better performance in certain cases. A series of experiments has demonstrated the high performance of the proposed designs in terms of the area-time product and power consumption. Similar efficient designs can be obtained for other computations, such as the DCT/IDCT, DST/IDST, digital filtering and correlation, by transforming them into first-order moments-based cyclic convolution form.
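
    The first-order-moments structure itself is hardware-specific, but the underlying step of recasting the DFT as a convolution can be illustrated with Bluestein's identity, in which the DFT becomes a linear convolution with a chirp sequence (a sketch of the general idea, not the authors' formulation):

    ```python
    import numpy as np

    def bluestein_dft(x):
        """DFT via Bluestein's chirp-z identity: X = chirp * (a conv b)."""
        x = np.asarray(x, dtype=complex)
        N = len(x)
        n = np.arange(N)
        chirp = np.exp(-1j * np.pi * n**2 / N)
        a = x * chirp                              # pre-multiplied input
        m = np.arange(-(N - 1), N)
        b = np.exp(1j * np.pi * m**2 / N)          # chirp convolution kernel
        conv = np.convolve(a, b)                   # linear convolution
        return chirp * conv[N - 1:2 * N - 1]

    x = np.random.rand(7)
    assert np.allclose(bluestein_dft(x), np.fft.fft(x))
    ```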

  17. Vehicle detection based on visual saliency and deep sparse convolution hierarchical model

    NASA Astrophysics Data System (ADS)

    Cai, Yingfeng; Wang, Hai; Chen, Xiaobo; Gao, Li; Chen, Long

    2016-06-01

    Traditional vehicle detection algorithms use traversal-search-based vehicle candidate generation and hand-crafted features for classifier training and candidate verification. These types of methods generally have long processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct detection rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, outperforming existing state-of-the-art algorithms. More importantly, highly discriminative multi-scale features are generated by the deep sparse convolution network, which has broad application prospects for target recognition in the field of intelligent vehicles.

  18. Quantum superposition at the half-metre scale.

    PubMed

    Kovachy, T; Asenbaum, P; Overstreet, C; Donnelly, C A; Dickerson, S M; Sugarbaker, A; Hogan, J M; Kasevich, M A

    2015-12-24

    The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger's cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov-Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity. PMID:26701053

  19. Quantum superposition at the half-metre scale

    NASA Astrophysics Data System (ADS)

    Kovachy, T.; Asenbaum, P.; Overstreet, C.; Donnelly, C. A.; Dickerson, S. M.; Sugarbaker, A.; Hogan, J. M.; Kasevich, M. A.

    2015-12-01

    The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger’s cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov-Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity.

  20. Tissue Heterogeneity in IMRT Dose Calculation for Lung Cancer

    SciTech Connect

    Pasciuti, Katia; Iaccarino, Giuseppe; Strigari, Lidia; Malatesta, Tiziana; Benassi, Marcello; Di Nallo, Anna Maria; Mirri, Alessandra; Pinzi, Valentina; Landoni, Valeria

    2011-07-01

    The aim of this study was to evaluate the differences in accuracy of dose calculation between 3 commonly used algorithms, the Pencil Beam algorithm (PB), the Anisotropic Analytical Algorithm (AAA), and the Collapsed Cone Convolution Superposition (CCCS) for intensity-modulated radiation therapy (IMRT). The 2D dose distributions obtained with the 3 algorithms were compared on each CT slice pixel by pixel, using the MATLAB code (The MathWorks, Natick, MA), and the agreement was assessed with the {gamma} function. The effect of the differences on dose-volume histograms (DVHs), tumor control, and normal tissue complication probability (TCP and NTCP) was also evaluated, and its significance was quantified by using a nonparametric test. In general, PB generates regions of over-dosage both in the lung and in the tumor area. These differences are not always evident in the DVH of the lung, although the Wilcoxon test indicated significant differences in 2 of 4 patients. Disagreement in the lung region was also found when the {Gamma} analysis was performed. The effect on TCP is less important than for NTCP because of the slope of the curve at the level of the dose of interest. The effect of dose calculation inaccuracy is patient-dependent and strongly related to beam geometry and to the localization of the tumor. When multiple intensity-modulated beams are used, the effect of the presence of the heterogeneity on dose distribution may not always be easily predictable.

  1. Tissue heterogeneity in IMRT dose calculation for lung cancer.

    PubMed

    Pasciuti, Katia; Iaccarino, Giuseppe; Strigari, Lidia; Malatesta, Tiziana; Benassi, Marcello; Di Nallo, Anna Maria; Mirri, Alessandra; Pinzi, Valentina; Landoni, Valeria

    2011-01-01

    The aim of this study was to evaluate the differences in accuracy of dose calculation between 3 commonly used algorithms, the Pencil Beam algorithm (PB), the Anisotropic Analytical Algorithm (AAA), and the Collapsed Cone Convolution Superposition (CCCS) for intensity-modulated radiation therapy (IMRT). The 2D dose distributions obtained with the 3 algorithms were compared on each CT slice pixel by pixel, using the MATLAB code (The MathWorks, Natick, MA), and the agreement was assessed with the γ function. The effect of the differences on dose-volume histograms (DVHs), tumor control, and normal tissue complication probability (TCP and NTCP) was also evaluated, and its significance was quantified by using a nonparametric test. In general, PB generates regions of over-dosage both in the lung and in the tumor area. These differences are not always evident in the DVH of the lung, although the Wilcoxon test indicated significant differences in 2 of 4 patients. Disagreement in the lung region was also found when the Γ analysis was performed. The effect on TCP is less important than for NTCP because of the slope of the curve at the level of the dose of interest. The effect of dose calculation inaccuracy is patient-dependent and strongly related to beam geometry and to the localization of the tumor. When multiple intensity-modulated beams are used, the effect of the presence of the heterogeneity on dose distribution may not always be easily predictable. PMID:20970989

  2. A Geometric Construction of Cyclic Cocycles on Twisted Convolution Algebras

    NASA Astrophysics Data System (ADS)

    Angel, Eitan

    2010-09-01

    In this thesis we give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. In his seminal book, Connes constructs a map from the equivariant cohomology of a manifold carrying the action of a discrete group into the periodic cyclic cohomology of the associated convolution algebra. Furthermore, for proper étale groupoids, J.-L. Tu and P. Xu provide a map between the periodic cyclic cohomology of a gerbe twisted convolution algebra and twisted cohomology groups. Our focus will be the convolution algebra with a product defined by a gerbe over a discrete translation groupoid. When the action is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial notions related to ideas of J. Dupont to construct a simplicial form representing the Dixmier-Douady class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial Dixmier-Douady form to the mixed bicomplex of certain matrix algebras. Finally, we define a morphism from this complex to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.

  3. Observing a coherent superposition of an atom and a molecule

    SciTech Connect

    Dowling, Mark R.; Bartlett, Stephen D.; Rudolph, Terry; Spekkens, Robert W.

    2006-11-15

    We demonstrate that it is possible, in principle, to perform a Ramsey-type interference experiment to exhibit a coherent superposition of a single atom and a diatomic molecule. This gedanken experiment, based on the techniques of Aharonov and Susskind [Phys. Rev. 155, 1428 (1967)], explicitly violates the commonly accepted superselection rule that forbids coherent superpositions of eigenstates of differing atom number. A Bose-Einstein condensate plays the role of a reference frame that allows for coherent operations analogous to Ramsey pulses. We also investigate an analogous gedanken experiment to exhibit a coherent superposition of a single boson and a fermion, violating the commonly accepted superselection rule forbidding coherent superpositions of states of differing particle statistics. In this case, the reference frame is realized by a multimode state of many fermions. This latter case reproduces all of the relevant features of Ramsey interferometry, including Ramsey fringes over many repetitions of the experiment. However, the apparent inability of this proposed experiment to produce well-defined relative phases between two distinct systems each described by a coherent superposition of a boson and a fermion demonstrates that there are additional, outstanding requirements to fully 'lift' the univalence superselection rule.

  4. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons

    NASA Astrophysics Data System (ADS)

    Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.

    2016-02-01

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm^-1) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.
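
    For reference, the dose-averaged LET that such algorithms accumulate over pencil-beam contributions has the standard form below, with D_i the dose and L_i the LET of contribution i at voxel x (the paper computes these ingredients analytically with LET kernels):

    ```latex
    \mathrm{LET}_d(\mathbf{x}) =
      \frac{\sum_i D_i(\mathbf{x}) \, L_i(\mathbf{x})}{\sum_i D_i(\mathbf{x})}
    ```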

  5. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  6. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  7. ASIC-based architecture for the real-time computation of 2D convolution with large kernel size

    NASA Astrophysics Data System (ADS)

    Shao, Rui; Zhong, Sheng; Yan, Luxin

    2015-12-01

    Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. To improve the efficiency of on-chip storage resources and to reduce the off-chip memory bandwidth, a data-cache reuse scheme is proposed: multi-block SPRAM caches image stripes, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation, on top of which a new ASIC data-scheduling scheme and overall architecture are designed. Experimental results show that the structure can perform real-time convolution with template sizes up to 40×32 and improves the utilization of on-chip memory bandwidth and on-chip memory resources; the results also show that the structure maximizes data throughput while reducing the need for off-chip memory bandwidth.

  8. Output-sensitive 3D line integral convolution.

    PubMed

    Falk, Martin; Weiskopf, Daniel

    2008-01-01

    We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance analysis is given.
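
    As background, a minimal steady-flow 2D LIC sketch showing the core operation that the paper accelerates and extends to view-dependent 3D volumes: each output pixel averages a noise texture along a short streamline traced through the vector field in both directions:

    ```python
    import numpy as np

    def lic(vx, vy, noise, L=10, h=0.5):
        H, W = noise.shape
        out = np.zeros_like(noise)
        for i in range(H):
            for j in range(W):
                acc, cnt = 0.0, 0
                for sign in (1.0, -1.0):           # forward and backward
                    x, y = float(j), float(i)
                    for _ in range(L):
                        ii, jj = int(round(y)) % H, int(round(x)) % W
                        acc += noise[ii, jj]; cnt += 1
                        u, v = vx[ii, jj], vy[ii, jj]
                        norm = np.hypot(u, v) + 1e-12
                        x += sign * h * u / norm   # Euler step along flow
                        y += sign * h * v / norm
                out[i, j] = acc / cnt
        return out

    yy, xx = np.mgrid[0:64, 0:64].astype(float)
    img = lic(-(yy - 32), (xx - 32), np.random.rand(64, 64))  # circular flow
    ```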

  9. Vibration analysis of FG cylindrical shells with power-law index using discrete singular convolution technique

    NASA Astrophysics Data System (ADS)

    Mercan, Kadir; Demir, Çiğdem; Civalek, Ömer

    2016-01-01

    In the present manuscript, the free vibration response of circular cylindrical shells made of functionally graded material (FGM) is investigated. The method of discrete singular convolution (DSC) is used for the numerical solution of the governing equation of motion of the FGM cylindrical shell. The constitutive relations are based on Love's first approximation shell theory. The material properties are graded in the thickness direction according to a volume-fraction power law index. Frequency values are calculated for different types of boundary conditions and different material and geometric parameters. In general, close agreement between the obtained results and those of other researchers has been found.

  10. relline: Relativistic line profiles calculation

    NASA Astrophysics Data System (ADS)

    Dauser, Thomas

    2015-05-01

    relline calculates relativistic line profiles; it is compatible with the common X-ray data analysis software XSPEC (ascl:9910.005) and ISIS (ascl:1302.002). The two basic forms are an additive line model (RELLINE) and a convolution model to calculate relativistic smearing (RELCONV).

  11. Non-coaxial superposition of vector vortex beams.

    PubMed

    Aadhi, A; Vaity, Pravin; Chithrabhanu, P; Reddy, Salla Gangi; Prabakar, Shashi; Singh, R P

    2016-02-10

    Vector vortex beams are classified into four types depending upon spatial variation in their polarization vector. We have generated all four of these types of vector vortex beams by using a modified polarization Sagnac interferometer with a vortex lens. Further, we have studied the non-coaxial superposition of two vector vortex beams. It is observed that the superposition of two vector vortex beams with same polarization singularity leads to a beam with another kind of polarization singularity in their interaction region. The results may be of importance in ultrahigh security of the polarization-encrypted data that utilizes vector vortex beams and multiple optical trapping with non-coaxial superposition of vector vortex beams. We verified our experimental results with theory. PMID:26906384

  12. Dissipative Optomechanical Preparation of Macroscopic Quantum Superposition States.

    PubMed

    Abdi, M; Degenfeld-Schonburg, P; Sameti, M; Navarrete-Benlloch, C; Hartmann, M J

    2016-06-10

    The transition from quantum to classical physics remains an intensely debated question even though it has been investigated for more than a century. Further clarifications could be obtained by preparing macroscopic objects in spatial quantum superpositions and proposals for generating such states for nanomechanical devices either in a transient or a probabilistic fashion have been put forward. Here, we introduce a method to deterministically obtain spatial superpositions of arbitrary lifetime via dissipative state preparation. In our approach, we engineer a double-well potential for the motion of the mechanical element and drive it towards the ground state, which shows the desired spatial superposition, via optomechanical sideband cooling. We propose a specific implementation based on a superconducting circuit coupled to the mechanical motion of a lithium-decorated monolayer graphene sheet, introduce a method to verify the mechanical state by coupling it to a superconducting qubit, and discuss its prospects for testing collapse models for the quantum to classical transition. PMID:27341233

  13. Robust mesoscopic superposition of strongly correlated ultracold atoms

    SciTech Connect

    Hallwood, David W.; Ernst, Thomas; Brand, Joachim

    2010-12-15

    We propose a scheme to create coherent superpositions of annular flow of strongly interacting bosonic atoms in a one-dimensional ring trap. The nonrotating ground state is coupled to a vortex state with mesoscopic angular momentum by means of a narrow potential barrier and an applied phase that originates from either rotation or a synthetic magnetic field. We show that superposition states in the Tonks-Girardeau regime are robust against single-particle loss due to the effects of strong correlations. The coupling between the mesoscopically distinct states scales much more favorably with particle number than in schemes relying on weak interactions, thus making particle numbers of hundreds or thousands feasible. Coherent oscillations induced by time variation of parameters may serve as a 'smoking gun' signature for detecting superposition states.

  14. Dissipative Optomechanical Preparation of Macroscopic Quantum Superposition States

    NASA Astrophysics Data System (ADS)

    Abdi, M.; Degenfeld-Schonburg, P.; Sameti, M.; Navarrete-Benlloch, C.; Hartmann, M. J.

    2016-06-01

    The transition from quantum to classical physics remains an intensely debated question even though it has been investigated for more than a century. Further clarifications could be obtained by preparing macroscopic objects in spatial quantum superpositions and proposals for generating such states for nanomechanical devices either in a transient or a probabilistic fashion have been put forward. Here, we introduce a method to deterministically obtain spatial superpositions of arbitrary lifetime via dissipative state preparation. In our approach, we engineer a double-well potential for the motion of the mechanical element and drive it towards the ground state, which shows the desired spatial superposition, via optomechanical sideband cooling. We propose a specific implementation based on a superconducting circuit coupled to the mechanical motion of a lithium-decorated monolayer graphene sheet, introduce a method to verify the mechanical state by coupling it to a superconducting qubit, and discuss its prospects for testing collapse models for the quantum to classical transition.

  15. Robustly optimal rate one-half binary convolutional codes

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1975-01-01

    Three optimality criteria for convolutional codes are considered in this correspondence: namely, free distance, minimum distance, and distance profile. Here we report the results of computer searches for rate one-half binary convolutional codes that are 'robustly optimal' in the sense of being optimal for one criterion and optimal or near-optimal for the other two criteria. Comparisons with previously known codes are made. The results of a computer simulation are reported to show the importance of the distance profile to computational performance with sequential decoding.

  16. Seeing lens imaging as a superposition of multiple views

    NASA Astrophysics Data System (ADS)

    Grusche, Sascha

    2016-01-01

    In the conventional approach to lens imaging, rays are used to map object points to image points. However, many students want to think of the image as a whole. To answer this need, Kepler’s ray drawing is reinterpreted in terms of shifted camera obscura images. These images are uncovered by covering the lens with pinholes. Thus, lens imaging is seen as a superposition of sharp images from different viewpoints, so-called elemental images. This superposition is simulated with projectors, and with transparencies. Lens ray diagrams are constructed based on elemental images; the conventional construction method is included as a special case.

  17. Tight bounds on the concurrence of quantum superpositions

    SciTech Connect

    Niset, J.; Cerf, N. J.

    2007-10-15

    The entanglement content of superpositions of quantum states is investigated based on a measure called concurrence. Given a bipartite pure state in arbitrary dimension written as the quantum superposition of two other such states, we find simple inequalities relating the concurrence of the state to that of its components. We derive an exact expression for the concurrence when the component states are biorthogonal and provide elegant upper and lower bounds in all other cases. For quantum bits, our upper bound is tighter than the previously derived bound [N. Linden et al., Phys. Rev. Lett. 97, 100502 (2006)].

  18. Orbital angular momentum of superposition of identical shifted vortex beams.

    PubMed

    Kovalev, A A; Kotlyar, V V

    2015-10-01

    We have formulated and proven the following theorem: the superposition of an arbitrary number of arbitrarily off-axis, identical nonparaxial optical vortex beams of arbitrary radially symmetric shape, integer topological charge n, and arbitrary real weight coefficients has the normalized orbital angular momentum (OAM) equal to that of individual constituent identical beams. This theorem enables generating vortex laser beams with different (not necessarily radially symmetric) intensity profiles but identical OAM. Superpositions of Bessel, Hankel-Bessel, Bessel-Gaussian, and Laguerre-Gaussian beams with the same OAM are discussed. PMID:26479934

  19. Entanglement and discord of the superposition of Greenberger-Horne-Zeilinger states

    SciTech Connect

    Parashar, Preeti; Rana, Swapan

    2011-03-15

    We calculate the analytic expression for geometric measure of entanglement for arbitrary superposition of two N-qubit canonical orthonormal Greenberger-Horne-Zeilinger (GHZ) states and the same for two W states. In the course of characterizing all kinds of nonclassical correlations, an explicit formula for quantum discord (via relative entropy) for the former class of states has been presented. Contrary to the GHZ state, the closest separable state to the W state is not classical. Therefore, in this case, the discord is different from the relative entropy of entanglement. We conjecture that the discord for the N-qubit W state is log{sub 2}N.

  20. Die and telescoping punch form convolutions in thin diaphragm

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Die and punch set forms convolutions in thin dished metal diaphragm without stretching the metal too thin at sharp curvatures. The die corresponds to the metal shape to be formed, and the punch consists of elements that progressively slide against one another under the restraint of a compressed-air cushion to mate with the die.

  1. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to use these estimates as preliminary inputs to various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson, who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
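
    A minimal sketch in the spirit of the annulus kernels discussed above (not the exact MLE or phase-coded kernel): correlate a binary edge image with an annulus of the assumed radius and take the peak as the center estimate:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def annulus_kernel(radius, width=1.5):
        size = int(2 * radius + 7)
        c = size // 2
        y, x = np.mgrid[0:size, 0:size]
        r = np.hypot(x - c, y - c)
        return (np.abs(r - radius) < width).astype(float)

    # Synthetic test image: a circle of radius 20 centered at (x=60, y=70).
    yy, xx = np.mgrid[0:128, 0:128]
    img = (np.abs(np.hypot(xx - 60, yy - 70) - 20) < 1.0).astype(float)

    # The kernel is symmetric, so convolution equals correlation here;
    # the response peaks at the circle center.
    score = fftconvolve(img, annulus_kernel(20), mode="same")
    cy, cx = np.unravel_index(score.argmax(), score.shape)  # ~ (70, 60)
    ```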

  2. Convolutional virtual electric field for image segmentation using active contours.

    PubMed

    Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden

    2014-01-01

    Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computation load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as enlarged capture range, u-shape concavity convergence, subject contour convergence and initialization insensitivity, but also some other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression with simultaneous preservation of weak edges. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images. PMID:25360586
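
    A minimal sketch in the spirit of the closely related VFC external force, under stated assumptions (the kernel exponent and support below are illustrative, and CONVEF's modified-distance kernel is not reproduced): the force field is the edge map convolved with a vector kernel pointing toward the origin, computed quickly with FFT-based convolution:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def vfc_field(edge_map, R=32, gamma=2.5):
        y, x = np.mgrid[-R:R + 1, -R:R + 1].astype(float)
        r = np.hypot(x, y) + 1e-9
        m = 1.0 / r**gamma                    # magnitude decays with distance
        kx, ky = -x / r * m, -y / r * m       # unit vectors toward origin
        fx = fftconvolve(edge_map, kx, mode="same")
        fy = fftconvolve(edge_map, ky, mode="same")
        return fx, fy                         # force pulling contours to edges

    edges = np.zeros((128, 128))
    edges[40:90, 40] = 1.0                    # toy vertical edge map
    fx, fy = vfc_field(edges)
    ```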

  3. Superposition-additive approach: thermodynamic parameters of clusterization of monosubstituted alkanes at the air/water interface.

    PubMed

    Vysotsky, Yu B; Belyaeva, E A; Fomina, E S; Fainerman, V B; Aksenenko, E V; Vollhardt, D; Miller, R

    2011-12-21

    The applicability of the superposition-additive approach for the calculation of the thermodynamic parameters of formation and atomization of conjugate systems, their dipole electric polarisabilities, molecular diamagnetic susceptibilities, π-electron circular currents, as well as for the estimation of the thermodynamic parameters of substituted alkanes, was demonstrated earlier. Now the applicability of the superposition-additive approach for the description of clusterization of fatty alcohols, thioalcohols, amines, carboxylic acids at the air/water interface is studied. Two superposition-additive schemes are used that ensure the maximum superimposition of the graphs of the considered molecular structures including the intermolecular CH-HC interactions within the clusters. The thermodynamic parameters of clusterization are calculated for dimers, trimers and tetramers. The calculations are based on the values of enthalpy, entropy and Gibbs' energy of clusterization calculated earlier using the semiempirical quantum chemical PM3 method. It is shown that the proposed approach is capable of reproducing the previously calculated values with sufficient accuracy. PMID:22042000

  4. A high-order fast method for computing convolution integral with smooth kernel

    SciTech Connect

    Qiang, Ji

    2009-09-28

    In this paper we report on a high-order fast method to numerically calculate the convolution integral with a smooth, non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for the discrete summation. The method can in principle attain arbitrarily high-order accuracy, depending on the number of points used in the integral approximation, at a computational cost of O(Nlog(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h{sup 4}), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.
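
    To make the recipe concrete, here is a minimal sketch of the three-point Simpson variant on a uniform grid (our own illustrative code, assuming a one-sided kernel sampled at non-negative lags):

    ```python
    import numpy as np

    def simpson_fft_convolution(f, g, h):
        """Approximate (f*g)(x_i) = integral f(y) g(x_i - y) dy on a uniform grid.

        Composite Simpson weights (1, 4, 2, ..., 2, 4, 1) * h/3 are applied to
        f, and the weighted discrete sum is evaluated with zero-padded FFTs in
        O(N log N). g is assumed sampled at lags 0 .. N-1 (one-sided); a
        two-sided kernel would need an index shift.
        """
        n = len(f)
        assert n % 2 == 1, "composite Simpson rule needs an odd sample count"
        w = np.ones(n)
        w[1:-1:2] = 4.0
        w[2:-1:2] = 2.0
        fw = f * w * (h / 3.0)
        m = 2 * n - 1                    # length of the full linear convolution
        conv = np.fft.irfft(np.fft.rfft(fw, m) * np.fft.rfft(g, m), m)
        return conv[:n]                  # integral values at x_0 .. x_{n-1}
    ```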

  5. Evaluation of convolutional neural networks for visual recognition.

    PubMed

    Nebauer, C

    1998-01-01

    Convolutional neural networks provide an efficient method to constrain the complexity of feedforward neural networks by weight sharing and restriction to local connections. This network topology has been applied in particular to image classification when sophisticated preprocessing is to be avoided and raw images are to be classified directly. In this paper two variations of convolutional networks--the neocognitron and a modification of the neocognitron--are compared with classifiers based on fully connected feedforward layers (i.e., multilayer perceptron, nearest neighbor classifier, auto-encoding network) with respect to their visual recognition performance. Besides the original neocognitron, a modification is proposed which combines perceptron-type neurons with the localized network structure of the neocognitron. Instead of training convolutional networks by time-consuming error backpropagation, in this work a modular procedure is applied whereby layers are trained sequentially from the input to the output layer in order to recognize features of increasing complexity. For a quantitative experimental comparison with standard classifiers two very different recognition tasks have been chosen: handwritten digit recognition and face recognition. In the first example on handwritten digit recognition the generalization of convolutional networks is compared to that of fully connected networks. In several experiments the influence of variations of position, size, and orientation of digits is determined and the relation between training sample size and validation error is observed. In the second example recognition of human faces is investigated under constrained and variable conditions with respect to face orientation and illumination, and the limitations of convolutional networks are discussed. PMID:18252491

  6. De-convoluting mixed crude oil in Prudhoe Bay Field, North Slope, Alaska

    USGS Publications Warehouse

    Peters, K.E.; Scott, Ramos L.; Zumberge, J.E.; Valin, Z.C.; Bird, K.J.

    2008-01-01

    Seventy-four crude oil samples from the Barrow arch on the North Slope of Alaska were studied to assess the relative volumetric contributions from different source rocks to the giant Prudhoe Bay Field. We applied alternating least squares to concentration data (ALS-C) for 46 biomarkers in the range C19-C35 to de-convolute mixtures of oil generated from the carbonate-rich Triassic Shublik Formation and the clay-rich Jurassic Kingak Shale and Cretaceous Hue Shale-gamma ray zone (Hue-GRZ) source rocks. ALS-C results for 23 oil samples from the prolific Ivishak Formation reservoir of the Prudhoe Bay Field indicate approximately equal contributions from the Shublik Formation and Hue-GRZ source rocks (37% each), less from the Kingak Shale (26%), and little or no contribution from other source rocks. These results differ from published interpretations that most oil in the Prudhoe Bay Field originated from the Shublik Formation source rock. With few exceptions, the relative contribution of oil from the Shublik Formation decreases, while that from the Hue-GRZ increases, in reservoirs along the Barrow arch from Point Barrow in the northwest to Point Thomson in the southeast (~250 miles or 400 km). The Shublik contribution also decreases to a lesser degree between fault blocks within the Ivishak pool from west to east across the Prudhoe Bay Field. ALS-C provides a robust means to calculate the relative amounts of two or more oil types in a mixture. Furthermore, ALS-C does not require that pure end member oils be identified prior to analysis or that laboratory mixtures of these oils be prepared to evaluate mixing. ALS-C of biomarkers reliably de-convolutes mixtures because the concentrations of compounds in mixtures vary as linear functions of the amount of each oil type. ALS of biomarker ratios (ALS-R) cannot be used to de-convolute mixtures because compound ratios vary as nonlinear functions of the amount of each oil type.
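
    The linear mixing argument in the last two sentences is what makes ALS-C work: the concentration matrix obeys C ≈ F · S, with F the mixing fractions and S the end-member concentrations. A generic alternating-least-squares sketch of that factorization (our illustration, not the authors' implementation):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def als_unmix(C, n_sources, n_iter=200, seed=0):
        """Bilinear unmixing C ~ F @ S by alternating nonnegative least squares.

        C: (n_oils, n_biomarkers) concentrations; F: (n_oils, n_sources)
        mixing fractions (renormalized to sum to one each pass);
        S: (n_sources, n_biomarkers) end-member concentrations.
        """
        rng = np.random.default_rng(seed)
        S = rng.random((n_sources, C.shape[1]))
        for _ in range(n_iter):
            F = np.array([nnls(S.T, c)[0] for c in C])        # rows of F
            F /= F.sum(axis=1, keepdims=True) + 1e-12         # fractions sum to 1
            S = np.array([nnls(F, col)[0] for col in C.T]).T  # columns of S
        return F, S

    # Synthetic check with two hypothetical end members and five biomarkers
    rng = np.random.default_rng(1)
    S_true = rng.random((2, 5))
    F_true = rng.dirichlet(np.ones(2), size=8)                # rows sum to 1
    F_est, S_est = als_unmix(F_true @ S_true, n_sources=2)
    ```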

  7. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities

    SciTech Connect

    Carrasco, P.; Jornet, N.; Duch, M. A.; Panettieri, V.; Weber, L.; Eudaldo, T.; Ginjaume, M.; Ribas, M.

    2007-08-15

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane-parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10x10, 5x5, and 2x2 cm{sup 2}) were studied in two phantom configurations with a bone-equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2x2 cm{sup 2} field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV, where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity by 3%-5%. The rebuild-up at the bone-water interface and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values.

  9. Dose-calculation algorithms in the context of inhomogeneity corrections for high energy photon beams

    SciTech Connect

    Papanikolaou, Niko; Stathakis, Sotirios

    2009-10-15

    Radiation therapy has witnessed a plethora of innovations and developments in the past 15 years. Since the introduction of computed tomography for treatment planning there has been a steady introduction of new methods to refine treatment delivery. Imaging continues to be an integral part of the planning, but also the delivery, of modern radiotherapy. However, all the efforts of image guided radiotherapy, intensity-modulated planning and delivery, adaptive radiotherapy, and everything else that we pride ourselves in having in the armamentarium can fall short, unless there is an accurate dose-calculation algorithm. The agreement between the calculated and delivered doses is of great significance in radiation therapy since the accuracy of the absorbed dose as prescribed determines the clinical outcome. Dose-calculation algorithms have evolved greatly over the years in an effort to be more inclusive of the effects that govern the true radiation transport through the human body. In this Vision 20/20 paper, we look back to see how it all started and where things are now in terms of dose algorithms for photon beams and the inclusion of tissue heterogeneities. Convolution-superposition algorithms have dominated the treatment planning industry for the past few years. Monte Carlo techniques have an inherent accuracy that is superior to any other algorithm and as such will continue to be the gold standard, along with measurements, and maybe one day will be the algorithm of choice for all particle treatment planning in radiation therapy.

  10. Comparison of selected dose calculation algorithms in radiotherapy treatment planning for tissues with inhomogeneities

    NASA Astrophysics Data System (ADS)

    Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.

    2016-03-01

    Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms that are currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.

  11. Solar flux-density distribution due to partially shaded/blocked mirrors using the separation of variables/superposition technique with polynomial and Gaussian sunshapes

    SciTech Connect

    Elsayed, M.; Fathalah, K.A.

    1996-05-01

    In a previous work, the separation of variables/superposition technique was used to predict the flux density distribution on the receiver surfaces of solar central receiver plants. In this paper further developments of the technique are given. A numerical technique is derived to carry out the convolution of the sunshape and error density functions. Also, a simplified numerical procedure is presented to determine the basic flux density function on which the technique depends. The technique is used to predict the receiver solar flux distribution using two sunshapes, polynomial and Gaussian distributions. The results predicted with the technique are validated by comparison with experimental results from mirrors both with and without partial shading/blocking of their surfaces.

  12. Multidimensional detonation propagation modeled via nonlinear shock wave superposition

    NASA Astrophysics Data System (ADS)

    Higgins, Andrew; Mehrjoo, Navid

    2010-11-01

    Detonation waves in gases are inherently multidimensional due to their cellular structure, and detonations in liquids and heterogeneous solids are often associated with instabilities and stochastic, localized reaction centers (i.e., hot spots). To explore the statistical nature of detonation dynamics in such systems, a simple model that idealizes detonation propagation as an ensemble of interacting blast waves originating from spatially random point sources has been proposed. Prior results using this model exhibited features that have been observed in real detonating systems, such as anomalous scaling between axisymmetric and two-dimensional geometries. However, those efforts used simple linear superposition of the blast waves. The present work uses a model of blast wave superposition developed for multiple-source explosions (the LAMB approximation) that incorporates the nonlinear interaction of shock waves analytically, permitting the effect of a more physical model of blast wave interaction to be explored. The results are suggestive of a universal behavior in systems of spatially randomized energy sources.

  13. Nonclassicality tests and entanglement witnesses for macroscopic mechanical superposition states

    NASA Astrophysics Data System (ADS)

    Gittsovich, Oleg; Moroder, Tobias; Asadian, Ali; Gühne, Otfried; Rabl, Peter

    2015-02-01

    We describe a set of measurement protocols for performing nonclassicality tests and the verification of entangled superposition states of macroscopic continuous variable systems, such as nanomechanical resonators. Following earlier works, we first consider a setup where a two-level system is used to indirectly probe the motion of the mechanical system via Ramsey measurements and discuss the application of this method for detecting nonclassical mechanical states. We then show that the generalization of this technique to multiple resonator modes allows the conditioned preparation and the detection of entangled mechanical superposition states. The proposed measurement protocols can be implemented in various qubit-resonator systems that are currently under experimental investigation and find applications in future tests of quantum mechanics at a macroscopic scale.

  14. Quantum Delayed-Choice Experiment and Wave-Particle Superposition

    NASA Astrophysics Data System (ADS)

    Guo, Qi; Cheng, Liu-Yong; Wang, Hong-Fu; Zhang, Shou

    2015-08-01

    We propose a simple implementation scheme of a quantum delayed-choice experiment in a linear optical system without an initial entanglement resource. By choosing different detecting devices, one can selectively observe the photon's different behaviors after the photon has passed through the Mach-Zehnder interferometer. The scheme shows that the photon's wave behavior and particle behavior can be observed with a single experimental setup by postselection; that is, the photon can show a superposition of wave and particle behavior. In particular, we compare the wave-particle superposition behavior and the wave-particle mixture behavior in detail, and find a quantum interference effect between the wave and particle behaviors, which may help reveal the nature of the photon.

  15. a Convolutional Network for Semantic Facade Segmentation and Interpretation

    NASA Astrophysics Data System (ADS)

    Schmitz, Matthias; Mayer, Helmut

    2016-06-01

    In this paper we present an approach for the semantic interpretation of facade images based on a Convolutional Network. Our network processes the input images in a fully convolutional way and generates pixel-wise predictions. We show that there is no need for large datasets to train the network when transfer learning is employed, i.e., a part of an already existing network is used and fine-tuned, and when the available data is augmented by using deformed patches of the images for training. The network is trained end-to-end with patches of the images, and each patch is augmented independently. To undo the downsampling for the classification, we add deconvolutional layers to the network. Outputs of different layers of the network are combined to achieve more precise pixel-wise predictions. We demonstrate the potential of our network based on results for the eTRIMS (Korč and Förstner, 2009) dataset reduced to facades.

  16. UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.
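
    For orientation, the core of any LIC-style method is a streamline-averaged convolution of a noise texture. A minimal sketch of plain (steady-flow) LIC; UFLIC's time-accurate value depositing and feed-forward stages are not reproduced here:

    ```python
    import numpy as np

    def lic(vx, vy, noise, length=20, step=0.5):
        """Basic line integral convolution of a noise texture.

        For each pixel, a streamline is traced forward and backward through
        the vector field (vx, vy) with fixed-step Euler integration, and the
        noise texture is averaged along it, correlating pixel intensities
        along the flow.
        """
        h, w = noise.shape
        out = np.zeros(noise.shape)
        for i in range(h):
            for j in range(w):
                acc, cnt = 0.0, 0
                for sgn in (+1.0, -1.0):
                    y, x = float(i), float(j)
                    for _ in range(length):
                        yi, xi = int(y), int(x)
                        if not (0 <= yi < h and 0 <= xi < w):
                            break
                        acc += noise[yi, xi]
                        cnt += 1
                        norm = np.hypot(vx[yi, xi], vy[yi, xi]) + 1e-12
                        x += sgn * step * vx[yi, xi] / norm
                        y += sgn * step * vy[yi, xi] / norm
                out[i, j] = acc / max(cnt, 1)
        return out
    ```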

  17. Deep learning for steganalysis via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis that learns features automatically via deep learning models. We propose a customized Convolutional Neural Network for steganalysis that can capture the complex dependencies useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and on the realistic and large ImageNet database.

  18. Study on Expansion of Convolutional Compactors over Galois Field

    NASA Astrophysics Data System (ADS)

    Arai, Masayuki; Fukumoto, Satoshi; Iwasaki, Kazuhiko

    Convolutional compactors offer a promising technique for compacting test responses. In this study we expand the architecture of the convolutional compactor onto a Galois field in order to improve the compaction ratio as well as reduce the X-masking probability, namely, the probability that an error is masked by unknown values. While each scan chain is independently connected by EOR gates in the conventional arrangement, the proposed scheme treats q signals as an element over GF(2^q), and the connections are configured on the same field. We show the arrangement of the proposed compactors and the equivalent expression over GF(2). We then evaluate the effectiveness of the proposed expansion in terms of X-masking probability by simulations with a uniform distribution of X-values, as well as the reduction of hardware overheads. Furthermore, we evaluate a multi-weight arrangement of the proposed compactors for non-uniform X distributions.

  19. Two-dimensional convolute integers for analytical instrumentation

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1982-01-01

    As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical with their one-dimensional counterpart, that is, as a weighted nearest-neighbor moving average with zero phase shift, using convolute integer (universal number) weighting coefficients.
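
    The construction generalizes directly from the 1-D Savitzky-Golay case: fit a low-order bivariate polynomial over a small window by least squares and read off the smoothing weights for the window center. A minimal sketch (our illustration; the window size and polynomial degree are arbitrary choices):

    ```python
    import numpy as np

    def sg2d_weights(half=2, degree=2):
        """Two-dimensional convolute-integer (Savitzky-Golay) smoothing weights.

        Fits a bivariate polynomial of total degree <= `degree` over a
        (2*half+1)^2 window by least squares; the returned weights reproduce
        the fitted value at the window center, so convolving an image with
        them gives a zero-phase-shift moving polynomial smoother.
        """
        ax = np.arange(-half, half + 1)
        xx, yy = np.meshgrid(ax, ax)
        terms = [(i, j) for i in range(degree + 1)
                        for j in range(degree + 1 - i)]
        A = np.column_stack([xx.ravel()**i * yy.ravel()**j for i, j in terms])
        # Row of the pseudoinverse for the constant term = fitted center value.
        weights = np.linalg.pinv(A)[0]
        return weights.reshape(xx.shape)

    print(np.round(sg2d_weights(half=2, degree=2), 4))
    ```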

  20. Image Super-Resolution Using Deep Convolutional Networks.

    PubMed

    Dong, Chao; Loy, Chen Change; He, Kaiming; Tang, Xiaoou

    2016-02-01

    We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. PMID:26761735
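
    A compact sketch of the three-layer mapping described above (the 9-1-5 filter sizes and 64/32 channel counts follow the published SRCNN configuration; PyTorch is our choice of framework, not the paper's):

    ```python
    import torch
    import torch.nn as nn

    class SRCNN(nn.Module):
        """Patch extraction -> nonlinear mapping -> reconstruction.

        Maps a bicubic-upscaled low-resolution image to a high-resolution
        one; padding keeps the spatial size unchanged.
        """
        def __init__(self, channels=1):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(),
                nn.Conv2d(64, 32, kernel_size=1),                  nn.ReLU(),
                nn.Conv2d(32, channels, kernel_size=5, padding=2),
            )

        def forward(self, x):
            return self.body(x)

    # One forward pass on a dummy single-channel image
    y = SRCNN()(torch.randn(1, 1, 33, 33))
    print(y.shape)  # torch.Size([1, 1, 33, 33])
    ```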

  1. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca

    2007-03-01

    A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm⁻³) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm⁻³), normal lung (ρ = 0.20 g cm⁻³) and cortical bone tissue (ρ = 1.80 g cm⁻³). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm²) and elongated rectangular (2.8 × 13 cm²) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, the distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms, with significant discrepancies between algorithms in the lighter materials (ρ = 0.035 g cm⁻³), enhanced for the more energetic beam. For denser, more clinically relevant densities, better agreement among the sophisticated algorithms with respect to MC was observed.
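
    The γ index used for this comparison combines a dose-difference and a distance-to-agreement (DTA) criterion into a single pass/fail number per point. A simple 1-D sketch with global normalization (our illustration, exhaustive search; not a clinical implementation):

    ```python
    import numpy as np

    def gamma_index_1d(dose_eval, dose_ref, x, dta=3.0, dd=0.03):
        """1-D gamma index between evaluated and reference dose profiles.

        gamma(r) = min over r' of sqrt( (|r - r'| / DTA)^2
                   + ((D_eval(r') - D_ref(r)) / (dd * max(D_ref)))^2 );
        a point passes where gamma <= 1. `x` holds positions in mm,
        `dta` in mm, `dd` as a fraction (0.03 = 3%).
        """
        norm = dd * dose_ref.max()
        gamma = np.empty_like(dose_ref)
        for k, (xr, dr) in enumerate(zip(x, dose_ref)):
            dist = (x - xr) / dta
            diff = (dose_eval - dr) / norm
            gamma[k] = np.sqrt(dist**2 + diff**2).min()
        return gamma
    ```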

  2. Sensing Super-position: Visual Instrument Sensor Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2006-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an

  3. Interplay of gravitation and linear superposition of different mass eigenstates

    NASA Astrophysics Data System (ADS)

    Ahluwalia, D. V.; Burgard, C.

    1998-04-01

    The interplay of gravitation and the quantum-mechanical principle of linear superposition induces a new set of neutrino oscillation phases. These ensure that the flavor-oscillation clocks, inherent in the phenomenon of neutrino oscillations, redshift precisely as required by Einstein's theory of gravitation. The physical observability of these phases in the context of the solar neutrino anomaly, type-II supernova, and certain atomic systems is briefly discussed.

  4. Tailoring quantum superpositions with linearly polarized amplitude-modulated light

    SciTech Connect

    Pustelny, S.; Koczwara, M.; Cincio, L.; Gawlik, W.

    2011-04-15

    Amplitude-modulated nonlinear magneto-optical rotation is a powerful technique that offers a possibility of controllable generation of given quantum states. In this paper, we demonstrate the creation and detection of specific ground-state magnetic-sublevel superpositions in {sup 87}Rb. By appropriate tuning of the modulation frequency and the magnetic-field induction, the efficiency of generating a given coherence is controlled. The processes are analyzed versus different experimental parameters.

  5. Quantum Superposition, Collapse, and the Default Specification Principle

    NASA Astrophysics Data System (ADS)

    Nikkhah Shirazi, Armin

    2014-03-01

    Quantum Superposition and collapse lie at the heart of the difficulty in understanding what quantum mechanics is exactly telling us about reality. We present here a principle which permits one to formulate a simple and general mathematical model that abstracts these features out of quantum theory. A precise formulation of this principle in terms of a set-theoretic axiom added to standard set theory may directly connect the foundations of physics to the foundations of mathematics.

  6. Macroscopic superposition of ultracold atoms with orbital degrees of freedom

    SciTech Connect

    Garcia-March, M. A.; Carr, L. D.; Dounas-Frazer, D. R.

    2011-04-15

    We introduce higher dimensions into the problem of Bose-Einstein condensates in a double-well potential, taking into account orbital angular momentum. We completely characterize the eigenstates of this system, delineating new regimes via both analytical high-order perturbation theory and numerical exact diagonalization. Among these regimes are mixed Josephson- and Fock-like behavior, crossings in both excited and ground states, and shadows of macroscopic superposition states.

  7. Face Detection Using GPU-Based Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Nasse, Fabian; Thurau, Christian; Fink, Gernot A.

    In this paper, we consider the problem of face detection under pose variations. Unlike other contributions, a focus of this work is an efficient implementation utilizing the computational power of modern graphics cards. The proposed system consists of a parallelized implementation of convolutional neural networks (CNNs) with a special emphasis on also parallelizing the detection process. Experimental validation in a smart conference room with 4 active ceiling-mounted cameras shows a dramatic speed gain under real-life conditions.

  8. New syndrome decoder for (n, 1) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from, and simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
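
    To make the setting concrete: for a rate-1/2 code with generators g1(D) and g2(D), the combination r1·g2 + r2·g1 (mod 2) of the two received streams cancels for any codeword and therefore depends only on the channel errors. A small sketch of that syndrome computation (generic textbook construction, not the letter's decoding algorithm):

    ```python
    import numpy as np

    def polymul_gf2(a, b):
        """Multiply binary polynomials (coefficient arrays, lowest degree first)."""
        return np.convolve(a, b) % 2

    def syndrome_2_1(r1, r2, g1, g2):
        """Syndrome of a received pair for a (2, 1) convolutional code.

        With codewords (u*g1, u*g2), r1*g2 + r2*g1 (mod 2) vanishes exactly
        when (r1, r2) is a codeword, so it depends only on the error
        polynomials -- the starting point of syndrome decoding.
        """
        p1, p2 = polymul_gf2(r1, g2), polymul_gf2(r2, g1)
        n = max(len(p1), len(p2))
        return (np.pad(p1, (0, n - len(p1))) + np.pad(p2, (0, n - len(p2)))) % 2

    # g1 = 1 + D^2, g2 = 1 + D + D^2 (the standard K=3 rate-1/2 code)
    g1, g2 = np.array([1, 0, 1]), np.array([1, 1, 1])
    u = np.array([1, 0, 1, 1])                     # information bits
    r1, r2 = polymul_gf2(u, g1), polymul_gf2(u, g2)
    print(syndrome_2_1(r1, r2, g1, g2))            # all zeros: no errors
    r1[2] ^= 1                                     # inject a channel error
    print(syndrome_2_1(r1, r2, g1, g2))            # nonzero syndrome
    ```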

  9. Single-Atom Gating of Quantum State Superpositions

    SciTech Connect

    Moon, Christopher

    2010-04-28

    The ultimate miniaturization of electronic devices will likely require local and coherent control of single electronic wavefunctions. Wavefunctions exist within both physical real space and an abstract state space with a simple geometric interpretation: this state space - or Hilbert space - is spanned by mutually orthogonal state vectors corresponding to the quantized degrees of freedom of the real-space system. Measurement of superpositions is akin to accessing the direction of a vector in Hilbert space, determining an angle of rotation equivalent to quantum phase. Here we show that an individual atom inside a designed quantum corral can control this angle, producing arbitrary coherent superpositions of spatial quantum states. Using scanning tunnelling microscopy and nanostructures assembled atom-by-atom we demonstrate how single spins and quantum mirages can be harnessed to image the superposition of two electronic states. We also present a straightforward method to determine the atom path enacting phase rotations between any desired state vectors. A single atom thus becomes a real-space handle for an abstract Hilbert space, providing a simple technique for coherent quantum state manipulation at the spatial limit of condensed matter.

  10. Automatic localization of vertebrae based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Yang, Feng; Mu, Wei; Yang, Caiyun; Yang, Xin; Tian, Jie

    2015-03-01

    Localization of the vertebrae is of importance in many medical applications. For example, the vertebrae can serve as landmarks in image registration. They can also provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebrae localization method using convolutional neural networks (CNN). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture. One is used to distinguish the vertebrae from other tissues in the chest, and the other is aimed at detecting the centers of the vertebrae. The architecture contains two convolutional layers, both of which are followed by a max-pooling layer. Then the output feature vector from the max-pooling layer is fed into a multilayer perceptron (MLP) classifier which has one hidden layer. Experiments were performed on ten chest CT images. We used a leave-one-out strategy to train and test the proposed method. Quantitative comparison between the predicted centers and the ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.
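
    The architecture described (two convolution + max-pooling stages feeding a one-hidden-layer MLP) is straightforward to write down. A PyTorch sketch, where the channel counts, kernel sizes, and 32x32 input patch size are our illustrative assumptions since the abstract does not fix them:

    ```python
    import torch
    import torch.nn as nn

    class VertebraNet(nn.Module):
        """Two conv layers, each followed by max-pooling, then a one-hidden-layer MLP."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(      # MLP with one hidden layer
                nn.Flatten(),
                nn.Linear(32 * 5 * 5, 128), nn.ReLU(),
                nn.Linear(128, n_classes),
            )

        def forward(self, x):                     # x: (batch, 1, 32, 32)
            return self.classifier(self.features(x))

    print(VertebraNet()(torch.randn(4, 1, 32, 32)).shape)  # torch.Size([4, 2])
    ```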

  11. Fine-grained representation learning in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Luo, Chang; Wang, Jie

    2016-03-01

    Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariances, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in the preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we propose the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law can guide CAEs to extract better fine-grained features and perform better in multiclass classification tasks. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representations in other convolutional neural networks.

  12. A Discriminative Representation of Convolutional Features for Indoor Scene Recognition

    NASA Astrophysics Data System (ADS)

    Khan, Salman H.; Hayat, Munawar; Bennamoun, Mohammed; Togneri, Roberto; Sohel, Ferdous A.

    2016-07-01

    Indoor scene recognition is a multi-faceted and challenging problem due to the diverse intra-class variations and the confusing inter-class similarities. This paper presents a novel approach which exploits rich mid-level convolutional features to categorize indoor scenes. Traditionally used convolutional features preserve the global spatial structure, which is a desirable property for general object recognition. However, we argue that this structuredness is not much helpful when we have large variations in scene layouts, e.g., in indoor scenes. We propose to transform the structured convolutional activations to another highly discriminative feature space. The representation in the transformed space not only incorporates the discriminative aspects of the target dataset, but it also encodes the features in terms of the general object categories that are present in indoor scenes. To this end, we introduce a new large-scale dataset of 1300 object categories which are commonly present in indoor scenes. Our proposed approach achieves a significant performance boost over previous state-of-the-art approaches on five major scene classification datasets.

  13. On the growth and form of cortical convolutions

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.

    2016-06-01

    The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. Altogether, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.

  14. Entanglement of electronic subbands and coherent superposition of spin states in a Rashba nanoloop

    NASA Astrophysics Data System (ADS)

    Safaiee, R.; Golshan, M. M.

    2011-10-01

    The present work is concerned with an analysis of the entanglement between the electronic coherent superpositions of spin states and subbands in a quasi-one-dimensional Rashba nanoloop acted upon by a strong perpendicular magnetic field. We explicitly include the confining potential and the Rashba spin-orbit coupling into the Hamiltonian and then proceed to calculate the von Neumann entropy, a measure of entanglement, as a function of time. An analysis of the von Neumann entropy demonstrates that, as expected, the dynamics of entanglement strongly depends upon the initial state and electronic subband excitations. When the initial state is a pure one formed by a subband excitation and the z-component of spin states, the entanglement exhibits periodic oscillations with local minima (dips). On the other hand, when the initial state is formed by the subband states and a coherent superposition of spin states, the entanglement still periodically oscillates, exhibiting stronger correlations, along with elimination of the dips. Moreover, in the long run, the entanglement for the latter case undergoes the phenomenon of collapse-revivals. This behaviour is absent for the first case of the initial states. We also show that the degree of entanglement strongly depends upon the electronic subband excitations in both cases.

  15. Teleportation of a general two-mode coherent-state superposition via attenuated quantum channels with ideal and/or threshold detectors

    NASA Astrophysics Data System (ADS)

    An, Nguyen Ba

    2009-04-01

    Three novel probabilistic yet conclusive schemes are proposed to teleport a general two-mode coherent-state superposition via attenuated quantum channels with ideal and/or threshold detectors. The calculated total success probability is highest (lowest) when only ideal (threshold) detectors are used.

  16. The all-source Green's function (ASGF) and its applications to storm surge modeling, part I: from the governing equations to the ASGF convolution

    NASA Astrophysics Data System (ADS)

    Xu, Zhigang

    2015-12-01

    In this study, a new method of storm surge modeling is proposed. This method is orders of magnitude faster than the traditional method within the linear dynamics framework. The tremendous enhancement of the computational efficiency results from the use of a pre-calculated all-source Green's function (ASGF), which connects a point of interest (POI) to the rest of the world ocean. Once the ASGF has been pre-calculated, it can be repeatedly used to quickly produce a time series of a storm surge at the POI. Using the ASGF, storm surge modeling can be simplified as its convolution with an atmospheric forcing field. If the ASGF is prepared with the global ocean as the model domain, the output of the convolution is free of the effects of artificial open-water boundary conditions. As the first part of this study, this paper presents mathematical derivations from the linearized and depth-averaged shallow-water equations to the ASGF convolution, establishes various auxiliary concepts that will be useful throughout the study, and interprets the meaning of the ASGF from different perspectives. This paves the way for the ASGF convolution to be further developed as a data-assimilative regression model in part II. Five appendices provide additional details about the algorithm and the MATLAB functions.
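
    The payoff of the pre-computation is that each new forcing scenario costs only a discrete convolution. A heavily simplified 1-D illustration (our own sketch: the kernel and forcing are made up, and the real ASGF also carries a spatial sum over forcing grid points):

    ```python
    import numpy as np

    def surge_at_poi(asgf, forcing, dt):
        """Storm surge time series at a point of interest as a convolution.

        eta(t_n) ~= dt * sum_k G(t_k) F(t_n - t_k): once the response kernel
        G has been pre-computed, every new atmospheric forcing series F is
        just one discrete convolution away.
        """
        return dt * np.convolve(asgf, forcing)[: len(forcing)]

    dt = 3600.0                                  # one-hour time step [s]
    t = np.arange(0, 72) * dt
    asgf = np.exp(-t / (12 * 3600.0)) * 1e-5     # hypothetical response kernel
    forcing = np.where((t > 6 * 3600) & (t < 18 * 3600), 1.0, 0.0)
    print(surge_at_poi(asgf, forcing, dt)[:5])
    ```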

  17. Push-Pull Optical Pumping of Pure Superposition States

    NASA Astrophysics Data System (ADS)

    Jau, Y.-Y.; Miron, E.; Post, A. B.; Kuzma, N. N.; Happer, W.

    2004-10-01

    A new optical pumping method, “push-pull pumping,” can produce very nearly pure, coherent superposition states between the initial and the final sublevels of the important field-independent 0-0 clock resonance of alkali-metal atoms. The key requirement for push-pull pumping is the use of D1 resonant light which alternates between left and right circular polarization at the Bohr frequency of the state. The new pumping method works for a wide range of conditions, including atomic beams with almost no collisions, and atoms in buffer gases with pressures of many atmospheres.

  18. Controllable photon bunching by atomic superpositions in a driven cavity

    NASA Astrophysics Data System (ADS)

    Guo, Weijie; Wang, Yao; Wei, L. F.

    2016-04-01

    We propose a feasible approach to generate light with controllable photon bunching by adjusting the atomic superpositions in a driven cavity. In the large-detuning limit, i.e., when the cavity is far off resonance with the atom(s) inside, we show that the photons in the cavity are always bunched. Typically, when the effective dispersive interaction equals the detuning between the driving and cavity fields, we find that the value of the second-order correlation g^(2)(0) is inversely proportional to the probability of the superposed atomic state. This suggests that this value can be arbitrarily large, and thus the bunching of the photons can be significantly enhanced.
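
    For reference, the standard equal-time second-order correlation function meant here (generic definition, not specific to this paper; values above 1 indicate bunching, and g^(2)(0) = 1 for coherent light):

    ```latex
    % Equal-time second-order correlation of a single cavity mode a.
    \[
      g^{(2)}(0) \;=\;
        \frac{\langle \hat a^{\dagger}\hat a^{\dagger}\hat a\,\hat a\rangle}
             {\langle \hat a^{\dagger}\hat a\rangle^{2}}
    \]
    ```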

  19. Scaling of macroscopic superpositions close to a quantum phase transition

    NASA Astrophysics Data System (ADS)

    Abad, Tahereh; Karimipour, Vahid

    2016-05-01

    It is well known that in a quantum phase transition (QPT), entanglement remains short ranged [Osterloh et al., Nature (London) 416, 608 (2002), 10.1038/416608a]. We ask if there is a quantum property involving the whole system which diverges near this point. Using the recently proposed measures of quantum macroscopicity, we show that near a quantum critical point, it is the effective size of the macroscopic superposition between the two symmetry-breaking states which grows to the scale of the system size, and its derivative with respect to the coupling shows both singular behavior and scaling properties.

  20. Accelerated Superposition State Molecular Dynamics for Condensed Phase Systems.

    PubMed

    Ceotto, Michele; Ayton, Gary S; Voth, Gregory A

    2008-04-01

    An extension of superposition state molecular dynamics (SSMD) [Venkatnathan and Voth, J. Chem. Theory Comput. 2005, 1, 36] is presented with the goal of accelerating timescales and enabling the study of "long-time" phenomena for condensed phase systems. It does not require any a priori knowledge about final and transition state configurations, or specific topologies. The system is induced to explore new configurations by virtue of a fictitious (free-particle-like) accelerating potential. The acceleration method can be applied to all degrees of freedom in the system, including condensed phases and fluids. PMID:26620930

  1. A Review on the Use of Grid-Based Boltzmann Equation Solvers for Dose Calculation in External Photon Beam Treatment Planning

    PubMed Central

    Kan, Monica W. K.; Yu, Peter K. N.; Leung, Lucullus H. T.

    2013-01-01

    Deterministic linear Boltzmann transport equation (D-LBTE) solvers have recently been developed, and one of the latest available software codes, Acuros XB, has been implemented in a commercial treatment planning system for radiotherapy photon beam dose calculation. One of the major limitations of most commercially available model-based algorithms for photon dose calculation is the inability to account fully for the effect of electron transport. This induces some errors in patient dose calculations, especially near heterogeneous interfaces between low and high density media such as tissue/lung interfaces. D-LBTE solvers have a high potential of producing accurate dose distributions in and near heterogeneous media in the human body. Extensive previous investigations have shown that D-LBTE solvers were able to produce dose calculation accuracy comparable to that of Monte Carlo methods at a speed acceptable for clinical use. The current paper reviews the dosimetric evaluations of D-LBTE solvers for external beam photon radiotherapy. It summarizes and discusses dosimetric validations for D-LBTE solvers in both homogeneous and heterogeneous media under different circumstances, and also the clinical impact on various diseases due to the conversion of dose calculation from a conventional convolution/superposition algorithm to a recently released D-LBTE solver. PMID:24066294

  2. Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks

    NASA Astrophysics Data System (ADS)

    Zuo, Zhen; Shuai, Bing; Wang, Gang; Liu, Xiao; Wang, Xingxing; Wang, Bing; Chen, Yushi

    2016-07-01

    Existing deep convolutional neural networks (CNNs) have shown great success in image classification. CNNs mainly consist of convolutional and pooling layers, both of which operate on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representations. In contrast, recurrent neural networks (RNNs) are well known for their ability to encode contextual information among sequential data, and they require only a limited number of network parameters. Since general RNNs can hardly be applied directly to non-sequential data, we propose hierarchical RNNs (HRNNs). In HRNNs, each RNN layer focuses on modeling spatial dependencies among image regions from the same scale but different locations, while the cross-scale RNN connections model scale dependencies among regions from the same location but different scales. Specifically, we propose two recurrent neural network models: 1) the hierarchical simple recurrent network (HSRN), which is fast and has a low computational cost; and 2) the hierarchical long short-term memory recurrent network (HLSTM), which performs better than the HSRN at the price of a higher computational cost. In this manuscript, we integrate CNNs with HRNNs and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only make use of the representation power of CNNs, but also efficiently encode spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397, and MIT indoor, and competitive results on ILSVRC 2012.

  3. Convolutional neural networks for mammography mass lesion classification.

    PubMed

    Arevalo, John; Gonzalez, Fabio A; Ramos-Pollan, Raul; Oliveira, Jose L; Guevara Lopez, Miguel Angel

    2015-08-01

    Feature extraction is a fundamental step when mammography image analysis is addressed using learning-based approaches. Traditionally, problem-dependent handcrafted features are used to represent the content of images. An alternative approach successfully applied in other domains is the use of neural networks to automatically discover good features. This work presents an evaluation of convolutional neural networks to learn features for mammography mass lesions before feeding them to a classification stage. Experimental results showed that this approach is a suitable strategy, improving the area under the ROC curve from 79.9% to 86% compared with the state-of-the-art representation. PMID:26736382

  4. Convolution seal for transition duct in turbine system

    SciTech Connect

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-03-10

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface member for interfacing with a turbine section. The turbine system further includes a convolution seal contacting the interface member to provide a seal between the interface member and the turbine section.

  5. Convolution seal for transition duct in turbine system

    SciTech Connect

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-05-26

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface feature for interfacing with an adjacent transition duct. The turbine system further includes a convolution seal contacting the interface feature to provide a seal between the interface feature and the adjacent transition duct.

  6. Is turbulent mixing a self-convolution process?

    PubMed

    Venaille, Antoine; Sommeria, Joel

    2008-06-13

    Experimental results for the evolution of the probability distribution function (PDF) of a scalar mixed by a turbulent flow in a channel are presented. The sequence of PDFs from an initial skewed distribution to a sharp Gaussian is found to be nonuniversal. The route toward homogenization depends on the ratio between the cross sections of the dye injector and the channel. In connection with this observation, the advantages, shortcomings, and applicability of models for the PDF evolution based on a self-convolution mechanism are discussed. PMID:18643510

  7. A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution

    SciTech Connect

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k-processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM-5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  8. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.

  9. New Syndrome Decoding Techniques for the (n, K) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector (circumflex)E(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC.

  10. New syndrome decoding techniques for the (n, k) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964

  11. Simplified Syndrome Decoding of (n, 1) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC), different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck, is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.

  12. Continuous speech recognition based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Qing-qing; Liu, Yong; Pan, Jie-lin; Yan, Yong-hong

    2015-07-01

    Convolutional Neural Networks (CNNs), which have shown success in achieving translation invariance for many image processing tasks, are investigated for continuous speech recognition in this paper. Compared to Deep Neural Networks (DNNs), which have proven successful in many speech recognition tasks, CNNs can reduce the neural network model size significantly while achieving even better recognition accuracy. Experiments on the standard speech corpus TIMIT showed that CNNs outperformed DNNs in terms of accuracy even though the CNNs had a smaller model size.

  13. A digital model for streamflow routing by convolution methods

    USGS Publications Warehouse

    Doyle, W.H., Jr.; Shearman, H.O.; Stiltner, G.J.; Krug, W.O.

    1984-01-01

    A U.S. Geological Survey computer model, CONROUT, for routing streamflow by unit-response convolution flow-routing techniques from an upstream channel location to a downstream channel location has been developed and documented. Calibration and verification of the flow-routing model, and subsequent use of the model for simulation, are also documented. Three hypothetical examples and two field applications are presented to illustrate basic flow-routing concepts. Most of the discussion is limited to daily flow routing since, to date, all completed and current studies of this nature involve daily flow routing. However, the model is programmed to accept hourly flow-routing data. (USGS)
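
    The core routing operation is a discrete convolution of the upstream hydrograph with a unit-response function. A minimal sketch (the 5-day response and the flows below are made up; CONROUT calibrates its own unit response) is:

        import numpy as np

        # Unit-response convolution routing: downstream daily flows are the
        # upstream daily flows convolved with a unit-response function.
        unit_response = np.array([0.1, 0.4, 0.3, 0.15, 0.05])  # sums to 1
        upstream = np.array([10.0, 80.0, 50.0, 30.0, 20.0, 15.0, 12.0])

        downstream = np.convolve(upstream, unit_response)[:upstream.size]
        print(downstream.round(1))   # attenuated, delayed hydrograph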

  14. Faster GPU-based convolutional gridding via thread coarsening

    NASA Astrophysics Data System (ADS)

    Merry, B.

    2016-07-01

    Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to 3.2 × for single-polarization gridding and 1.9 × for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.

  15. Modeling scattering from azimuthally symmetric bathymetric features using wavefield superposition.

    PubMed

    Fawcett, John A

    2007-12-01

    In this paper, an approach for modeling the scattering from azimuthally symmetric bathymetric features is described. These features are useful models for small mounds and indentations on the seafloor at high frequencies, and for seamounts, shoals, and basins at low frequencies. A bathymetric feature can be considered as a compact closed region with the same sound speed and density as one of the surrounding media. Using this approach, a number of numerical methods appropriate for a partially buried target or facet problem can be applied. This paper considers the use of wavefield superposition; because of the azimuthal symmetry, the three-dimensional solution to the scattering problem can be expressed as a Fourier sum of solutions to a set of two-dimensional scattering problems. In the case where the two surrounding half-spaces have only a density contrast, a semianalytic coupled-mode solution is derived. This provides a benchmark solution for scattering from a class of penetrable hemispherical bosses or indentations. The details and problems of the numerical implementation of the wavefield superposition method are described. Example computations using the method for a simple scattering feature on a seabed are presented for a wide band of frequencies. PMID:18247740

  16. Experiments testing macroscopic quantum superpositions must be slow

    PubMed Central

    Mari, Andrea; De Palma, Giacomo; Giovannetti, Vittorio

    2016-01-01

    We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires accelerating the charge, which can entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides indirect evidence for the existence of gravitational vacuum fluctuations and for the possibility of entangling a particle with quantum gravitational radiation. PMID:26959656

  17. Experiments testing macroscopic quantum superpositions must be slow.

    PubMed

    Mari, Andrea; De Palma, Giacomo; Giovannetti, Vittorio

    2016-01-01

    We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires accelerating the charge, which can entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides indirect evidence for the existence of gravitational vacuum fluctuations and for the possibility of entangling a particle with quantum gravitational radiation. PMID:26959656

  18. Experiments testing macroscopic quantum superpositions must be slow

    NASA Astrophysics Data System (ADS)

    Mari, Andrea; de Palma, Giacomo; Giovannetti, Vittorio

    2016-03-01

    We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires accelerating the charge, which can entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides indirect evidence for the existence of gravitational vacuum fluctuations and for the possibility of entangling a particle with quantum gravitational radiation.

  19. Runs in superpositions of renewal processes with applications to discrimination

    NASA Astrophysics Data System (ADS)

    Alsmeyer, Gerold; Irle, Albrecht

    2006-02-01

    Wald and Wolfowitz [Ann. Math. Statist. 11 (1940) 147-162] introduced the run test for testing whether two samples of i.i.d. random variables follow the same distribution. Here a run means a consecutive subsequence of maximal length from only one of the two samples. In this paper we contribute to the problem of runs and the resulting test procedures for the superposition of independent renewal processes, which may be interpreted as arrival processes of customers from two different input channels at the same service station. To be more precise, let (S_n)_{n≥1} and (T_n)_{n≥1} be the arrival processes for channel 1 and channel 2, respectively, and (W_n)_{n≥1} their superposition with the associated counting process. Let further R_n be the number of runs in W_1,...,W_n and R_t the number of runs observed up to time t. We study the asymptotic behavior of R_n and R_t, first for the case where (S_n)_{n≥1} and (T_n)_{n≥1} have exponentially distributed increments with parameters λ1 and λ2, and then for the more difficult situation when these increments have an absolutely continuous distribution. These results are used to design asymptotic level-α tests for testing λ1 = λ2 against λ1 ≠ λ2 in the first case, and for testing for equal scale parameters in the second.
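
    The run statistic itself is simple to reproduce in simulation. The sketch below (rates illustrative) merges two renewal processes with exponential increments and counts R_n, the number of runs among the first n merged arrivals:

        import numpy as np

        rng = np.random.default_rng(0)

        def arrivals(rate, n):
            # cumulative sums of exponential increments = renewal epochs
            return np.cumsum(rng.exponential(1.0 / rate, n))

        s, t = arrivals(1.0, 500), arrivals(1.2, 500)     # two channels
        labels = np.concatenate([np.zeros(s.size), np.ones(t.size)])
        order = np.argsort(np.concatenate([s, t]))
        w = labels[order]            # channel label of each merged arrival

        runs = 1 + np.count_nonzero(w[1:] != w[:-1])      # R_n for n = w.size
        print(runs)

    Under λ1 = λ2 the merged labels form an i.i.d. fair coin sequence, which is what makes the null distribution of R_n tractable in the exponential case.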

  20. Time-Temperature Superposition Applied to PBX Mechanical Properties

    NASA Astrophysics Data System (ADS)

    Thompson, Darla; Deluca, Racci

    2011-06-01

    The use of plastic-bonded explosives (PBXs) in weapon applications requires a certain level of structural/mechanical integrity. Uniaxial tension and compression experiments characterize the mechanical response of materials over a wide range of temperatures and strain rates, providing the basis for predictive modeling in more complex geometries. After years of data collection on a wide variety of PBX formulations, we have applied time-temperature superposition principles to a mechanical properties database which includes PBX 9501, PBX 9502, PBXN-110, PBXN-9, and HPP (propellant). The results of quasi-static tension and compression, SHPB compression, and cantilever DMA are compared. Time-temperature relationships of maximum stress and corresponding strain values are analyzed in addition to the more conventional analysis of modulus. Our analysis shows adherence to the principles of time-temperature superposition and correlations of mechanical response to the binder glass transition and specimen density. Direct ties relate time-temperature analysis to the underlying basis of existing PBX mechanical models (ViscoSCRAM). Results suggest that, within limits, mechanical response can be predicted at conditions not explicitly measured. LA-UR 11-01096.

  1. Time-temperature superposition applied to PBX mechanical properties

    NASA Astrophysics Data System (ADS)

    Thompson, Darla; DeLuca, Racci; Wright, Walter J.

    2012-03-01

    The use of plastic-bonded explosives (PBXs) in weapon applications requires that they possess and maintain a level of structural/mechanical integrity. Uniaxial tension and compression experiments are typically used to characterize the mechanical response of materials over a wide range of temperatures and strain rates, providing the basis for predictive modeling in more complex geometries. After many years of data collection on a variety of PBX formulations, we have here applied the principles of time-temperature superposition to a mechanical properties database which includes PBX 9501, PBX 9502, PBXN-110, PBXN-9, and HPP (propellant). Consistencies are demonstrated between the results of quasi-static tension and compression, dynamic Split-Hopkinson Pressure Bar (SHPB) compression, and cantilever Dynamic Mechanical Analysis (DMA). Time-temperature relationships of maximum stress and corresponding strain values are analyzed, in addition to the more conventional analysis of modulus. The extensive analysis shows adherence to the principles of time-temperature superposition and correlations of mechanical response to binder glass-transition temperature (Tg) and specimen density. Direct ties exist between the time-temperature analysis and the underlying basis of a useful existing PBX mechanical model (ViscoSCRAM). Results give confidence that, with some limitations, mechanical response can be predicted at conditions not explicitly measured.
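
    The mechanical core of time-temperature superposition is a horizontal shift of each isothermal curve on the log-time (or log-rate) axis. A hedged sketch using the classic "universal" WLF constants (purely illustrative; the paper fits shift factors to its own PBX data) is:

        import numpy as np

        # WLF shift factor: log10 a_T = -C1 (T - Tref) / (C2 + T - Tref)
        C1, C2, T_REF = 17.44, 51.6, 298.0   # illustrative constants

        def log_shift_factor(T):
            return -C1 * (T - T_REF) / (C2 + (T - T_REF))

        # Data measured at temperature T overlay the master curve at T_REF
        # after shifting the log10 strain-rate axis by log10 a_T.
        log_rate = np.linspace(-3.0, 3.0, 7)   # log10 rates measured at T
        T = 320.0
        print((log_rate + log_shift_factor(T)).round(2))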

  2. Evolution of superpositions of quantum states through a level crossing

    SciTech Connect

    Torosov, B. T.; Vitanov, N. V.

    2011-12-15

    The Landau-Zener-Stueckelberg-Majorana (LZSM) model is widely used for estimating transition probabilities in the presence of crossing energy levels in quantum physics. This model, however, makes the unphysical assumption of an infinitely long constant interaction, which introduces a divergent phase in the propagator. This divergence remains hidden when estimating output probabilities for a single input state insofar as the divergent phase cancels out. In this paper we show that, because of this divergent phase, the LZSM model is inadequate to describe the evolution of pure or mixed superposition states across a level crossing. The LZSM model can be used only if the system is initially in a single state or in a completely mixed superposition state. To this end, we show that the more realistic Demkov-Kunike model, which assumes a hyperbolic-tangent level crossing and a hyperbolic-secant interaction envelope, is free of divergences and is a much more adequate tool for describing the evolution through a level crossing for an arbitrary input state. For multiple crossing energies which are reducible to one or more effective two-state systems (e.g., by the Majorana and Morris-Shore decompositions), similar conclusions apply: the LZSM model does not produce definite values of the populations and the coherences, and one should use the Demkov-Kunike model instead.
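
    For reference, in the notation commonly used for the Demkov-Kunike model (our summary, not quoted from the paper), the two-state coupling and detuning are

        \Omega(t) = \Omega_{0}\,\mathrm{sech}(t/T), \qquad
        \Delta(t) = \Delta_{0} + B\,\tanh(t/T),

    so both the interaction and the sweep saturate at finite values, which removes the divergent phase of the LZSM limit (constant coupling, linear detuning).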

  3. Free energy surfaces from an extended harmonic superposition approach and kinetics for alanine dipeptide

    NASA Astrophysics Data System (ADS)

    Strodel, Birgit; Wales, David J.

    2008-12-01

    Approximate free energy surfaces and transition rates are presented for alanine dipeptide for a variety of force fields and implicit solvent models. Our calculations are based upon local minima, transition states and pathways characterised for each potential energy surface using geometry optimisation. The superposition approach employing only local minima and harmonic densities of states provides a representation of low-lying regions of the free energy surfaces. However, including contributions from the transition states of the potential energy surface and selected points obtained from displacements along the corresponding reaction vectors produces surfaces that compare quite well with results from replica exchange molecular dynamics. Characterising the local minima, transition states, normal modes, pathways, rate constants and free energy surfaces for each force field within this framework typically requires between one and five minutes cpu time on a single processor.
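
    In its standard classical form (a textbook expression consistent with, but not quoted from, the paper), the harmonic superposition approximation assembles the partition function from the catalogued minima a with energies V_a and normal-mode frequencies \omega_i^{(a)}:

        Z(\beta) \approx \sum_{a} n_{a}\, e^{-\beta V_{a}} \prod_{i=1}^{\kappa} \frac{1}{\beta \hbar \omega_{i}^{(a)}},

    where n_a is the permutational degeneracy and \kappa the number of nonzero modes; free energy surfaces follow from F = -k_B T \ln Z with the sum restricted to the minima in each bin of the chosen order parameter.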

  4. Capillary force and torque on spheroidal particles floating at a fluid interface beyond the superposition approximation

    NASA Astrophysics Data System (ADS)

    Galatola, P.

    2016-02-01

    By means of a perturbative scheme, we determine analytically the capillary energy of a spheroidal colloid floating on a deformed fluid interface in terms of the local curvature tensor of the background deformation. We validate our results, that hold for small ellipticity of the particle and small deformations of the surface, by an exact numerical calculation. As an application of our perturbative approach, we determine the asymptotic interaction, for large separations d, between two different spheroidal particles. The dominant contribution is quadrupolar and proportional to d^{-4}. It coincides with the known superposition approximation and is zero if one of the two particles is spherical. The next-to-leading approximation, proportional to d^{-8}, is always attractive and independent of the orientation of the two colloids. It is the dominant contribution to the interaction between a spheroidal and a spherical colloid.

  5. Convolution and non convolution Perfectly Matched Layer techniques optimized at grazing incidence for high-order wave propagation modelling

    NASA Astrophysics Data System (ADS)

    Martin, Roland; Komatitsch, Dimitri; Bruthiaux, Emilien; Gedney, Stephen D.

    2010-05-01

    We present and discuss here two different unsplit formulations of the frequency-shift PML based on convolution or non-convolution integrations of auxiliary memory variables. Indeed, the Perfectly Matched Layer absorbing boundary condition has proven to be very efficient from a numerical point of view for the elastic wave equation to absorb both body waves with non-grazing incidence and surface waves. However, at grazing incidence the classical discrete Perfectly Matched Layer method suffers from large spurious reflections that make it less efficient for instance in the case of very thin mesh slices, in the case of sources located very close to the edge of the mesh, and/or in the case of receivers located at very large offset. In [1] we improve the Perfectly Matched Layer at grazing incidence for the seismic wave equation based on an unsplit convolution technique. This improved PML has a cost that is similar in terms of memory storage to that of the classical PML. We illustrate the efficiency of this improved Convolutional Perfectly Matched Layer based on numerical benchmarks using a staggered finite-difference method on a very thin mesh slice for an isotropic material and show that results are significantly improved compared with the classical Perfectly Matched Layer technique. We also show that, like the classical model, the technique is intrinsically unstable in the case of some anisotropic materials. In this case, retaining an idea of [2], it has been stabilized by adding correction terms adequately along any coordinate axis [3]. More specifically this has been applied to the spectral-element method based on a hybrid first/second order time integration scheme in which the Newmark time marching scheme allows us to match perfectly at the base of the absorbing layer a velocity-stress formulation in the PML and a second order displacement formulation in the inner computational domain. Our CPML unsplit formulation has the advantage of reducing the memory storage of CPML.
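
    For orientation, in the recursive-convolution form of CPML introduced by Roden and Gedney (a standard formulation consistent with the unsplit schemes discussed here), each spatial derivative \partial_x u is augmented by a memory variable \psi_x that is updated locally in time rather than by an explicit convolution:

        \psi_{x}^{\,n} = b_{x}\,\psi_{x}^{\,n-1} + a_{x}\,(\partial_{x} u)^{\,n-1/2},
        \qquad
        b_{x} = e^{-(d_{x}/\kappa_{x} + \alpha_{x})\,\Delta t},
        \qquad
        a_{x} = \frac{d_{x}}{\kappa_{x}(d_{x} + \kappa_{x}\alpha_{x})}\,(b_{x} - 1),

    where d_x is the damping profile, \kappa_x the grid stretching and \alpha_x the frequency-shift parameter; it is the nonzero \alpha_x that improves absorption at grazing incidence.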

  6. Enthalpy difference between conformations of normal alkanes: effects of basis set and chain length on intramolecular basis set superposition error

    NASA Astrophysics Data System (ADS)

    Balabin, Roman M.

    2011-03-01

    The quantum chemistry of conformation equilibrium is a field where great accuracy (better than 100 cal mol^{-1}) is needed because the energy difference between molecular conformers rarely exceeds 1000-3000 cal mol^{-1}. The conformation equilibrium of straight-chain (normal) alkanes is of particular interest and importance for modern chemistry. In this paper, an extra error source for high-quality ab initio (first principles) and DFT calculations of the conformation equilibrium of normal alkanes, namely the intramolecular basis set superposition error (BSSE), is discussed. In contrast to out-of-plane vibrations in benzene molecules, diffuse functions on carbon and hydrogen atoms were found to greatly reduce the relative BSSE of n-alkanes. The corrections due to the intramolecular BSSE were found to be almost identical for the MP2, MP4, and CCSD(T) levels of theory. Their cancelation is expected when CCSD(T)/CBS (CBS, complete basis set) energies are evaluated by addition schemes. For larger normal alkanes (N > 12), the magnitude of the BSSE correction was found to be up to three times larger than the relative stability of the conformer; in this case, the basis set superposition error led to a two orders of magnitude difference in conformer abundance. No error cancelation due to the basis set superposition was found. A comparison with amino acid, peptide, and protein data was provided.

  7. Multiple deep convolutional neural networks averaging for face alignment

    NASA Astrophysics Data System (ADS)

    Zhang, Shaohua; Yang, Hua; Yin, Zhouping

    2015-05-01

    Face alignment is critical for face recognition, and deep learning-based methods show promise for solving such issues, given that competitive results are achieved on benchmarks with additional benefits, such as dispensing with handcrafted features and initial shape estimates. However, most existing deep learning-based approaches are complicated and quite time-consuming during training. We propose a compact face alignment method for fast training without decreasing its accuracy. Rectified linear units are employed, which allow all networks to converge approximately five times faster than with tanh neurons. An eight-learnable-layer deep convolutional neural network (DCNN) based on local response normalization and a padding convolutional layer (PCL) is designed to provide reliable initial values during prediction. A model combination scheme is presented to further reduce errors, while showing that only two network architectures and hyperparameter selection procedures are required in our approach. A three-level cascaded system is ultimately built based on the DCNNs and the model combination mode. Extensive experiments validate the effectiveness of our method and demonstrate comparable accuracy with state-of-the-art methods on the BioID, labeled face parts in the wild, and Helen datasets.

  8. Convolutional neural network architectures for predicting DNA–protein binding

    PubMed Central

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
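
    The elementary operation these architectures stack is a kernel scanning one-hot DNA, followed by pooling. A toy sketch (hypothetical hand-set 3-bp kernel; the paper's kernels are learned) is:

        import numpy as np

        BASES = "ACGT"

        def one_hot(seq):
            # sequence length x 4 binary matrix
            return np.array([[float(b == base) for base in BASES] for b in seq])

        kernel = np.array([   # hypothetical detector for the 3-mer "TAC"
            [0, 0, 0, 1],     # T
            [1, 0, 0, 0],     # A
            [0, 1, 0, 0],     # C
        ], dtype=float)

        x = one_hot("GGTACGTTAC")
        scores = np.array([np.sum(x[i:i + 3] * kernel)
                           for i in range(len(x) - 2)])
        print(scores)          # peaks where "TAC" occurs
        print(scores.max())    # max-pooling over positions, as in many CNNs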

  9. Classification of Histology Sections via Multispectral Convolutional Sparse Coding*

    PubMed Central

    Zhou, Yin; Barner, Kenneth; Spellman, Paul

    2014-01-01

    Image-based classification of histology sections plays an important role in predicting clinical outcomes. However, this task is very challenging due to the presence of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state). In the field of biomedical imaging, for the purposes of visualization and/or quantification, different stains are typically used for different targets of interest (e.g., cellular/subcellular events), which generates multi-spectrum data (images) through various types of microscopes and, as a result, provides the possibility of learning biological-component-specific features by exploiting multispectral information. We propose a multispectral feature learning model that automatically learns a set of convolution filter banks from separate spectra to efficiently discover the intrinsic tissue morphometric signatures, based on convolutional sparse coding (CSC). The learned feature representations are then aggregated through the spatial pyramid matching framework (SPM) and finally classified using a linear SVM. The proposed system has been evaluated using two large-scale tumor cohorts, collected from The Cancer Genome Atlas (TCGA). Experimental results show that the proposed model 1) outperforms systems utilizing sparse coding for unsupervised feature learning (e.g., PSD-SPM [5]); 2) is competitive with systems built upon features with biological prior knowledge (e.g., SMLSPM [4]). PMID:25554749
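
    The convolutional sparse coding problem at the heart of the system is usually posed (standard form; the multispectral variant here learns a separate filter bank per spectrum) as

        \min_{\{d_{k}\},\{z_{k}\}} \; \frac{1}{2}\Bigl\| x - \sum_{k=1}^{K} d_{k} \ast z_{k} \Bigr\|_{2}^{2}
        + \lambda \sum_{k=1}^{K} \| z_{k} \|_{1}
        \quad \text{subject to } \| d_{k} \|_{2} \le 1,

    with convolutional filters d_k and sparse feature maps z_k; the learned banks then feed the SPM aggregation and linear SVM described above.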

  10. Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks.

    PubMed

    Dosovitskiy, Alexey; Fischer, Philipp; Springenberg, Jost Tobias; Riedmiller, Martin; Brox, Thomas

    2016-09-01

    Deep convolutional networks have proven to be very successful in learning task-specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While features learned with our approach cannot compete with class-specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor. PMID:26540673

  11. Enhancing Neutron Beam Production with a Convoluted Moderator

    SciTech Connect

    Iverson, Erik B; Baxter, David V; Muhrer, Guenter; Ansell, Stuart; Gallmeier, Franz X; Dalgliesh, Robert; Lu, Wei; Kaiser, Helmut

    2014-10-01

    We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally-enhanced neutron beam source, improving beam effectiveness over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.

  12. Convolutional Neural Network Based Fault Detection for Rotating Machinery

    NASA Astrophysics Data System (ADS)

    Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie

    2016-09-01

    Vibration analysis is a well-established technique for condition monitoring of rotating machines, as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually engineered features, such as the ball pass frequencies of the raceway, RMS, kurtosis, and crest factor, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults such as outer-raceway faults and lubrication degradation are considered, but also healthy bearings and rotor imbalance are included. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.
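
    The manually engineered baseline features named above are one-liners on a vibration trace; a sketch on a synthetic signal (all parameters illustrative) is:

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 1.0, 20_000)
        signal = np.sin(2 * np.pi * 157.0 * t) + 0.3 * rng.standard_normal(t.size)

        rms = np.sqrt(np.mean(signal ** 2))                      # RMS level
        kurt = np.mean((signal - signal.mean()) ** 4) / signal.var() ** 2
        crest = np.max(np.abs(signal)) / rms                     # crest factor
        print(f"RMS={rms:.3f}  kurtosis={kurt:.2f}  crest={crest:.2f}")

    The point of the paper is that a convolutional network learns its own discriminative features from the raw signal instead of relying on such hand-picked statistics.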

  13. Deep Convolutional Neural Networks for large-scale speech tasks.

    PubMed

    Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana

    2015-04-01

    Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations which exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks. Specifically, we focus on how many convolutional layers are needed, what an appropriate number of hidden units is, and what the best pooling strategy is. Second, we investigate how to incorporate speaker-adapted features, which cannot directly be modeled by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allows for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results in these 3 tasks. PMID:25439765

  14. Comparison of dose calculation algorithms in phantoms with lung equivalent heterogeneities under conditions of lateral electronic disequilibrium.

    PubMed

    Carrasco, P; Jornet, N; Duch, M A; Weber, L; Ginjaume, M; Eudaldo, T; Jurado, D; Ruiz, A; Ribas, M

    2004-10-01

    An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung equivalent heterogeneity, by means of several detectors and compared against the predicted dose values by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane parallel and cylindrical ionization chambers and beam profiles with films. Additionally, Monte Carlo simulations by means of the PENELOPE code were performed. Four different field sizes (10 x 10, 5 x 5, 2 x 2, and 1 x 1 cm²) and two lung equivalent materials (CIRS, ρ_e^w = 0.195 and St. Bartholomew Hospital, London, ρ_e^w = 0.244-0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that correlated successfully with the measured values with a 2% average inside all media were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose in the beam axis was found for the EqTAR algorithm inside the CIRS lung equivalent material in a 2 x 2 cm² 18 MV x-ray beam. In these conditions, average and maximum difference against the TLD measurements were 32% and 39%, respectively. In the water equivalent part of the phantom every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces where differences up to 24% were found for 2 x 2 cm² 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo simulations, yielding minimal

  15. Comparison of dose calculation algorithms in phantoms with lung equivalent heterogeneities under conditions of lateral electronic disequilibrium

    SciTech Connect

    Carrasco, P.; Jornet, N.; Duch, M.A.; Weber, L.; Ginjaume, M.; Eudaldo, T.; Jurado, D.; Ruiz, A.; Ribas, M.

    2004-10-01

    An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung equivalent heterogeneity, by means of several detectors and compared against the predicted dose values by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane parallel and cylindrical ionization chambers and beam profiles with films. Additionally, Monte Carlo simulations by means of the PENELOPE code were performed. Four different field sizes (10x10, 5x5, 2x2, and 1x1 cm²) and two lung equivalent materials (CIRS, ρ_e^w = 0.195 and St. Bartholomew Hospital, London, ρ_e^w = 0.244-0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that correlated successfully with the measured values with a 2% average inside all media were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose in the beam axis was found for the EqTAR algorithm inside the CIRS lung equivalent material in a 2x2 cm² 18 MV x-ray beam. In these conditions, average and maximum difference against the TLD measurements were 32% and 39%, respectively. In the water equivalent part of the phantom every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces where differences up to 24% were found for 2x2 cm² 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo

  16. Effects of superpositions of quantum states on quantum isoenergetic cycles: Efficiency and maximum power output

    NASA Astrophysics Data System (ADS)

    Niu, X. Y.; Huang, X. L.; Shang, Y. F.; Wang, X. Y.

    2015-04-01

    The superposition principle plays a crucial role in quantum mechanics, so its effects on thermodynamics are an interesting topic. Here, the effects of superpositions of quantum states on the isoenergetic cycle are studied. We find that superposition can improve the heat engine efficiency and relax the positive work condition in the general case. In the finite-time process, we find that the efficiency at maximum power output in the superposition case is lower than in the nonsuperposition case. This efficiency depends on one index of the energy spectrum of the working substance. This result does not mean that superposition degrades the heat engine's performance: for fixed efficiency or fixed power, the superposition improves the power or the efficiency, respectively. These results show how quantum mechanical properties affect the thermodynamic cycle.

  17. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy. PMID:24710398

  18. The origin of non-classical effects in a one-dimensional superposition of coherent states

    NASA Technical Reports Server (NTRS)

    Buzek, V.; Knight, P. L.; Barranco, A. Vidiella

    1992-01-01

    We investigate the nature of the quantum fluctuations in a light field created by the superposition of coherent fields. We give a physical explanation (in terms of Wigner functions and phase-space interference) why the 1-D superposition of coherent states in the direction of the x-quadrature leads to the squeezing of fluctuations in the y-direction, and show that such a superposition can generate the squeezed vacuum and squeezed coherent states.

  19. Macroscopicity of quantum superpositions on a one-parameter unitary path in Hilbert space

    NASA Astrophysics Data System (ADS)

    Volkoff, T. J.; Whaley, K. B.

    2014-12-01

    We analyze quantum states formed as superpositions of an initial pure product state and its image under local unitary evolution, using two measurement-based measures of superposition size: one based on the optimal quantum binary distinguishability of the branches of the superposition and another based on the ratio of the maximal quantum Fisher information of the superposition to that of its branches, i.e., the relative metrological usefulness of the superposition. A general formula for the effective sizes of these states according to the branch-distinguishability measure is obtained and applied to superposition states of N quantum harmonic oscillators composed of Gaussian branches. Considering optimal distinguishability of pure states on a time-evolution path leads naturally to a notion of distinguishability time that generalizes the well-known orthogonalization times of Mandelstam and Tamm and Margolus and Levitin. We further show that the distinguishability time provides a compact operational expression for the superposition size measure based on the relative quantum Fisher information. By restricting the maximization procedure in the definition of this measure to an appropriate algebra of observables, we show that the superposition size of, e.g., NOON states and hierarchical cat states, can scale linearly with the number of elementary particles comprising the superposition state, implying precision scaling inversely with the total number of photons when these states are employed as probes in quantum parameter estimation of a 1-local Hamiltonian in this algebra.

  20. Robustness of superposition states evolving under the influence of a thermal reservoir

    SciTech Connect

    Sales, J. S.; Almeida, N. G. de

    2011-06-15

    We study the evolution of superposition states under the influence of a reservoir at zero and finite temperatures in cavity quantum electrodynamics aiming to know how their purity is lost over time. The superpositions studied here are composed of coherent states, orthogonal coherent states, squeezed coherent states, and orthogonal squeezed coherent states, which we introduce to generalize the orthogonal coherent states. For comparison, we also show how the robustness of the superpositions studied here differs from that of a qubit given by a superposition of zero- and one-photon states.

  1. Superresolved imaging in digital holography by superposition of tilted wavefronts.

    PubMed

    Mico, Vicente; Zalevsky, Zeev; García-Martínez, Pascuala; García, Javier

    2006-02-10

    A technique based on superresolution by digital holographic microscopic imaging is presented. We used a two-dimensional (2-D) vertical-cavity surface-emitting laser (VCSEL) array as spherical-wave illumination sources. The method is defined in terms of an incoherent superposition of tilted wavefronts. The tilted spherical wave originating from the 2-D VCSEL elements illuminates the target in transmission mode to obtain a hologram in a Mach-Zehnder interferometer configuration. Superresolved images of the input object above the common lens diffraction limit are generated by sequential recording of the individual holograms and numerical reconstruction of the image with the extended spatial frequency range. We have experimentally tested the approach for a microscope objective with an exact 2-D reconstruction image of the input object. The proposed approach has implementation advantages for applications in biological imaging or the microelectronic industry in which structured targets are being inspected. PMID:16512523

  2. Superposition method for analysis of free-edge stresses

    NASA Technical Reports Server (NTRS)

    Whitcomb, J. D.; Raju, I. S.

    1983-01-01

    Superposition techniques were used to transform the edge stress problem for composite laminates into a more lucid form. By eliminating loads and stresses not contributing to interlaminar stresses, the essential aspects of the edge stress problem are easily recognized. Transformed problem statements were developed for both mechanical and thermal loads. Also, a technique for approximate analysis using a two dimensional plane strain analysis was developed. Conventional quasi-three dimensional analysis was used to evaluate the accuracy of the transformed problems and the approximate two dimensional analysis. The transformed problems were shown to be exactly equivalent to the original problems. The approximate two dimensional analysis was found to predict the interlaminar normal and shear stresses reasonably well.

  3. Sensing Super-Position: Human Sensing Beyond the Visual Spectrum

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2007-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing in natural human sensing. The current implementation of the device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases the perceived image resolution, which is obtained via an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
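
    One plausible reading of such an image-to-sound mapping (a hedged sketch with illustrative parameters, not the implementation described above) scans image columns over time and maps rows to frequencies, with brightness as amplitude:

        import numpy as np

        def image_to_audio(img, duration=1.0, fs=8000, f_lo=200.0, f_hi=4000.0):
            # rows -> frequencies (top row = highest pitch), columns -> time,
            # pixel brightness -> amplitude of the corresponding sinusoid
            n_rows, n_cols = img.shape
            freqs = np.linspace(f_hi, f_lo, n_rows)
            t = np.linspace(0.0, duration, int(fs * duration), endpoint=False)
            col = np.minimum((t / duration * n_cols).astype(int), n_cols - 1)
            audio = np.zeros_like(t)
            for r in range(n_rows):
                audio += img[r, col] * np.sin(2 * np.pi * freqs[r] * t)
            return audio / n_rows

        audio = image_to_audio(np.eye(32))   # a diagonal stripe sweeps down
        print(audio.shape, float(np.abs(audio).max()))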

  4. Spatial properties of coaxial superposition of two coherent Gaussian beams.

    PubMed

    Boubaha, Boualem; Naidoo, Darryl; Godin, Thomas; Fromager, Michael; Forbes, Andrew; Aït-Ameur, Kamel

    2013-08-10

    In this paper, we explore theoretically and experimentally the laser beam shaping ability resulting from the coaxial superposition of two coherent Gaussian beams (GBs). This technique is classified under interferometric laser beam shaping techniques, in contrast with the usual ones based on diffraction. The experimental setup does not involve a two-wave interferometer but uses a spatial light modulator for the generation of the necessary interference term. This avoids the thermal drift occurring in interferometers and gives total flexibility in setting the key parameter of the beam transformation. In particular, we demonstrate the reshaping of a GB into a bottle beam or top-hat beam in the focal plane of a focusing lens. PMID:23938430

  5. SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac

    SciTech Connect

    Sugimoto, S; Inoue, T; Kurokawa, C; Usui, K; Sasai, K; Utsunomiya, S; Ebe, K

    2014-06-01

    Purpose: Dynamic tumor tracking using the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) has become available for cases where respiratory motion is significant. The irradiation accuracy of dynamic tumor tracking has been reported to be excellent. In addition to irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because its multiple phases have to be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the degrees of freedom of the gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using linear transformations between coordinate systems. Methods: The linear transformation matrices between the coordinate systems with and without gimbal swings were constructed using combinations of translation and rotation matrices. The coordinate system where the radiation source is at the origin and the beam axis is along the z axis was adopted. The transformation can be divided into a translation from the radiation source to the gimbal rotation center, the two rotations around the center corresponding to the gimbal swings, and a translation from the gimbal center back to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can be performed as in the case of no gimbal swing. The algorithm was implemented in the treatment planning system PlanUNC (University of North Carolina, NC). The convolution/superposition algorithm was used. The dose calculations with and without gimbal swings were performed for a 3 × 3 cm² field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam. No significant additional time due to the gimbal swing was observed. Conclusions: The dose calculation algorithm for a finite gimbal swing was implemented. The calculation time was moderate.
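
    The transformation chain described (translate to the gimbal rotation centre, apply the two swings, translate back) composes naturally as homogeneous matrices. A hedged sketch under assumed geometry (source at the origin, beam along +z, gimbal centre a distance d downstream; the angles, d and the test point are illustrative) is:

        import numpy as np

        def translation(v):
            m = np.eye(4)
            m[:3, 3] = v
            return m

        def rot_x(a):   # tilt about the x axis
            c, s = np.cos(a), np.sin(a)
            return np.array([[1, 0, 0, 0], [0, c, -s, 0],
                             [0, s, c, 0], [0, 0, 0, 1.0]])

        def rot_y(a):   # pan about the y axis
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, 0, s, 0], [0, 1, 0, 0],
                             [-s, 0, c, 0], [0, 0, 0, 1.0]])

        def gimbal_transform(pan, tilt, d):
            # translate to the gimbal centre, swing, translate back
            return (translation([0, 0, d]) @ rot_x(tilt) @ rot_y(pan)
                    @ translation([0, 0, -d]))

        T = gimbal_transform(np.radians(1.5), np.radians(-0.8), d=100.0)
        point = np.array([0.0, 0.0, 1000.0, 1.0])   # a phantom point (mm)
        print((T @ point)[:3].round(2))   # coordinates with the swing applied

    After applying such a transform to the phantom or patient image, an unmodified no-swing dose engine can be reused, which is the idea reported above.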

  6. Using convolutional decoding to improve time delay and phase estimation in digital communications

    DOEpatents

    Ormesher, Richard C.; Mason, John J.

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  7. There is no MacWilliams identity for convolutional codes. [transmission gain comparison

    NASA Technical Reports Server (NTRS)

    Shearer, J. B.; Mceliece, R. J.

    1977-01-01

    An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.

  8. The uniform continuity of characteristic function from convoluted exponential distribution with stabilizer constant

    NASA Astrophysics Data System (ADS)

    Devianto, Dodi

    2016-02-01

    The convolution of random variables generated from independent and identically distributed exponential distributions with a stabilizer constant is constructed. The characteristic function of this distribution is obtained by using the Laplace-Stieltjes transform. The uniform continuity of the characteristic function of this convolution is then established by analytical methods, using its basic properties.
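
    For the plain convolution of n i.i.d. Exp(λ) variables, S_n = X_1 + ... + X_n (setting the stabilizer constant aside, since its definition is specific to the paper), the characteristic function and the standard uniform-continuity bound read

        \varphi_{S_n}(t) = \left(\frac{\lambda}{\lambda - it}\right)^{n},
        \qquad
        \left|\varphi_{S_n}(t+h) - \varphi_{S_n}(t)\right|
        \le \mathbb{E}\left|e^{ihS_n} - 1\right| \xrightarrow[h \to 0]{} 0,

    and since the bound is independent of t, the convergence is uniform, which is exactly the continuity property in question.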

  9. SU-E-T-607: An Experimental Validation of Gamma Knife Based Convolution Algorithm On Solid Acrylic Anthropomorphic Phantom

    SciTech Connect

    Gopishankar, N; Bisht, R K

    2014-06-01

    Purpose: To perform dosimetric evaluation of the convolution algorithm in Gamma Knife (Perfexion model) using a solid acrylic anthropomorphic phantom. Methods: An in-house developed acrylic phantom with an ion chamber insert was used for this purpose. The middle insert was designed to fit the ion chamber from the top (head) as well as from the bottom (neck) of the phantom, hence measurements were done at two different positions simultaneously. A Leksell frame fixed to the phantom simulated patient treatment. Prior to the dosimetric study, Hounsfield units and the electron density of the acrylic material were incorporated into the calibration curve in the TPS for the convolution algorithm calculation. A CT scan of the phantom with the ion chamber (PTW Freiburg, 0.125 cc) was obtained with the following scanning parameters: tube voltage 110 kV, slice thickness 1 mm, and FOV 240 mm. Three separate single-shot plans were generated in the LGP TPS (version 10.1) with collimators of 16 mm, 8 mm, and 4 mm, respectively, for both ion chamber positions. Both TMR10 and convolution-algorithm-based planning (CABP) were used for dose calculation. A dose of 6 Gy at the 100% isodose was prescribed at the centre of the ion chamber visible in the CT scan. The phantom with the ion chamber was positioned on the treatment couch for dose delivery. Results: The ion chamber measured dose was 5.98 Gy for the 16 mm collimator shot plan, with less than 1% deviation for the convolution algorithm, whereas with TMR10 the measured dose was 5.6 Gy. For the 8 mm and 4 mm collimator plans, doses of merely 3.86 Gy and 2.18 Gy, respectively, were delivered at the TPS-calculated time for CABP. Conclusion: CABP is expected to accurately predict the delivery time for all collimators, but significant variation in measured dose was observed for the 8 mm and 4 mm collimators, which may be due to a collimator size effect. Metal artifacts caused by the pins and frame on the CT scan may also play a role in the misinterpretation of CABP. The study carried out requires further investigation.

  10. A wave superposition method formulated in digital acoustic space

    NASA Astrophysics Data System (ADS)

    Hwang, Yong-Sin

    In this thesis, a new formulation of the Wave Superposition method is proposed wherein the conventional mesh approach is replaced by a simple 3-D digital work space that easily accommodates shape optimization for minimizing or maximizing radiation efficiency. As sound quality is in demand in almost all product designs, and also because of fierce competition between product manufacturers, a fast and accurate computational method for shape optimization is always desired. Because the conventional Wave Superposition method relies solely on mesh geometry, it cannot accommodate the fast shape changes needed in the design stage of a consumer product or machinery, where many iterations of shape changes are required. Since the use of a mesh hinders easy shape changes, a new approach for representing geometry is introduced by constructing a uniform lattice in a 3-D digital work space. A voxel (a portmanteau of the words volumetric and pixel) is essentially a volume element defined by the uniform lattice, and does not require separate connectivity information as a mesh element does. In the presented method, geometry is represented with voxels that can easily adapt to shape changes; it is therefore more suitable for shape optimization. The new method was validated by computing the radiated sound power of structures with simple and complex geometries and complex mode shapes. It was shown that matching volume velocity is a key component of an accurate analysis. A sensitivity study showed that at least 6 elements per acoustic wavelength are required, and a complexity study showed a minimal reduction in computational time.

  11. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons.

    PubMed

    Sanchez-Parcerisa, D; Cortés-Giraldo, M A; Dolney, D; Kondrla, M; Fager, M; Carabe, A

    2016-02-21

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte Carlo (MC) method and can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach, including a novel correction for secondary protons, with pencil-beam-type LET kernels. These LET kernels were then inserted into the proton convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV/μm) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include relative biological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process. PMID:26840945

  12. The effect of whitening transformation on pooling operations in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua

    2015-12-01

    Convolutional autoencoders (CAEs) are unsupervised feature extractors for high-resolution images. In the pre-processing step, whitening transformation has widely been adopted to remove redundancy by making adjacent pixels less correlated. Pooling is a biologically inspired operation to reduce the resolution of feature maps and achieve spatial invariance in convolutional neural networks. Conventionally, pooling methods are mainly determined empirically in most previous work. Therefore, our main purpose is to study the relationship between whitening processing and pooling operations in convolutional autoencoders for image classification. We propose an adaptive pooling approach based on the concepts of information entropy to test the effect of whitening on pooling in different conditions. Experimental results on benchmark datasets indicate that the performance of pooling strategies is associated with the distribution of feature activations, which can be affected by whitening processing. This provides guidance for the selection of pooling methods in convolutional autoencoders and other convolutional neural networks.
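
    The entropy-guided choice sketched below is a hypothetical illustration of the adaptive pooling idea in NumPy, not the authors' implementation; the bin count, the threshold, and the high-entropy-means-average rule are all assumptions.

      import numpy as np

      def region_entropy(block, bins=16):
          # Shannon entropy of the activation histogram in one pooling block.
          hist, _ = np.histogram(block, bins=bins)
          p = hist / hist.sum()
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      def adaptive_pool(fmap, size=2, threshold=2.0):
          # Pool each size x size block with max- or average-pooling,
          # chosen by the entropy of the block's activations.
          h, w = fmap.shape
          out = np.empty((h // size, w // size))
          for i in range(0, h - h % size, size):
              for j in range(0, w - w % size, size):
                  block = fmap[i:i+size, j:j+size]
                  # High entropy: activations spread out -> average-pool;
                  # low entropy: a few strong responses -> max-pool.
                  out[i // size, j // size] = (block.mean()
                                               if region_entropy(block) > threshold
                                               else block.max())
          return out

      fmap = np.random.rand(8, 8)       # stand-in for a CAE feature map
      print(adaptive_pool(fmap).shape)  # (4, 4)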

  13. Generalized Viterbi algorithms for error detection with convolutional codes

    NASA Astrophysics Data System (ADS)

    Seshadri, N.; Sundberg, C.-E. W.

    Presented are two generalized Viterbi algorithms (GVAs) for the decoding of convolutional codes: a parallel algorithm that simultaneously identifies the L best estimates of the transmitted sequence, and a serial algorithm that identifies the l-th best estimate using knowledge of the previously found l-1 estimates. These algorithms are applied to combined speech and channel coding systems, concatenated codes, trellis-coded modulation, partial response (continuous-phase modulation), and hybrid ARQ (automatic repeat request) schemes. As an example, for a concatenated code, more than 2 dB is gained by the use of the GVA with L = 3 over the Viterbi algorithm for block error rates less than 10^-2. The channel is a Rayleigh fading channel.

  14. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.
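
    A minimal sketch of the object-domain/projection-domain iteration, assuming scikit-image's radon/iradon for forward projection and the convolution (FBP) reconstruction step; the visibility mask and the fixed iteration count (standing in for the paper's empirical termination criterion) are illustrative.

      import numpy as np
      from skimage.transform import radon, iradon  # assumed available

      def iterative_convolution_ct(sinogram, mask, theta, n_iter=10):
          # Rays blocked by the opaque object (mask == False) are
          # re-estimated from the current reconstruction, while measured
          # rays are re-imposed on every pass.
          filled = np.where(mask, sinogram, 0.0)
          for _ in range(n_iter):
              recon = iradon(filled, theta=theta)   # convolution (FBP) step
              reproj = radon(recon, theta=theta)    # forward projection
              filled = np.where(mask, sinogram, reproj)
          return recon

      # Toy demo: a square phantom with one angular wedge of rays missing.
      theta = np.linspace(0.0, 180.0, 180, endpoint=False)
      phantom = np.zeros((64, 64))
      phantom[20:40, 25:45] = 1.0
      sino = radon(phantom, theta=theta)
      mask = np.ones_like(sino, dtype=bool)
      mask[:, 60:80] = False                        # blocked projections
      recon = iterative_convolution_ct(sino, mask, theta)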

  15. Plane-wave decomposition by spherical-convolution microphone array

    NASA Astrophysics Data System (ADS)

    Rafaely, Boaz; Park, Munhum

    2001-05-01

    Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.

  16. Visualization of vasculature with convolution surfaces: method, validation and evaluation.

    PubMed

    Oeltze, Steffen; Preim, Bernhard

    2005-04-01

    We present a method for visualizing vasculature based on clinical computed tomography or magnetic resonance data. The vessel skeleton as well as the diameter information per voxel serve as input. Our method adheres to these data, while producing smooth transitions at branchings and closed, rounded ends by means of convolution surfaces. We examine the filter design with respect to irritating bulges, unwanted blending and the correct visualization of the vessel diameter. The method has been applied to a large variety of anatomic trees. We discuss the validation of the method by means of a comparison to other visualization methods. Surface distance measures are carried out to perform a quantitative validation. Furthermore, we present the evaluation of the method which has been accomplished on the basis of a survey by 11 radiologists and surgeons. PMID:15822811

  17. Finding the complete path and weight enumerators of convolutional codes

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I.

    1990-01-01

    A method for obtaining the complete path enumerator T(D, L, I) of a convolutional code is described. A system of algebraic equations is solved, using a new algorithm for computing determinants, to obtain T(D, L, I) for the (7,1/2) NASA standard code. Generating functions, derived from T(D, L, I) are used to upper bound Viterbi decoder error rates. This technique is currently feasible for constraint length K less than 10 codes. A practical, fast algorithm is presented for computing the leading nonzero coefficients of the generating functions used to bound the performance of constraint length K less than 20 codes. Code profiles with about 50 nonzero coefficients are obtained with this algorithm for the experimental K = 15, rate 1/4, code in the Galileo mission and for the proposed K = 15, rate 1/6, 2-dB code.

  18. Drug-Drug Interaction Extraction via Convolutional Neural Networks

    PubMed Central

    Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong

    2016-01-01

    Drug-drug interaction (DDI) extraction as a typical relation extraction task in natural language processing (NLP) has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNN), a robust machine learning method which almost does not need manually defined features, has exhibited great potential for many NLP tasks. It is worth employing CNN for DDI extraction, which has never been investigated. We proposed a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that CNN is a good choice for DDI extraction. The CNN-based DDI extraction method achieves an F-score of 69.75%, which outperforms the existing best performing method by 2.75%. PMID:26941831

  19. Highly parallel vector visualization using line integral convolution

    SciTech Connect

    Cabral, B.; Leedom, C.

    1995-12-01

    Line Integral Convolution (LIC) is an effective imaging operator for visualizing large vector fields. It works by blurring an input image along local vector field streamlines yielding an output image. LIC is highly parallelizable because it uses only local read-sharing of input data and no write-sharing of output data. Both coarse- and fine-grained implementations have been developed. The coarse-grained implementation uses a straightforward row-tiling of the vector field to parcel out work to multiple CPUs. The fine-grained implementation uses a series of image warps and sums to compute the LIC algorithm across the entire vector field at once. This is accomplished by novel use of high-performance graphics hardware texture mapping and accumulation buffers.
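
    The core LIC operation is compact enough to sketch serially in NumPy; the streamline tracing below uses fixed-length Euler steps and is only a toy stand-in for the coarse- and fine-grained parallel versions the paper describes.

      import numpy as np

      def lic(vx, vy, noise, L=20, h=0.5):
          # Average the noise texture along a streamline traced forward
          # and backward from each pixel (simple Euler integration).
          H, W = noise.shape
          out = np.zeros_like(noise)
          for i in range(H):
              for j in range(W):
                  total, count = 0.0, 0
                  for sign in (+1.0, -1.0):        # forward and backward
                      x, y = float(j), float(i)
                      for _ in range(L):
                          ix, iy = int(round(x)), int(round(y))
                          if not (0 <= ix < W and 0 <= iy < H):
                              break
                          total += noise[iy, ix]
                          count += 1
                          u, v = vx[iy, ix], vy[iy, ix]
                          n = np.hypot(u, v)
                          if n == 0:
                              break
                          x += sign * h * u / n
                          y += sign * h * v / n
                  out[i, j] = total / max(count, 1)
          return out

      # Example: a circular flow visualized over white noise.
      H, W = 64, 64
      yy, xx = np.mgrid[0:H, 0:W]
      vx, vy = -(yy - H / 2), (xx - W / 2)
      img = lic(vx, vy, np.random.rand(H, W))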

  20. Enhanced Line Integral Convolution with Flow Feature Detection

    NASA Technical Reports Server (NTRS)

    Lane, David; Okada, Arthur

    1996-01-01

    The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain. The method produces a flow texture image based on the input velocity field defined in the domain. Because of the nature of the algorithm, the texture image tends to be blurry. This sometimes makes it difficult to identify boundaries where flow separation and reattachments occur. We present techniques to enhance LIC texture images and use colored texture images to highlight flow separation and reattachment boundaries. Our techniques have been applied to several flow fields defined in 3D curvilinear multi-block grids and scientists have found the results to be very useful.

  1. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
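
    As a hedged illustration of fitting a small, spatially constrained restoration kernel, the sketch below estimates a 3x3 kernel by least squares from a synthetic scene/degraded pair. The paper derives its kernel analytically from an end-to-end system model, so this empirical variant is only an assumption-laden analogue of the same minimum-mean-square idea.

      import numpy as np
      from scipy.ndimage import correlate  # assumed available

      def fit_small_kernel(scene, degraded, size=3):
          # Least-squares fit of a size x size kernel so that applying it
          # to the degraded image approximates the scene.
          r = size // 2
          H, W = scene.shape
          rows, rhs = [], []
          for i in range(r, H - r):
              for j in range(r, W - r):
                  rows.append(degraded[i-r:i+r+1, j-r:j+r+1].ravel())
                  rhs.append(scene[i, j])
          k, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
          return k.reshape(size, size)

      scene = np.random.rand(64, 64)
      psf = np.ones((3, 3)) / 9.0                       # toy blur (symmetric)
      degraded = correlate(scene, psf, mode='nearest')
      degraded += 0.01 * np.random.randn(*degraded.shape)  # sensor noise
      k = fit_small_kernel(scene, degraded)
      restored = correlate(degraded, k, mode='nearest')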

  2. Deep convolutional neural networks for ATR from SAR imagery

    NASA Astrophysics Data System (ADS)

    Morgan, David A. E.

    2015-05-01

    Deep architectures for classification and representation learning have recently attracted significant attention within academia and industry, with many impressive results across a diverse collection of problem sets. In this work we consider the specific application of Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) data from the MSTAR public release data set. The classification performance achieved using a Deep Convolutional Neural Network (CNN) on this data set was found to be competitive with existing methods considered to be state-of-the-art. Unlike most existing algorithms, this approach can learn discriminative feature sets directly from training data instead of requiring pre-specification or pre-selection by a human designer. We show how this property can be exploited to efficiently adapt an existing classifier to recognise a previously unseen target and discuss potential practical applications.

  3. Invariant Descriptor Learning Using a Siamese Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Chen, L.; Rottensteiner, F.; Heipke, C.

    2016-06-01

    In this paper we describe the learning of a descriptor based on the Siamese Convolutional Neural Network (CNN) architecture and evaluate our results on a standard patch comparison dataset. The descriptor learning architecture is composed of an input module, a Siamese CNN descriptor module, and a cost computation module based on the L2 norm. The cost function we use pulls the descriptors of matching patches close to each other in feature space while pushing the descriptors of non-matching pairs away from each other. Compared to related work, we optimize the training parameters by combining a moving-average strategy for gradients with Nesterov's Accelerated Gradient. Experiments show that our learned descriptor performs well and achieves state-of-the-art results in terms of the false positive rate at a 95% recall rate on standard benchmark datasets.

  4. Asymptotic expansions of Mellin convolution integrals: An oscillatory case

    NASA Astrophysics Data System (ADS)

    López, José L.; Pagola, Pedro

    2010-01-01

    In a recent paper [J.L. López, Asymptotic expansions of Mellin convolution integrals, SIAM Rev. 50 (2) (2008) 275-293], we presented a new, very general and simple method for deriving asymptotic expansions of Mellin convolution integrals for small x. It contains Watson's lemma and other classical methods, Mellin transform techniques, McClure and Wong's distributional approach, and the method of analytic continuation used in this approach as particular cases. In this paper we generalize that idea to the case of oscillatory kernels, with an oscillation parameter c ∈ R, and we give a method as simple as the one given in the above cited reference for the case c = 0. We show that McClure and Wong's distributional approach for oscillatory kernels and the summability method for oscillatory integrals are particular cases of this method. Some examples are given as illustration.

  5. Convolutional Neural Networks for patient-specific ECG classification.

    PubMed

    Kiranyaz, Serkan; Ince, Turker; Hamila, Ridha; Gabbouj, Moncef

    2015-08-01

    We propose a fast and accurate patient-specific electrocardiogram (ECG) classification and monitoring system using an adaptive implementation of 1D Convolutional Neural Networks (CNNs) that can fuse feature extraction and classification into a unified learner. In this way, a dedicated CNN will be trained for each patient by using relatively small common and patient-specific training data and thus it can also be used to classify long ECG records such as Holter registers in a fast and accurate manner. Alternatively, such a solution can conveniently be used for real-time ECG monitoring and early alert system on a light-weight wearable device. The experimental results demonstrate that the proposed system achieves a superior classification performance for the detection of ventricular ectopic beats (VEB) and supraventricular ectopic beats (SVEB). PMID:26736826
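
    A minimal 1D CNN of this flavor can be written in a few lines of PyTorch. The layer counts, kernel sizes, and five-class output below are illustrative guesses, not the paper's adaptive, patient-specific architecture.

      import torch
      import torch.nn as nn

      class ECGNet(nn.Module):
          # Fuses feature extraction (convolutions) and classification
          # (fully connected layers) into one learner, in the spirit of
          # the paper; sizes are assumptions.
          def __init__(self, n_classes=5):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(1, 16, kernel_size=9), nn.ReLU(), nn.MaxPool1d(2),
                  nn.Conv1d(16, 32, kernel_size=9), nn.ReLU(), nn.MaxPool1d(2),
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(), nn.LazyLinear(32), nn.ReLU(),
                  nn.Linear(32, n_classes),
              )

          def forward(self, x):       # x: (batch, 1, beat_samples)
              return self.classifier(self.features(x))

      beats = torch.randn(8, 1, 128)  # eight beats of 128 samples each
      logits = ECGNet()(beats)        # (8, 5) class scores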

  6. Drug-Drug Interaction Extraction via Convolutional Neural Networks.

    PubMed

    Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong

    2016-01-01

    Drug-drug interaction (DDI) extraction as a typical relation extraction task in natural language processing (NLP) has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNN), a robust machine learning method which almost does not need manually defined features, has exhibited great potential for many NLP tasks. It is worth employing CNN for DDI extraction, which has never been investigated. We proposed a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that CNN is a good choice for DDI extraction. The CNN-based DDI extraction method achieves an F-score of 69.75%, which outperforms the existing best performing method by 2.75%. PMID:26941831

  7. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.

    PubMed

    He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian

    2015-09-01

    Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 × 224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 × faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition. PMID:26353135
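
    The fixed-length property of spatial pyramid pooling is easy to demonstrate: max-pool a (C, H, W) feature map over 1x1, 2x2, and 4x4 grids and concatenate the results. A NumPy sketch (pyramid levels assumed):

      import numpy as np

      def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
          # Output length is C * sum(n*n over levels), independent of H, W.
          C, H, W = fmap.shape
          pooled = []
          for n in levels:
              hs = np.linspace(0, H, n + 1).astype(int)
              ws = np.linspace(0, W, n + 1).astype(int)
              for a in range(n):
                  for b in range(n):
                      cell = fmap[:, hs[a]:hs[a+1], ws[b]:ws[b+1]]
                      pooled.append(cell.max(axis=(1, 2)))
          return np.concatenate(pooled)

      # Two different input sizes, one output length: 8 * (1 + 4 + 16) = 168.
      print(spatial_pyramid_pool(np.random.rand(8, 13, 17)).shape)  # (168,)
      print(spatial_pyramid_pool(np.random.rand(8, 31, 9)).shape)   # (168,)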

  8. Digital simulation of electromagnetic wave propagation in a multiconductor transmission system using the superposition principle and Hartley transform

    SciTech Connect

    Mahmutcehajic, R. . Faculty of Electrical Engineering); Babic, S. . Faculty of Electrical Engineering); Gacanovic, R.; Carsimamovic, S. )

    1993-07-01

    A method for calculating electromagnetic transients in multiconductor transmission systems using the superposition principle and the Hartley transform is developed in this paper. The method takes into account all frequency-dependent parameters of the transmission system. First, the impulse responses of a transmission system are obtained in the actual phase domain using a Hartley transform method. Then, the impulse responses are included in a transient calculation using the superposition principle. The Hartley transform has not previously been used in this field, so its properties and advantages are considered in comparison with the more usual transform methods. The new method is applied to different transmission systems (underground cables and overhead lines), and the results are compared with those calculated by conventional methods and with field tests. The accuracy of the method is found to be good. The method can therefore serve as a very efficient alternative to the existing approximate time-domain models when all frequency-dependent parameters have to be taken into account.
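
    The Hartley transform itself is easy to compute from the FFT: since the cas kernel is cos + sin, the discrete Hartley transform of a real sequence satisfies H[k] = Re X[k] - Im X[k], where X is the DFT. A small NumPy sketch of this standard identity (not the paper's transient-calculation code):

      import numpy as np

      def dht(x):
          # Discrete Hartley transform via the FFT:
          # H[k] = sum x[n] * cas(2*pi*k*n/N) = Re(X[k]) - Im(X[k]).
          X = np.fft.fft(x)
          return X.real - X.imag

      def idht(h):
          # The DHT is its own inverse up to a factor 1/N.
          return dht(h) / len(h)

      x = np.random.rand(64)  # e.g. a sampled impulse response
      assert np.allclose(idht(dht(x)), x)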

  9. Superpositions of Free Electron Vortices and Measurement of Matter Wave Gouy Phase

    NASA Astrophysics Data System (ADS)

    McMorran, Benjamin; Harvey, Tyler; Pierce, Jordan; Linck, Martin

    2014-03-01

    We demonstrate superpositions of free-electron matter-wave orbital states using nanofabricated diffraction holograms. The orbital superposition comprises an electron beam that is a coherent mixture of two overlapped, co-propagating vortex beam modes with different topological charge. Whereas a pure-mode electron vortex beam forms an annular spot when projected onto an imaging detector, the superposition has an intensity profile that is broken into azimuthal lobes; the number of lobes is given by the absolute difference in topological charge between the two orbital components. We created superpositions of vortices with various topological charges, from m_l = 0 to 15. We use these superposition states to measure the Gouy phase of matter waves. We discuss the possibility of using these beams to measure magnetic fields. Support from University of Oregon CAMCOR and an LBNL LDRD grant.

  10. Scrambled coherent superposition for enhanced optical fiber communication in the nonlinear transmission regime.

    PubMed

    Liu, Xiang; Chandrasekhar, S; Winzer, P J; Chraplyvy, A R; Tkach, R W; Zhu, B; Taunay, T F; Fishteyn, M; DiGiovanni, D J

    2012-08-13

    Coherent superposition of light waves has long been used in various fields of science, and recent advances in digital coherent detection and space-division multiplexing have enabled the coherent superposition of information-carrying optical signals to achieve better communication fidelity on amplified-spontaneous-noise limited communication links. However, fiber nonlinearity introduces highly correlated distortions on identical signals and diminishes the benefit of coherent superposition in the nonlinear transmission regime. Here we experimentally demonstrate that through coordinated scrambling of signal constellations at the transmitter, together with appropriate unscrambling at the receiver, the full benefit of coherent superposition is retained in the nonlinear transmission regime of a space-diversity fiber link based on an innovatively engineered multi-core fiber. This scrambled coherent superposition may provide the flexibility of trading communication capacity for performance in future optical fiber networks, and may open new possibilities in high-performance and secure optical communications. PMID:23038549

  11. Advanced superposition methods for high speed turbopump vibration analysis

    NASA Technical Reports Server (NTRS)

    Nielson, C. E.; Campany, A. D.

    1981-01-01

    The small, high pressure Mark 48 liquid hydrogen turbopump was analyzed and dynamically tested to determine the cause of high speed vibration at an operating speed of 92,400 rpm. This approaches the design point operating speed of 95,000 rpm. The initial dynamic analysis in the design stage and subsequent further analysis of the rotor only dynamics failed to predict the vibration characteristics found during testing. An advanced procedure for dynamics analysis was used in this investigation. The procedure involves developing accurate dynamic models of the rotor assembly and casing assembly by finite element analysis. The dynamically instrumented assemblies are independently rap tested to verify the analytical models. The verified models are then combined by modal superposition techniques to develop a completed turbopump model where dynamic characteristics are determined. The results of the dynamic testing and analysis obtained are presented and methods of moving the high speed vibration characteristics to speeds above the operating range are recommended. Recommendations for use of these advanced dynamic analysis procedures during initial design phases are given.

  12. Large quantum superpositions of a nanoparticle immersed in superfluid helium

    NASA Astrophysics Data System (ADS)

    Lychkovskiy, O.

    2016-06-01

    Preparing and detecting spatially extended quantum superpositions of a massive object comprises an important fundamental test of quantum theory. These quantum states are extremely fragile and tend to quickly decay into incoherent mixtures due to the environmental decoherence. Experimental setups considered up to date address this threat in a conceptually straightforward way—by eliminating the environment, i.e., by isolating an object in a sufficiently high vacuum. We show that another option exists: decoherence is suppressed in the presence of a strongly interacting environment if this environment is superfluid. Indeed, as long as an object immersed in a pure superfluid at zero temperature moves with a velocity below the critical one, it does not create, absorb, or scatter any excitations of the superfluid. Hence, in this idealized situation the decoherence is absent. In reality the decoherence will be present due to thermal excitations of the superfluid and impurities contaminating the superfluid. We examine various decoherence channels in the superfluid

  13. Solar Supergranulation Revealed as a Superposition of Traveling Waves

    NASA Technical Reports Server (NTRS)

    Gizon, L.; Duvall, T. L., Jr.; Schou, J.; Oegerle, William (Technical Monitor)

    2002-01-01

    40 years ago two new solar phenomena were described: supergranulation and the five-minute solar oscillations. While the oscillations have since been explained and exploited to determine the properties of the solar interior, the supergranulation has remained unexplained. The supergranules, appearing as convective-like cellular patterns of horizontal outward flow with a characteristic diameter of 30 Mm and an apparent lifetime of 1 day, have puzzling properties, including their apparent superrotation and the minute temperature variations over the cells. Using a 60-day sequence of data from the MDI (Michelson-Doppler Imager) instrument onboard the SOHO (Solar and Heliospheric Observatory) spacecraft, we show that the supergranulation pattern is formed by a superposition of traveling waves with periods of 5-10 days. The wave power is anisotropic with excess power in the direction of rotation and toward the equator, leading to spurious rotation rates and north-south flows as derived from correlation analyses. These newly discovered waves could play an important role in maintaining differential rotation in the upper convection zone by transporting angular momentum towards the equator.

  14. An Application of Linear Superposition to Estimating Lattice-Physics Parameters

    SciTech Connect

    Zheng Jie; Guo Tong; Maldonado, G. Ivan

    2001-02-15

    A linear superposition model (LSM) for the speedy and accurate estimation of lattice-physics parameters during within-bundle 'pin-by-pin' loading optimization calculations of light water reactor nuclear fuel assemblies has been developed. The LSM has been implemented into the FORMOSA-L code, and typical results show that the run-time requirements can be reduced by at least an order of magnitude relative to performing direct lattice-physics evaluations with the CPM-2 or CASMO-3 code. Moreover, the speedups noted include all overhead expenses associated with the direct lattice-physics calculations required to construct the LSM sensitivity libraries. Additionally, accuracy improvements to the LSM are achieved by inclusion of higher-order cross terms and via quadratic interpolation when perturbing continuous variables. Also, it is shown that the errors generated by this first-order accurate technique can be kept well under control by treating material and spatial shuffles separately during optimizations. The results obtained indicate that the LSM can effectively substitute for direct lattice-physics evaluations throughout the entire optimization process without noticeable loss of fidelity. Finally, both synchronous and asynchronous implementations of parallel computing via the remote-procedure-call approach have been studied to further speed up the creation of LSM sensitivity libraries within FORMOSA-L.

  15. Model for the fast estimation of basis set superposition error in biomolecular systems

    PubMed Central

    Faver, John C.; Zheng, Zheng; Merz, Kenneth M.

    2011-01-01

    Basis set superposition error (BSSE) is a significant contributor to errors in quantum-based energy functions, especially for large chemical systems with many molecular contacts such as folded proteins and protein-ligand complexes. While the counterpoise method has become a standard procedure for correcting intermolecular BSSE, most current approaches to correcting intramolecular BSSE are simply fragment-based analogues of the counterpoise method which require many (two times the number of fragments) additional quantum calculations in their application. We propose that magnitudes of both forms of BSSE can be quickly estimated by dividing a system into interacting fragments, estimating each fragment's contribution to the overall BSSE with a simple statistical model, and then propagating these errors throughout the entire system. Such a method requires no additional quantum calculations, but rather only an analysis of the system's interacting fragments. The method is described herein and is applied to a protein-ligand system, a small helical protein, and a set of native and decoy protein folds. PMID:22010701

  16. Optimal convolution SOR acceleration of waveform relaxation with application to semiconductor device simulation

    NASA Technical Reports Server (NTRS)

    Reichelt, Mark

    1993-01-01

    In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
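
    For orientation, the sketch below implements the simpler frequency-independent waveform SOR that the paper uses as its baseline, on a toy linear system x' = Ax + b: each sweep re-solves every component's scalar ODE over the whole time interval, then over-relaxes the resulting waveform. The paper's contribution, convolving the relaxation update with an optimal kernel, is not reproduced here.

      import numpy as np

      def waveform_sor(A, b, x0, T=1.0, nt=200, sweeps=30, omega=1.2):
          # Frequency-independent waveform SOR with backward Euler on
          # each component; other components are held at their current
          # iterate (Gauss-Seidel style, updated in place).
          n = len(x0)
          dt = T / nt
          x = np.tile(np.asarray(x0, float)[:, None], (1, nt + 1))
          for _ in range(sweeps):
              for i in range(n):
                  xi = np.empty(nt + 1)
                  xi[0] = x0[i]
                  for k in range(nt):
                      coupling = A[i] @ x[:, k + 1] - A[i, i] * x[i, k + 1]
                      xi[k + 1] = (xi[k] + dt * (coupling + b[i])) \
                                  / (1 - dt * A[i, i])
                  x[i] = x[i] + omega * (xi - x[i])  # over-relax the waveform
          return x

      A = np.array([[-2.0, 1.0], [1.0, -3.0]])
      b = np.array([1.0, 0.0])
      x = waveform_sor(A, b, x0=[0.0, 0.0])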

  17. A reciprocal space approach for locating symmetry elements in Patterson superposition maps

    SciTech Connect

    Hendrixson, T.

    1990-09-21

    A method for determining the location and possible existence of symmetry elements in Patterson superposition maps has been developed. A comparison of the original superposition map and a superposition map operated on by the symmetry element gives possible translations to the location of the symmetry element. A reciprocal space approach using structure factor-like quantities obtained from the Fourier transform of the superposition function is then used to determine the "best" location of the symmetry element. Constraints based upon the space group requirements are also used as a check on the locations. The locations of the symmetry elements are used to modify the Fourier transform coefficients of the superposition function to give an approximation of the structure factors, which are then refined using the EG relation. The analysis of several compounds using this method is presented. Reciprocal space techniques for locating multiple images in the superposition function are also presented, along with methods to remove the effect of multiple images in the Fourier transform coefficients of the superposition map. In addition, crystallographic studies of the extended chain structure of (NHC₅H₅)SbI₄ and of the twinning method of the orthorhombic form of the high-Tc superconductor YBa₂Cu₃O₇₋ₓ are presented. 54 refs.

  18. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.

    PubMed

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681

  19. Method for Viterbi decoding of large constraint length convolutional codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)

    1988-01-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipelined VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a number and K is the constraint length. The selected path at the end of each NK interval is then taken from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the interval NK, and to read out the stored branch metrics of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message in order to select the path to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
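
    A toy hard-decision version of the block trace-back idea, for a small K = 3, rate-1/2 code (the method targets much larger constraint lengths; the generators, block length, and reset-after-release bookkeeping here are illustrative):

      import numpy as np

      G = (0b111, 0b101)   # rate-1/2, K=3 generators (7, 5) octal
      N_STATES = 4         # 2**(K-1) encoder states

      def outputs(state, bit):
          # Two encoder output bits for input `bit` leaving `state`.
          reg = (bit << 2) | state
          return [bin(reg & g).count('1') & 1 for g in G]

      def viterbi_blockwise(rx_pairs, block=12):
          # Release decisions by trace-back every `block` symbols instead
          # of storing the whole-message trellis; slightly suboptimal when
          # survivor paths have not yet merged.  Tail bits after the last
          # full block are ignored in this sketch.
          metrics = np.full(N_STATES, np.inf)
          metrics[0] = 0.0
          paths = [[] for _ in range(N_STATES)]
          decoded = []
          for t, (r0, r1) in enumerate(rx_pairs, 1):
              new_metrics = np.full(N_STATES, np.inf)
              new_paths = [None] * N_STATES
              for s in range(N_STATES):
                  if metrics[s] == np.inf:
                      continue
                  for bit in (0, 1):
                      ns = (bit << 1) | (s >> 1)      # shift-register update
                      o0, o1 = outputs(s, bit)
                      m = metrics[s] + (o0 != r0) + (o1 != r1)
                      if m < new_metrics[ns]:
                          new_metrics[ns], new_paths[ns] = m, paths[s] + [bit]
              metrics, paths = new_metrics, new_paths
              if t % block == 0:                      # block trace-back
                  decoded.extend(paths[int(np.argmin(metrics))])
                  paths = [[] for _ in range(N_STATES)]
          return decoded

      # Noiseless all-zero codeword decodes to the all-zero message.
      print(viterbi_blockwise([(0, 0)] * 24)[:8])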

  20. Deep convolutional neural networks for classifying GPR B-scans

    NASA Astrophysics Data System (ADS)

    Besaw, Lance E.; Stimac, Philip J.

    2015-05-01

    Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on the field operatives. Recent advancements in machine learning, specifically deep learning artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antennae, handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.

  1. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields

    PubMed Central

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681

  2. Innervation of the renal proximal convoluted tubule of the rat

    SciTech Connect

    Barajas, L.; Powers, K. )

    1989-12-01

    Experimental data suggest the proximal tubule as a major site of neurogenic influence on tubular function. The functional and anatomical axial heterogeneity of the proximal tubule prompted this study of the distribution of innervation sites along the early, mid, and late proximal convoluted tubule (PCT) of the rat. Serial section autoradiograms, with tritiated norepinephrine serving as a marker for monoaminergic nerves, were used in this study. Freehand clay models and graphic reconstructions of proximal tubules permitted a rough estimation of the location of the innervation sites along the PCT. In the subcapsular nephrons, the early PCT (first third) was devoid of innervation sites with most of the innervation occurring in the mid (middle third) and in the late (last third) PCT. Innervation sites were found in the early PCT in nephrons located deeper in the cortex. In juxtamedullary nephrons, innervation sites could be observed on the PCT as it left the glomerulus. This gradient of PCT innervation can be explained by the different tubulovascular relationships of nephrons at different levels of the cortex. The absence of innervation sites in the early PCT of subcapsular nephrons suggests that any influence of the renal nerves on the early PCT might be due to an effect of neurotransmitter released from renal nerves reaching the early PCT via the interstitium and/or capillaries.

  3. Toward an optimal convolutional neural network for traffic sign recognition

    NASA Astrophysics Data System (ADS)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    Convolutional Neural Networks (CNNs) beat human performance on the German Traffic Sign Benchmark competition. Both the winner and the runner-up teams trained CNNs to recognize 43 traffic signs. However, neither network is computationally efficient: both have many free parameters and use computationally expensive activation functions. In this paper, we propose a new architecture that reduces the number of parameters by 27% and 22% compared with the two networks. Furthermore, our network uses the Leaky Rectified Linear Unit (ReLU) as the activation function, which needs only a few operations to produce its result. Specifically, compared with the hyperbolic tangent and rectified sigmoid activation functions utilized in the two networks, Leaky ReLU needs only one multiplication operation, which makes it computationally much more efficient than the other two functions. Our experiments on the German Traffic Sign Benchmark dataset show a 0.6% improvement on the best reported classification accuracy while reducing the overall number of parameters by 85% compared with the winning network in the competition.
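
    The efficiency argument is visible in a one-line definition: Leaky ReLU costs at most one multiplication per element, with no exponentials. A NumPy sketch (the slope value is an assumption):

      import numpy as np

      def leaky_relu(x, alpha=0.01):
          # One multiplication for negative entries, identity otherwise;
          # no exponentials, unlike tanh- or sigmoid-based activations.
          return np.where(x >= 0, x, alpha * x)

      print(leaky_relu(np.array([-2.0, 0.0, 3.0])))  # [-0.02  0.    3.  ]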

  4. Multi-modal vertebrae recognition using Transformed Deep Convolution Network.

    PubMed

    Cai, Yunliang; Landis, Mark; Laidley, David T; Kornecki, Anat; Lum, Andrea; Li, Shuo

    2016-07-01

    Automatic vertebra recognition, including the identification of vertebra locations and names in multiple image modalities, is in high demand in spinal clinical diagnosis, where large amounts of imaging data from various modalities are frequently and interchangeably used. However, the recognition is challenging due to the variations in MR/CT appearance and in the shape and pose of the vertebrae. In this paper, we propose a method for multi-modal vertebra recognition using a novel deep learning architecture called the Transformed Deep Convolution Network (TDCN). This new architecture can fuse image features from different modalities without supervision and automatically rectify the pose of the vertebra. The fusion of MR and CT image features improves the discriminativity of the feature representation and enhances the invariance of the vertebra pattern, which allows us to automatically process images of different contrasts, resolutions, and protocols, even with different sizes and orientations. The feature fusion and pose rectification are naturally incorporated in a multi-layer deep learning network. Experimental results show that our method outperforms existing detection methods and provides fully automatic location, naming, and pose recognition for routine clinical practice. PMID:27104497

  5. Remote Sensing Image Fusion with Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Zhong, Jinying; Yang, Bin; Huang, Guoyu; Zhong, Fei; Chen, Zhongze

    2016-12-01

    Remote sensing image fusion (RSIF) refers to restoring a high-resolution multispectral image from its corresponding low-resolution multispectral (LMS) image with the aid of the panchromatic (PAN) image. Most RSIF methods assume that the missing spatial details of the LMS image can be obtained from the high-resolution PAN image. However, distortions can be produced because of the large difference between the structural components of the LMS and PAN images. In fact, the LMS image can exploit its own spatial details to improve the resolution. In this paper, a novel two-stage RSIF algorithm is proposed which makes full use of both the spatial details and the spectral information of the LMS image itself. In the first stage, convolutional neural network based super-resolution is used to increase the spatial resolution of the LMS image. In the second stage, the Gram-Schmidt transform is employed to fuse the enhanced MS and PAN images to further improve the resolution of the MS image. Because of the spatial resolution enhancement in the first stage, spectral distortions in the fused image are clearly reduced, while the spatial details are preserved. QuickBird satellite source images are used to test the performance of the proposed method. The experimental results demonstrate that the proposed method achieves better spatial details and spectral information simultaneously compared with other well-known methods.

  6. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields

    NASA Astrophysics Data System (ADS)

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility.

  7. A discrete convolution kernel for No-DC MRI

    NASA Astrophysics Data System (ADS)

    Zeng, Gengsheng L.; Li, Ya

    2015-08-01

    An analytical inversion formula for the exponential Radon transform with an imaginary attenuation coefficient was developed in 2007 (2007 Inverse Problems 23 1963-71). The inversion formula in that paper suggested that it is possible to obtain an exact MRI (magnetic resonance imaging) image without acquiring low-frequency data. However, this un-measured low-frequency region (ULFR) in the k-space (which is the two-dimensional Fourier transform space in MRI terminology) must be very small. This current paper derives a FBP (filtered backprojection) algorithm based on You’s formula by suggesting a practical discrete convolution kernel. A point spread function is derived for this FBP algorithm. It is demonstrated that the derived FBP algorithm can have a larger ULFR than that in the 2007 paper. The significance of this paper is that we present a closed-form reconstruction algorithm for a special case of under-sampled MRI data. Usually, under-sampled MRI data requires iterative (instead of analytical) algorithms with L1-norm or total variation norm to reconstruct the image.

  8. Adapting line integral convolution for fabricating artistic virtual environment

    NASA Astrophysics Data System (ADS)

    Lee, Jiunn-Shyan; Wang, Chung-Ming

    2003-04-01

    Vector fields occur not only in scientific applications but also in treasured art such as sculptures and paintings, where artists depict the natural environment by stressing directional features in addition to color and shape. Line integral convolution (LIC), developed for imaging vector fields in scientific visualization, has the potential to produce such directional images. In this paper we present several techniques that exploit LIC to generate impressionistic images forming an artistic virtual environment. We take advantage of the directional information given by a photograph and incorporate several refinements, including a non-photorealistic shading technique and statistical detail control. In particular, the non-photorealistic shading technique blends cool and warm colors into the photograph to imitate artists' painting conventions, and a statistical technique controls the integral length according to image variance in order to preserve details. Furthermore, we propose a method for generating a series of mip-maps, which reveal constant stroke size under multi-resolution viewing and achieve frame coherence in an interactive walkthrough system. The experimental results are visually satisfying and computationally efficient; consequently, the proposed technique can support a wide category of non-photorealistic rendering (NPR) applications such as interactive virtual environments with artistic perception.

  9. Cell osmotic water permeability of isolated rabbit proximal convoluted tubules.

    PubMed

    Carpi-Medina, P; González, E; Whittembury, G

    1983-05-01

    Cell osmotic water permeability, Pcos, of the peritubular aspect of the proximal convoluted tubule (PCT) was measured from the time course of cell volume changes following the sudden imposition of an osmotic gradient, ΔCio, across the cell membrane of PCT that had been dissected and mounted in a chamber. The possibility of artifact was minimized: the bath was vigorously stirred, the solutions could be 95% changed within 0.1 s, and small osmotic gradients (10-20 mosM) were used. Thus, the osmotically induced water flow was a linear function of ΔCio and the effect of the 70-μm-thick unstirred layers was negligible. In addition, data were extrapolated to ΔCio = 0. Pcos for PCT was 41.6 (±3.5) × 10⁻⁴ cm³·s⁻¹·osM⁻¹ per cm² of peritubular basal area. The standing-gradient osmotic theory for transcellular osmosis is incompatible with this value. Published values for Pcos of PST are 25.1 × 10⁻⁴, and for the transepithelial permeability Peos the values are 64 × 10⁻⁴ for PCT and 94 × 10⁻⁴ for PST, in the same units. These results indicate that there is room for paracellular water flow in both nephron segments and that the magnitude of the transcellular and paracellular water flows may vary from one segment of the proximal tubule to another. PMID:6846543

  10. Toward Content Based Image Retrieval with Deep Convolutional Neural Networks

    PubMed Central

    Sklan, Judah E.S.; Plassard, Andrew J.; Fabbri, Daniel; Landman, Bennett A.

    2015-01-01

    Content-based image retrieval (CBIR) offers the potential to identify similar case histories, understand rare disorders, and, eventually, improve patient care. Recent advances in database capacity, algorithm efficiency, and deep Convolutional Neural Networks (dCNN), a machine learning technique, have enabled great CBIR success for general photographic images. Here, we investigate applying the leading ImageNet CBIR technique to clinically acquired medical images captured by the Vanderbilt Medical Center. Briefly, we (1) constructed a dCNN with four hidden layers, reducing the dimensionality of an input scaled to 128×128 to an output encoded layer of 4×384, (2) trained the network using back-propagation on 1 million random magnetic resonance (MR) and computed tomography (CT) images, (3) labeled an independent set of 2100 images, and (4) evaluated classifiers on the projection of the labeled images into manifold space. Quantitative results were disappointing (averaging a true positive rate of only 20%); however, the data suggest that improvements would be possible with more evenly distributed sampling across labels and potential re-grouping of label structures. This preliminary effort at automated classification of medical images with ImageNet is promising, but shows that more work is needed beyond direct adaptation of existing techniques. PMID:25914507

  11. Forecasting natural aquifer discharge using a numerical model and convolution.

    PubMed

    Boggs, Kevin G; Johnson, Gary S; Van Kirk, Rob; Fairley, Jerry P

    2014-01-01

    If the nature of groundwater sources and sinks can be determined or predicted, the data can be used to forecast natural aquifer discharge. We present a procedure to forecast the relative contribution of individual aquifer sources and sinks to natural aquifer discharge. Using these individual aquifer recharge components, along with observed aquifer heads for each January, we generate a 1-year, monthly spring discharge forecast for the upcoming year with an existing numerical model and convolution. The results indicate that a forecast of natural aquifer discharge can be developed using only the dominant aquifer recharge sources combined with the effects of aquifer heads (initial conditions) at the time the forecast is generated. We also estimate how our forecast will perform in the future using a jackknife procedure, which indicates that the future performance of the forecast is good (Nash-Sutcliffe efficiency of 0.81). We develop a forecast and demonstrate important features of the procedure by presenting an application to the Eastern Snake Plain Aquifer in southern Idaho. PMID:23914881
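
    The forecasting step reduces to a discrete convolution of recharge stresses with a unit impulse response, superposed with the decaying effect of the initial heads. A toy NumPy sketch with made-up series (in the paper these come from the numerical model and observed January heads):

      import numpy as np

      def forecast_discharge(recharge, unit_response, head_effect):
          # Convolve each month's recharge stress with the aquifer's unit
          # impulse response, then add the initial-condition contribution.
          conv = np.convolve(recharge, unit_response)[:len(recharge)]
          return conv + head_effect

      months = 12
      recharge = np.array([3.0, 2.5, 2.0, 4.0, 6.0, 5.0,
                           2.0, 1.0, 0.8, 1.2, 2.0, 2.8])  # hypothetical stresses
      unit_response = 0.4 * 0.7 ** np.arange(months)        # toy response kernel
      head_effect = 1.5 * 0.9 ** np.arange(months)          # initial-head decay
      spring_discharge = forecast_discharge(recharge, unit_response, head_effect)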

  12. Turbo-decoding of a convolutionally encoded OCDMA system

    NASA Astrophysics Data System (ADS)

    Efinger, Daniel; Fritsch, Robert

    2005-02-01

    We present a novel multiple access scheme for Passive Optical Networks (PON) based on optical Code Division Multiple Access (OCDMA). Different from existing proposals for implementing OCDMA, we replace the predominant orthogonal or weakly correlated signature codes (e.g., Walsh-Hadamard codes (WHC)) with convolutional codes, thereby combining CDMA user separation and forward error correction (FEC). The coded bits are transmitted over the multiple-access fiber using optical BPSK, which requires electrical field-strength detection rather than direct detection (DD) at the receiver end. Since orthogonality is lost, a multiuser receiver must be employed to overcome the inherently strong correlation. The computational complexity of multiuser detection is the major challenge, and we show how it can be reduced by applying the turbo principle known from soft decoding of concatenated codes. The convergence behavior of the iterative multiuser receiver is investigated by means of extrinsic information transfer (EXIT) charts. Finally, we present simulation results of bit error ratio (BER) versus signal-to-noise ratio (SNR), including a standard single-mode fiber, to demonstrate the superior performance of the proposed scheme compared with those using orthogonal spreading techniques.

  13. A deep convolutional neural network for recognizing foods

    NASA Astrophysics Data System (ADS)

    Jahani Heravi, Elnaz; Habibi Aghdam, Hamed; Puig, Domenec

    2015-12-01

    Controlling food intake is an efficient way for individuals to tackle the obesity problem affecting countries worldwide. This is achievable by developing a smartphone application that is able to recognize foods and compute their calories. State-of-the-art methods are chiefly based on hand-crafted feature extraction methods such as HOG and Gabor. Recent advances on large-scale object recognition datasets such as ImageNet have revealed that deep Convolutional Neural Networks (CNN) possess more representational power than hand-crafted features. The main challenge with CNNs is to find the appropriate architecture for each problem. In this paper, we propose a deep CNN consisting of 769,988 parameters. Our experiments show that the proposed CNN outperforms the state-of-the-art methods and improves on the best result of traditional methods by 17%. Moreover, using an ensemble of two CNNs trained at two different times, we are able to improve the classification performance by 21.5%.

  14. Design and development of a new micro-beam treatment planning system: effectiveness of algorithms of optimization and dose calculations and potential of micro-beam treatment.

    PubMed

    Tachibana, Hidenobu; Kojima, Hiroyuki; Yusa, Noritaka; Miyajima, Satoshi; Tsuda, Akihisa; Yamashita, Takashi

    2012-07-01

    A new treatment planning system (TPS) was designed and developed for a new treatment system consisting of a micro-beam-enabled linac with robotics and a real-time tracking system. We also evaluated the effectiveness of the optimization and dose calculation algorithms implemented in the TPS. The optimization procedure consists of the pseudo Beam's-Eye-View method for finding optimized beam directions and the steepest-descent method for determining beam intensities. We used a superposition-/convolution-based (SC-based) algorithm and a Monte Carlo-based (MC-based) algorithm to calculate dose distributions on CT image data sets. In the SC-based algorithm, dose density scaling was applied for inhomogeneity corrections. The MC-based algorithm was implemented with the Geant4 toolkit and a phase-based approach using network-parallel computing. Evaluation of the TPS showed that the system can optimize the direction and intensity of individual beams. The average dose error of the SC-based algorithm was less than 1%, with a calculation time of 15 s per beam. The MC-based algorithm, however, needed 72 min per beam with the phase-based approach, even though parallel computing reduced the cost of multiple-beam calculations and yielded an 18.4-times speedup. The SC-based algorithm is therefore practically acceptable for dose calculation in terms of accuracy and computation time. Additionally, we found a dosimetric advantage of a proton-Bragg-peak-like dose distribution in micro-beam treatment. PMID:22544809
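
    As a loose illustration of the superposition/convolution idea with density scaling (not the authors' implementation; the geometry, attenuation coefficient, and kernel below are invented), a 1D toy version:

      import numpy as np

      def dose_1d(mu, density, kernel, fluence0=1.0, dz=0.1):
          # TERMA from exponential attenuation along radiological depth,
          # then superposition of a dose-deposition kernel weighted by
          # local density (heavily simplified density scaling).
          rad_depth = np.cumsum(density) * dz               # radiological depth
          terma = mu * fluence0 * np.exp(-mu * rad_depth)   # energy released/mass
          dose = np.zeros_like(terma)
          half = len(kernel) // 2
          for i, t in enumerate(terma):                     # superpose kernels
              for k, w in enumerate(kernel):
                  j = i + k - half
                  if 0 <= j < len(dose):
                      dose[j] += t * w * density[i]
          return dose

      # Water-lung-water slab phantom, like the simple verification case.
      density = np.concatenate([np.ones(30), 0.3 * np.ones(20), np.ones(30)])
      kernel = np.exp(-np.abs(np.arange(-5, 6)) / 2.0)
      kernel /= kernel.sum()
      d = dose_1d(mu=0.05, density=density, kernel=kernel)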

  15. Convoluted nozzle design for the RL10 derivative 2B engine

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The convoluted nozzle is a conventional refractory metal nozzle extension that is formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas driven actuators. For spacecraft applications the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two-position nozzle were studied, including a hydrogen dump cooled metal nozzle and radiation cooled nozzles of refractory metal and carbon/carbon composite construction, respectively.

  16. Directional Radiometry and Radiative Transfer: the Convoluted Path From Centuries-old Phenomenology to Physical Optics

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.

    2014-01-01

    This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.

  17. Convolutions of Hilbert Modular Forms and Their Non-Archimedean Analogues

    NASA Astrophysics Data System (ADS)

    Panchishkin, A. A.

    1989-02-01

    The author constructs non-Archimedean analytic functions which interpolate special values of the convolution of two Hilbert cusp forms on a product of complex upper half-planes. Bibliography: 15 titles.

  18. Quantification of the impact of MLC modeling and tissue heterogeneities on dynamic IMRT dose calculations

    SciTech Connect

    Mihaylov, I. B.; Lerma, F. A.; Fatyga, M.; Siebers, J. V.

    2007-04-15

    This study quantifies the dose prediction errors (DPEs) in dynamic IMRT dose calculations resulting from (a) use of an intensity matrix to estimate the multi-leaf collimator (MLC) modulated photon fluence (DPE_IGfluence) instead of an explicit MLC particle transport, and (b) handling of tissue heterogeneities (DPE_hetero) by superposition/convolution (SC) and pencil beam (PB) dose calculation algorithms. Monte Carlo (MC) computed doses are used as reference standards. Eighteen head-and-neck dynamic MLC IMRT treatment plans are investigated. DPEs are evaluated by comparing the dose received by 98% of the GTV (GTV D98%), the CTV D95%, the nodal D90%, the cord and the brainstem D02%, the parotid D50%, the parotid mean dose (DMean), and generalized equivalent uniform doses (gEUDs) for the above structures. For the MC-generated intensity grids, DPE_IGfluence is within ±2.1% for all targets and critical structures. The SC algorithm DPE_hetero is within ±3% for 98.3% of the indices tallied, and within ±3.4% for all of the tallied indices. The PB algorithm DPE_hetero is within ±3% for 92% of the tallied indices. Statistical equivalence tests indicate that PB DPE_hetero requires a ±3.6% interval to state equivalence with the MC standard, while the intervals are <1.5% for SC DPE_hetero and DPE_IGfluence. Overall, these results indicate that SC and MC IMRT dose calculations which use MC-derived intensity matrices for fluence prediction do not introduce significant dose errors compared with full Monte Carlo dose computations; however, PB algorithms may result in clinically significant dose deviations.

  19. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.

    1987-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that problem solutions can be added together to obtain composite solutions. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader.
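
    As a concrete illustration of the principle (a sketch under assumed aquifer parameters, not taken from the report), the composite drawdown from two pumping wells in a confined aquifer can be computed by adding two single-well Theis solutions:

        # Superposition of two Theis drawdown solutions; T, S, Q, r and t
        # are illustrative values. scipy.special.exp1 is the well function W(u).
        import numpy as np
        from scipy.special import exp1

        def theis_drawdown(Q, r, t, T=500.0, S=1e-4):
            """s = Q/(4*pi*T) * W(u), with u = r**2 * S / (4*T*t)."""
            u = r**2 * S / (4.0 * T * t)
            return Q / (4.0 * np.pi * T) * exp1(u)

        t = 1.0  # days (with T in m**2/day and Q in m**3/day)
        s_total = theis_drawdown(1000.0, 50.0, t) + theis_drawdown(500.0, 120.0, t)
        print(f"composite drawdown: {s_total:.3f} m")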

  20. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, T.E.; Franke, O.L.; Bennett, G.D.

    1984-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that solutions to individual problems can be added together to obtain solutions to complex problems. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to groundwater hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader. (USGS)

  1. Generating superposition of up-to three photons for continuous variable quantum information processing.

    PubMed

    Yukawa, Mitsuyoshi; Miyata, Kazunori; Mizuta, Takahiro; Yonezawa, Hidehiro; Marek, Petr; Filip, Radim; Furusawa, Akira

    2013-03-11

    We develop an experimental scheme based on a continuous-wave (cw) laser for generating arbitrary superpositions of photon number states. In this experiment, we successfully generate superposition states of zero to three photons, namely advanced versions of superpositions of two and three coherent states. They are fully compatible with developed quantum teleportation and measurement-based quantum operations with cw lasers. Owing to the high detection efficiency achieved, we observe, without any loss correction, multiple areas of negativity of the Wigner function, which confirms the strongly nonclassical nature of the generated states. PMID:23482124

  2. A convolution model for computing the far-field directivity of a parametric loudspeaker array.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2015-02-01

    This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity and Westervelt's directivity is suggested, substituting for the past practice of using the product directivity only. The directivity of a PLA computed using the proposed convolution model agrees significantly better with measured directivity, at a negligible computational cost. PMID:25698012
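
    The modeling step can be sketched as follows (illustrative beam shapes only; the paper's expressions for the product and Westervelt directivities are not reproduced here):

        # Far-field directivity as the angular convolution of the product
        # directivity with a Westervelt-type directivity; both beam shapes
        # below are placeholder functions assumed for illustration.
        import numpy as np

        theta = np.linspace(-90, 90, 721)                  # degrees
        d_primary = np.exp(-(theta / 15.0) ** 2)           # assumed primary beam
        d_product = d_primary * d_primary                  # product directivity
        d_westervelt = 1.0 / (1.0 + (theta / 8.0) ** 2)    # assumed Westervelt term

        d_model = np.convolve(d_product, d_westervelt, mode="same")
        d_model /= d_model.max()                           # normalize on-axis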

  3. Mitochondrial and Metabolic Dysfunction in Renal Convoluted Tubules of Obese Mice: Protective Role of Melatonin

    PubMed Central

    Giugno, Lorena; Lavazza, Antonio; Reiter, Russel J.; Rodella, Luigi Fabrizio; Rezzani, Rita

    2014-01-01

    Obesity is a common and complex health problem, which impacts crucial organs; it is also considered an independent risk factor for chronic kidney disease. Few studies have analyzed the consequences of obesity in the renal proximal convoluted tubules, which are the major tubules involved in reabsorptive processes. For optimal performance of the kidney, energy is primarily provided by mitochondria. Melatonin, an indoleamine and antioxidant, has been identified in mitochondria, and there is considerable evidence regarding its essential role in the prevention of oxidative mitochondrial damage. In this study we evaluated the mechanism(s) of mitochondrial alterations in an animal model of obesity (ob/ob mice) and describe the beneficial effects of melatonin treatment on mitochondrial morphology and dynamics as influenced by mitofusin-2 and the intrinsic apoptotic cascade. Melatonin dissolved in 1% ethanol was added to the drinking water from postnatal weeks 5–13; the calculated dose of melatonin intake was 100 mg/kg body weight/day. Compared to control mice, obesity-related morphological alterations were apparent in the proximal tubules, which contained round mitochondria with irregular, short cristae and cells with an elevated apoptotic index. Melatonin supplementation in obese mice changed mitochondrial shape and cristae organization in the proximal tubules, and enhanced mitofusin-2 expression, which in turn modulated the progression of the mitochondria-driven intrinsic apoptotic pathway. These changes possibly aid in reducing renal failure. The melatonin-mediated changes indicate its potential protective use against renal morphological damage and dysfunction associated with obesity and metabolic disease. PMID:25347680

  4. Effects of glucose on water and sodium reabsorption in the proximal convoluted tubule of rat kidney.

    PubMed Central

    Bishop, J H; Green, R; Thomas, S

    1978-01-01

    1. The effects of glucose on sodium and water reabsorption by rat renal proximal tubules were investigated by in situ microperfusion of segments of proximal tubules with solutions containing glucose or no glucose, with and without phlorizin. 2. Absence of glucose did not significantly alter net water flux. Sodium flux was reduced by about 10%, but this was not statistically significant. 3. In the absence of glucose in the perfusion fluid, net secretion of glucose occurred. 4. Phlorizin reduced net reabsorption or net secretion of glucose, as well as net water flux. 5. The data suggest that in later parts of the proximal convoluted tubule some sodium may be co-transported with glucose, but that this represents only a small fraction of the total sodium reabsorption. 6. It is suggested that the glucose carrier is reversible and in appropriate circumstances could cause glucose secretion. 7. Although phlorizin alters net water flux, the underlying mechanisms are not clear. 8. The calculated osmolality of the reabsorbate was significantly greater than the perfusate osmolality, and greater than plasma osmolality, although the latter difference was not quite statistically significant. PMID:633143

  5. Dose convolution filter: Incorporating spatial dose information into tissue response modeling

    SciTech Connect

    Huang Yimei; Joiner, Michael; Zhao Bo; Liao Yixiang; Burmeister, Jay

    2010-03-15

    Purpose: A model is introduced to integrate biological factors such as cell migration and bystander effects into physical dose distributions, and to incorporate spatial dose information in plan analysis and optimization. Methods: The model consists of a dose convolution filter (DCF) with a single parameter σ. Tissue response is calculated by an existing NTCP model with the DCF-applied dose distribution as input. The authors determined σ for rat spinal cord from published data. The authors also simulated the GRID technique, in which an open field is collimated into many pencil beams. Results: After applying the DCF, the NTCP model successfully fits the rat spinal cord data with a predicted value of σ = 2.6 ± 0.5 mm, consistent with the 2 mm migration distances of remyelinating cells. Moreover, it enables the appropriate prediction of a high relative seriality for spinal cord. The model also predicts the sparing of normal tissues by the GRID technique when the size of each pencil beam becomes comparable to σ. Conclusions: The DCF model incorporates spatial dose information and offers an improved way to estimate tissue response from complex radiotherapy dose distributions. It does not alter the prediction of tissue response in large homogeneous fields, but successfully predicts increased tissue tolerance in small or highly nonuniform fields.
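
    The filtering step itself is simple to sketch (toy dose grid; the NTCP model that consumes the filtered dose is not shown):

        # Dose convolution filter: blur the physical dose with a Gaussian of
        # width sigma before NTCP evaluation. sigma = 2.6 mm is the value
        # fitted for rat spinal cord; the grid spacing is an assumption.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        voxel_mm = 1.0
        sigma_mm = 2.6
        dose = np.zeros((60, 60, 60))
        dose[28:32, :, :] = 50.0   # toy GRID-like partial-volume irradiation

        dcf_dose = gaussian_filter(dose, sigma=sigma_mm / voxel_mm)
        # dcf_dose replaces dose as the input to the existing NTCP model.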

  6. Independent absorbed-dose calculation using the Monte Carlo algorithm in volumetric modulated arc therapy

    PubMed Central

    2014-01-01

    Purpose To report the results of independent absorbed-dose calculations based on a Monte Carlo (MC) algorithm in volumetric modulated arc therapy (VMAT) for various treatment sites. Methods and materials All treatment plans were created by the superposition/convolution (SC) algorithm of SmartArc (Pinnacle V9.2, Philips). The beam information was converted into the format of Monaco V3.3 (Elekta), which uses the X-ray voxel-based MC (XVMC) algorithm. The dose distribution was independently recalculated in Monaco. The doses for the planning target volume (PTV) and the organs at risk (OAR) were analyzed via comparisons with those of the treatment plan. Before performing the independent absorbed-dose calculations, validation was conducted via irradiation from 3 different gantry angles with a 10- × 10-cm2 field. For the independent absorbed-dose calculations, 15 patients with cancer (prostate, 5; lung, 5; head and neck, 3; rectal, 1; and esophageal, 1) who were treated with single-arc VMAT were selected. To classify the causes of the dose differences between the Pinnacle and Monaco TPSs, their calculations were also compared with measurement data. Results In validation, the dose in Pinnacle agreed with that in Monaco within 1.5%. The agreement in VMAT calculations between Pinnacle and Monaco using phantoms was exceptional; at the isocenter, the difference was less than 1.5% for all the patients. For independent absorbed-dose calculations, the agreement was also extremely good. For the mean dose for the PTV in particular, the agreement was within 2.0% in all the patients; specifically, no large difference was observed for high-dose regions. Conversely, a significant difference was observed in the mean dose for the OAR. For patients with prostate cancer, the mean rectal dose calculated in Monaco was significantly smaller than that calculated in Pinnacle. Conclusions There was no remarkable difference between the SC and XVMC calculations in the high-dose regions

  7. Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes

    SciTech Connect

    Houshmand, Monireh; Hosseini-Khayat, Saied

    2011-02-15

    Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a "pearl-necklace" encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.

  8. True three-dimensional dose computations for megavoltage x-ray therapy: a role for the superposition principle.

    PubMed

    Battista, J J; Sharpe, M B

    1992-12-01

    The objective of radiation therapy is to concentrate a prescribed radiation dose accurately within a target volume in the patient. Major advances in imaging technology have greatly improved our ability to plan radiation treatments in three dimensions (3D) and to verify the treatment geometrically, but there is a concomitant need to improve dosimetric accuracy. It has been recommended that radiation doses should be computed with an accuracy of 3% within the target volume and in radiosensitive normal tissues. We review the rationale behind this recommendation, and describe a new generation of 3D dose algorithms which are capable of achieving this goal. A true 3D dose calculation tracks primary and scattered radiations in 3D space while accounting for tissue inhomogeneities. In the past, dose distributions have been computed in a 2D transverse slice with the assumption that the anatomy of the patient does not change abruptly in nearby slices. We demonstrate the importance of computing 3D scatter contributions to dose from photons and electrons correctly, and show the magnitude of dose errors caused by using traditional 2D methods. The Monte Carlo technique is the most general and rigorous approach since individual primary and secondary particle tracks are simulated. However, this approach is too time-consuming for clinical treatment planning. We review an approach that is based on the superposition principle and achieves a reasonable compromise between the speed of computation and accuracy in dose. In this approach, dose deposition is separated into two steps. Firstly, the attenuation of incident photons interacting in the absorber is computed to determine the total energy released in the material (TERMA). This quantity is treated as an impulse at each irradiated point. Secondly, the transport of energy by scattered photons and electrons is described by a point dose spread kernel. The dose distribution is the superposition of the kernels, weighted by the magnitude of the TERMA at each point.
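
    With a spatially invariant kernel, this superposition reduces to a plain convolution, which the following sketch illustrates (toy attenuation and kernel values; clinical implementations tilt and density-scale the kernels):

        # Two-step superposition dose model: (1) attenuate primary photons to
        # get TERMA, (2) superpose a point dose-spread kernel. Values are toy.
        import numpy as np
        from scipy.signal import fftconvolve

        mu = 0.005                                # assumed attenuation per mm
        z = np.arange(200)                        # depth (mm)
        terma = np.exp(-mu * z)[:, None, None] * np.ones((200, 64, 64))

        g = np.indices((21, 21, 21)) - 10         # toy isotropic kernel
        kernel = np.exp(-np.sqrt((g ** 2).sum(axis=0)))
        kernel /= kernel.sum()

        dose = fftconvolve(terma, kernel, mode="same")  # kernel superposition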

  9. Text-Attentional Convolutional Neural Network for Scene Text Detection

    NASA Astrophysics Data System (ADS)

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature computed globally from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this work, we present a new system for scene text detection by proposing a novel Text-Attentional Convolutional Neural Network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called Contrast-Enhancement Maximally Stable Extremal Regions (CE-MSERs) is developed, which extends the widely-used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 dataset, with an F-measure of 0.82, improving the state-of-the-art results substantially.

  10. A staggered-grid convolutional differentiator for elastic wave modelling

    NASA Astrophysics Data System (ADS)

    Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun

    2015-11-01

    The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of window functions influences the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for different order CDs by minimizing the spectral error of the derivative, and compare them with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th order staggered-grid CD operator can achieve the same accuracy as a 16th order staggered-grid FD algorithm with half the computational resources and time. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagation.
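
    The construction of a tapered staggered-grid CD operator can be sketched as follows (the ideal coefficients follow from inverse Fourier transforming the band-limited spectrum ik of the first-derivative operator; the operator half-length and Gaussian width below are illustrative choices, not the paper's optimized values):

        # Tapered staggered-grid convolutional differentiator (sketch).
        import numpy as np

        dx, M = 1.0, 8                                  # spacing, half-length
        m = np.arange(1, M + 1)
        c = (-1.0) ** (m + 1) / (np.pi * (m - 0.5) ** 2 * dx)  # ideal CD taps
        c *= np.exp(-0.5 * ((m - 0.5) / (0.6 * M)) ** 2)       # Gaussian taper

        def d_dx(f):
            """df/dx at staggered points x_i + dx/2 (interior samples only)."""
            n = len(f)
            df = np.zeros(n)
            for mm, cm in zip(m, c):
                df[M:n - M] += cm * (f[M + mm:n - M + mm]
                                     - f[M - mm + 1:n - M - mm + 1])
            return df

        x = np.arange(0, 256) * dx
        err = d_dx(np.sin(0.3 * x)) - 0.3 * np.cos(0.3 * (x + dx / 2))
        print(np.abs(err[M:-M]).max())                  # small interior error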

  11. Deep convolutional networks for pancreas segmentation in CT imaging

    NASA Astrophysics Data System (ADS)

    Roth, Holger R.; Farag, Amal; Lu, Le; Turkbey, Evrim B.; Summers, Ronald M.

    2015-03-01

    Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving high accuracies when compared to state-of-the-art segmentation of organs like the liver, heart or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for "deep learning" methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that the classification features used are trained directly from the imaging data. We present a fully-automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated, using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger than 0.5 are retained. These retained superpixels serve as a highly sensitive initial input of the pancreas and its surroundings to a ConvNet that samples a bounding box around each superpixel at different scales (and random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not. We evaluate our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve an average maximum Dice score of 68% ± 10% (range, 43–80%) in testing. This shows promise for accurate pancreas segmentation using a deep learning approach, and compares favorably to state-of-the-art methods.

  12. Text-Attentional Convolutional Neural Network for Scene Text Detection.

    PubMed

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results. PMID:27093723

  13. Magnetospheric ULF Waves with an Increasing Amplitude as a Superposition of Two Wave Modes

    NASA Astrophysics Data System (ADS)

    Shen, Xiaochen; Zong, Qiugang; Shi, Quanqi; Tian, Anmin; Sun, Weijie; Wang, Yongfu; Zhou, Xuzhi; Fu, Suiyan; Hartinger, Michael; Angelopoulos, Vassilis

    2015-04-01

    Ultra-low frequency (ULF) waves play an important role in transferring solar wind energy into the magnetosphere when solar wind pressure impulses buffet it. The amplitudes of magnetospheric ULF waves induced by solar wind dynamic pressure enhancements or shocks are thought to damp within half a wave cycle or one cycle. We report in situ observations of magnetospheric ULF waves, induced by solar wind dynamic pressure impulses, whose amplitudes increase. We have found six ULF wave events, induced by solar wind dynamic pressure enhancements, with a slow but clear amplitude increase. Over three or four wave cycles, the amplitudes of the ion velocities and electric field of these waves increased continuously by factors of 1.3 to 4.4. Two significant events were selected to further study the characteristics of these ULF waves. We found that the wave amplitude growth is mainly contributed by the toroidal mode wave. We suggest that the amplitude increase in the radial electric field is caused by the superposition of two wave modes: a standing wave excited by the solar wind dynamic impulse and a propagating compressional wave. When superposed, the two wave modes fit observations, as does a calculation that superposes electric fields from two wave sources.

  14. SUPERPOSE-An excel visual basic program for fracture modeling based on the stress superposition method

    NASA Astrophysics Data System (ADS)

    Ismail Ozkaya, Sait

    2014-03-01

    An Excel Visual Basic program, SUPERPOSE, is presented to predict the distribution, relative size and strike of tensile and shear fractures on anticlinal structures. The program is based on the concept of stress superposition: the addition of curvature-related local tensile stress and regional far-field stress. The method accurately predicts fractures on many Middle East oil fields that were formed under a strike-slip regime as duplexes, flower structures or inverted structures. The program operates on the Excel platform. It reads the parameters and structural grid data from an Excel template and writes the results to the same template, and it has two routines to import structural grid data in the Eclipse and Zmap formats. The working canvas of SUPERPOSE is a single-layer structural grid of a given cell size (e.g. 50×50 m). In the final output, a single tensile or two conjugate shear fractures are placed in each cell if the fracturing criteria are satisfied; otherwise the cell is left blank. The strike of the representative fracture(s) is calculated exactly, but the length is an index of fracture porosity (fracture density×length×aperture) within that cell.

  15. Probing the conductance superposition law in single-molecule circuits with parallel paths.

    PubMed

    Vazquez, H; Skouta, R; Schneebeli, S; Kamenetska, M; Breslow, R; Venkataraman, L; Hybertsen, M S

    2012-10-01

    According to Kirchhoff's circuit laws, the net conductance of two parallel components in an electronic circuit is the sum of the individual conductances. However, when the circuit dimensions are comparable to the electronic phase coherence length, quantum interference effects play a critical role, as exemplified by the Aharonov-Bohm effect in metal rings. At the molecular scale, interference effects dramatically reduce the electron transfer rate through a meta-connected benzene ring when compared with a para-connected benzene ring. For longer conjugated and cross-conjugated molecules, destructive interference effects have been observed in the tunnelling conductance through molecular junctions. Here, we investigate the conductance superposition law for parallel components in single-molecule circuits, particularly the role of interference. We synthesize a series of molecular systems that contain either one backbone or two backbones in parallel, bonded together cofacially by a common linker on each end. Single-molecule conductance measurements and transport calculations based on density functional theory show that the conductance of a double-backbone molecular junction can be more than twice that of a single-backbone junction, providing clear evidence for constructive interference. PMID:22941403

  16. Quantum superposition principle and gravitational collapse: Scattering times for spherical shells

    SciTech Connect

    Ambrus, M.; Hajicek, P.

    2005-09-15

    A quantum theory of spherically symmetric thin shells of null dust and their gravitational field is studied. In Nucl. Phys. B603, 555 (2001), it has been shown how superpositions of quantum states with different geometries can lead to a solution of the singularity problem and black hole information paradox: the shells bounce and re-expand and the evolution is unitary. The corresponding scattering times will be defined in the present paper. To this aim, a spherical mirror of radius R_m is introduced. The classical formula for scattering times of the shell reflected from the mirror is extended to quantum theory. The scattering times and their spreads are calculated. They have a regular limit for R_m → 0 and they reveal a resonance at E_m = c⁴R_m/2G. Except for the resonance, they are roughly of the order of the time the light needs to cross the flat space distance between the observer and the mirror. Some ideas are discussed of how the construction of the quantum theory could be changed so that the scattering times become considerably longer.

  17. Design and Evaluation of a Research-Based Teaching Sequence: The Superposition of Electric Field.

    ERIC Educational Resources Information Center

    Viennot, L.; Rainson, S.

    1999-01-01

    Illustrates an approach to research-based teaching strategies and their evaluation. Addresses a teaching sequence on the superposition of electric fields implemented at the college level in an institutional framework subject to severe constraints. Contains 28 references. (DDR)

  18. Intra-cavity generation of a superposition of Bessel-Gauss beams

    NASA Astrophysics Data System (ADS)

    Wong-Campos, Jaime D.; Hernandez-Aranda, Raul I.

    2012-10-01

    The generation of intra-cavity superpositions of Bessel-Gauss beams in an axicon resonator is studied numerically by means of a genetic algorithm. The coherent superposition of low order modes is induced by introducing crossed wires within the simulated cavity. Two different strategies are shown to be equivalent for the generation of the same superposition of two Bessel-Gauss beams with opposite azimuthal orders. In the first strategy the angle between a pair of cross-wires is varied for mode selection; the second consists of introducing a number of cross-wires at equally spaced angles, in which the number of wires corresponds exactly to the order of the superposed modes. Our results suggest a direct method for generating experimentally a coherent mode superposition of Bessel-Gauss beams using an axicon-based Bessel-Gauss resonator. These beams are relevant in areas such as optical trapping and micromanipulation.
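
    The petal structure of such a superposition is easy to reproduce numerically (illustrative parameters; the intra-cavity mode competition itself is not modeled here):

        # Transverse intensity of two superposed Bessel-Gauss beams with
        # opposite azimuthal orders +l and -l: the cross term gives a
        # cos(l*phi) petal pattern with 2*l lobes. k_r, w0, l are assumed.
        import numpy as np
        from scipy.special import jv

        l, k_r, w0 = 3, 5.0, 1.0
        x = np.linspace(-2, 2, 401)
        X, Y = np.meshgrid(x, x)
        rho, phi = np.hypot(X, Y), np.arctan2(Y, X)

        field = (np.exp(-rho**2 / w0**2) * jv(l, k_r * rho)
                 * (np.exp(1j * l * phi) + np.exp(-1j * l * phi)))
        intensity = np.abs(field) ** 2    # 2*l azimuthal lobes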

  19. Biases on Initial Mass Function Determinations. II. Real Multiple Systems and Chance Superpositions

    NASA Astrophysics Data System (ADS)

    Maíz Apellániz, J.

    2008-04-01

    When calculating stellar initial mass functions (IMFs) for young clusters, one has to take into account that (1) most massive stars are born in multiple systems, (2) most IMFs are derived from data that cannot resolve such systems, and (3) multiple chance superpositions between members are expected to happen if the cluster is too distant. In this article I use numerical experiments to model the consequences of those phenomena on the observed color-magnitude diagrams and the IMFs derived from them. Real multiple systems affect the observed or apparent massive-star MF slope little but can create a significant population of apparently ultramassive stars. Chance superpositions produce only small biases when the number of superimposed stars is low but, once a certain number threshold is reached, they can affect both the observed slope and the apparent stellar upper mass limit. I apply these experiments to two well known massive young clusters in the Local Group, NGC 3603 and R136. In both cases I show that the observed population of stars with masses above 120 M⊙ can be explained by the effects of unresolved objects, mostly real multiple systems for NGC 3603 and a combination of real and chance-alignment multiple systems for R136. Therefore, the case for the reality of a stellar upper mass limit at solar or near-solar metallicities is strengthened, with a possible value even lower than 150 M⊙. An IMF slope somewhat flatter than Salpeter or Kroupa with γ between -1.6 and -2.0 is derived for the central region of NGC 3603, with a significant contribution to the uncertainty arising from the imprecise knowledge of the distance to the cluster. The IMF at the very center of R136 cannot be measured with the currently available data but the situation could change with new HST observations. This article is partially based on observations made with the NASA/ESA Hubble Space Telescope (HST), some of them associated with GO program 10602 and the rest gathered from the archive
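
    The flavor of these numerical experiments can be captured in a few lines (a toy sketch with an assumed slope, mass range, and blending fraction, not the article's full modeling of photometry and chance alignments):

        # Draw masses from a truncated power-law IMF, blend a fraction of
        # stars into unresolved pairs, and compare apparent vs. true maxima.
        import numpy as np

        rng = np.random.default_rng(1)
        gamma, m_lo, m_hi, n = -2.35, 1.0, 150.0, 20000

        a = gamma + 1.0                      # inverse-transform sampling
        u = rng.random(n)
        masses = (m_lo**a + u * (m_hi**a - m_lo**a)) ** (1.0 / a)

        blend = rng.random(n // 2) < 0.3     # 30% unresolved pairs (assumed)
        pairs = masses[:n // 2] + np.where(blend, masses[n // 2:], 0.0)
        apparent = np.concatenate([pairs, masses[n // 2:][~blend]])

        print(masses.max(), apparent.max())  # blends can exceed the true limit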

  20. Efficient training of convolutional deep belief networks in the frequency domain for application to high-resolution 2D and 3D images.

    PubMed

    Brosch, Tom; Tam, Roger

    2015-01-01

    Deep learning has traditionally been computationally expensive, and advances in training methods have been the prerequisite for improving its efficiency in order to expand its application to a variety of image classification problems. In this letter, we address the problem of efficient training of convolutional deep belief networks by learning the weights in the frequency domain, which eliminates the time-consuming calculation of convolutions. An essential consideration in the design of the algorithm is to minimize the number of transformations to and from frequency space. We have evaluated the running time improvements using two standard benchmark data sets, showing a speed-up of up to 8 times on 2D images and up to 200 times on 3D volumes. Our training algorithm makes training of convolutional deep belief networks on 3D medical images with a resolution of up to 128×128×128 voxels practical, which opens new directions for using deep learning for medical image analysis. PMID:25380341
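
    The identity being exploited is that convolution in the spatial domain becomes elementwise multiplication in the frequency domain; a minimal numerical check (not the letter's training algorithm) is:

        # Circular 2D convolution via FFT equals the direct computation.
        import numpy as np

        rng = np.random.default_rng(0)
        img, ker = rng.random((8, 8)), rng.random((8, 8))

        fft_conv = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(ker)))

        direct = np.zeros((8, 8))
        flipped = ker[::-1, ::-1]
        for i in range(8):
            for j in range(8):
                direct[i, j] = np.sum(img * np.roll(np.roll(flipped, i + 1, 0),
                                                    j + 1, 1))

        print(np.allclose(fft_conv, direct))  # True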

  1. Resilience to decoherence of the macroscopic quantum superpositions generated by universally covariant optimal quantum cloning

    SciTech Connect

    Spagnolo, Nicolo; Sciarrino, Fabio; De Martini, Francesco

    2010-09-15

    We show that the quantum states generated by universal optimal quantum cloning of a single photon represent a universal set of quantum superpositions resilient to decoherence. We adopt the Bures distance as a tool to investigate the persistence of quantum coherence of these quantum states. According to this analysis, the process of universal cloning realizes a class of quantum superpositions that exhibits a covariance property in a lossy configuration over the complete set of polarization states on the Bloch sphere.

  2. Prediction of color changes in acetaminophen solution using the time-temperature superposition principle.

    PubMed

    Mochizuki, Koji; Takayama, Kozo

    2016-07-01

    A prediction method for color changes based on the time-temperature superposition principle (TTSP) was developed for acetaminophen solution. Color changes of acetaminophen solution are caused by the degradation of acetaminophen, such as hydrolysis and oxidation. In principle, the TTSP can be applied only to thermal aging. Therefore, the impact of oxidation on the color changes of acetaminophen solution was verified. The results of our experiment suggested that the oxidation products enhanced the color changes in acetaminophen solution. Next, the color changes of acetaminophen solution samples of the same head space volume after accelerated aging at various temperatures were investigated using the Commission Internationale de l'Eclairage (CIE) LAB color space (a*, b*, L* and ΔE*ab), following which the TTSP was applied to kinetic analysis of the color changes. The apparent activation energies using the time-temperature shift factors of a*, b*, L* and ΔE*ab were calculated as 72.4, 69.2, 72.3 and 70.9 kJ/mol, respectively, which are similar to the values for acetaminophen hydrolysis reported in the literature. The predicted values of a*, b*, L* and ΔE*ab at 40 °C were obtained by calculation using Arrhenius plots. A comparison between the experimental and predicted values for each color parameter revealed sufficiently high R² values (>0.98), suggesting the high reliability of the prediction. The kinetic analysis using the TTSP was successfully applied to predicting the color changes under a controlled oxygen amount at any temperature and for any length of time. PMID:26559666
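
    The core TTSP step is the Arrhenius shift factor that maps accelerated-aging time onto equivalent time at the storage temperature; a sketch using the activation energy reported for ΔE*ab (the temperatures are illustrative):

        # Arrhenius time-temperature shift factor a_T.
        import numpy as np

        R = 8.314      # J/(mol K)
        Ea = 70.9e3    # J/mol, apparent activation energy for Delta E*ab

        def shift_factor(T_accel_C, T_ref_C=40.0):
            Ta, Tr = T_accel_C + 273.15, T_ref_C + 273.15
            return np.exp(Ea / R * (1.0 / Tr - 1.0 / Ta))

        # One week at 60 degC corresponds to ~5.1 weeks at 40 degC:
        print(shift_factor(60.0))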

  3. Algorithms used in heterogeneous dose calculations show systematic differences as measured with the Radiological Physics Center’s anthropomorphic thorax phantom used for RTOG credentialing

    PubMed Central

    Kry, Stephen F.; Alvarez, Paola; Molineu, Andrea; Amador, Carrie; Galvin, James; Followill, David S.

    2012-01-01

    Purpose To determine the impact of treatment planning algorithm on the accuracy of heterogeneous dose calculations in the Radiological Physics Center (RPC) thorax phantom. Methods and Materials We retrospectively analyzed the results of 304 irradiations of the RPC thorax phantom at 221 different institutions as part of credentialing for RTOG clinical trials; the irradiations were all done using 6-MV beams. Treatment plans included those for intensity-modulated radiation therapy (IMRT) as well as 3D conformal therapy (3D CRT). Heterogeneous plans were developed using Monte Carlo (MC), convolution/superposition (CS) and the anisotropic analytic algorithm (AAA), as well as pencil beam (PB) algorithms. For each plan and delivery, the absolute dose measured in the center of a lung target was compared to the calculated dose, as was the planar dose in 3 orthogonal planes. The difference between measured and calculated dose was examined as a function of planning algorithm as well as use of IMRT. Results PB algorithms overestimated the dose delivered to the center of the target by 4.9% on average. Surprisingly, CS algorithms and AAA also showed a systematic overestimation of the dose to the center of the target, by 3.7% on average. In contrast, the MC algorithm dose calculations agreed with measurement within 0.6% on average. There was no difference observed between IMRT and 3D CRT calculation accuracy. Conclusion Unexpectedly, advanced treatment planning systems (those using CS and AAA algorithms) overestimated the dose that was delivered to the lung target. This issue requires attention in terms of heterogeneity calculations and potentially in terms of clinical practice. PMID:23237006

  4. Convolution effect on TCR log response curve and the correction method for it

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Liu, L. J.; Gao, J.

    2016-09-01

    Through-casing resistivity (TCR) logging has been successfully used in production wells for the dynamic monitoring of oil pools and the distribution of the residual oil, but its limited vertical resolution reduces its efficiency in identifying thin beds. This limitation arises from distortion of the vertical response of TCR logging, which was studied in this work. It was found that the vertical response curve of TCR logging is the convolution of the true formation resistivity and the convolution function of the TCR logging tool. Due to the effect of convolution, the measurement error at thin beds can reach 30% or more, so thin-bed information is very likely to be masked. The convolution function of the TCR logging tool was obtained in both continuous and discrete form in this work. Through a modified Lyle-Kalman deconvolution method, the true formation resistivity can be optimally estimated, so this inverse algorithm can correct the error caused by the convolution effect and thus improve the vertical resolution of the TCR logging tool for identification of thin beds.
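
    The paper's correction uses a modified Lyle-Kalman deconvolution; a generic Tikhonov-regularized frequency-domain deconvolution (sketched below on synthetic data) conveys the same idea of undoing the tool's convolution function:

        # Recover true resistivity from a log modeled as r_true convolved
        # with the tool response g; all signals and widths are synthetic.
        import numpy as np

        n = 256
        z = np.arange(n)
        r_true = 10.0 + 40.0 * ((z > 100) & (z < 110))    # thin resistive bed
        g = np.exp(-0.5 * ((z - n // 2) / 6.0) ** 2)
        g /= g.sum()                                      # tool response

        G = np.fft.fft(np.fft.ifftshift(g))               # centered at index 0
        measured = np.real(np.fft.ifft(np.fft.fft(r_true) * G))

        lam = 1e-3                                        # regularization
        r_est = np.real(np.fft.ifft(np.fft.fft(measured) * np.conj(G)
                                    / (np.abs(G) ** 2 + lam)))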

  5. Iterative sinc-convolution method for solving planar D-bar equation with application to EIT.

    PubMed

    Abbasi, Mahdi; Naghsh-Nilchi, Ahmad-Reza

    2012-08-01

    The numerical solution of D-bar integral equations is the key in inverse scattering solutions of many complex problems in science and engineering, including conductivity imaging. Recently, two methodologies were considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves high computational complexity, and the second suffers from a low convergence rate. In this paper, a new and efficient sinc-convolution algorithm is introduced to solve the two-dimensional D-bar integral equation, overcoming both of these disadvantages and resolving the singularity problem not tackled effectively before. The sinc-convolution method is based on using collocation to replace multidimensional convolution-form integrals, including the two-dimensional D-bar integral equation, by a system of algebraic equations. Separation of variables in the proposed method eliminates the formulation of huge full matrices and therefore reduces the computational complexity drastically. In addition, the sinc-convolution method converges exponentially with a convergence rate of O(exp(-cN)). Simulation results on solving a test electrical impedance tomography problem confirm the efficiency of the proposed sinc-convolution-based algorithm. PMID:25099566

  6. A one-parameter family of transforms, linearizing convolution laws for probability distributions

    NASA Astrophysics Data System (ADS)

    Nica, Alexandru

    1995-03-01

    We study a family of transforms, depending on a parameter q∈[0,1], which interpolate (in an algebraic framework) between a relative (namely −iz(log ℱ(·))′(−iz)) of the logarithm of the Fourier transform for probability distributions, and its free analogue constructed by D. Voiculescu ([16, 17]). The classical case corresponds to q=1, and the free one to q=0. We describe these interpolated transforms: (a) in terms of partitions of finite sets, and their crossings; (b) in terms of weighted shifts; (c) by a matrix equation related to the method of Stieltjes for expanding continued J-fractions as power series. The main result of the paper is that all these descriptions, which extend basic approaches used for q=0 and/or q=1, remain equivalent for arbitrary q∈[0, 1]. We discuss a couple of basic properties of the convolution laws (for probability distributions) which are linearized by the considered family of transforms (these convolution laws interpolate between the usual convolution, at q=1, and the free convolution introduced by Voiculescu, at q=0). In particular, we note that description (c) mentioned in the preceding paragraph gives an insight into why the central limit law for the interpolated convolution has to do with the q-continuous Hermite orthogonal polynomials.

  7. Nonsymmetrized noise in a quantum dot: Interpretation in terms of energy transfer and coherent superposition of scattering paths

    NASA Astrophysics Data System (ADS)

    Zamoum, R.; Lavagna, M.; Crépieux, A.

    2016-06-01

    We calculate the nonsymmetrized current noise in a quantum dot connected to two reservoirs by using the nonequilibrium Green function technique. We show that both the current autocorrelator (inside a single reservoir) and the current cross-correlator (between the two reservoirs) are expressed in terms of transmission amplitude and coefficient through the barriers. We identify the different energy-transfer processes involved in each contribution to the autocorrelator, and we highlight the fact that when there are several physical processes, the contribution results from a coherent superposition of scattering paths. Varying the gate and bias voltages, we discuss the profile of the differential Fano factor in light of recent experiments, and we identify the conditions for having a distinct value for the autocorrelator in the left and right reservoirs.

  8. Nonadiabatic creation of macroscopic superpositions with strongly correlated one-dimensional bosons in a ring trap

    SciTech Connect

    Schenke, C.; Minguzzi, A.; Hekking, F. W. J.

    2011-11-15

    We consider a strongly interacting quasi-one-dimensional Bose gas on a tight ring trap subjected to a localized barrier potential. We explore the possibility of forming a macroscopic superposition of a rotating and a nonrotating state under nonequilibrium conditions, achieved by a sudden quench of the barrier velocity. Using an exact solution for the dynamical evolution in the impenetrable-boson (Tonks-Girardeau) limit, we find an expression for the many-body wave function corresponding to a superposition state. The superposition is formed when the barrier velocity is tuned close to multiples of an integer or half-integer number of Coriolis flux quanta. As a consequence of the strong interactions, we find that (i) the state of the system can be mapped onto a macroscopic superposition of two Fermi spheres rather than two macroscopically occupied single-particle states as in a weakly interacting gas, and (ii) the barrier velocity should be larger than the sound velocity to better discriminate the two components of the superposition.

  9. Attosecond probing of state-resolved ionization and superpositions of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Leone, Stephen

    2016-05-01

    Isolated attosecond pulses in the extreme ultraviolet are used to probe strong-field ionization and to initiate electronic and vibrational superpositions in atoms and small molecules. Few-cycle 800 nm pulses produce strong-field ionization of Xe atoms, and the attosecond probe is used to measure the risetimes of the two spin-orbit states of the ion via the 4d inner-shell transitions to the 5p vacancies in the valence shell. Step-like features in the risetimes due to the subcycles of the 800 nm pulse are observed and compared with theory to elucidate the instantaneous and effective hole dynamics. Isolated attosecond pulses create massive superpositions of electronic states in Ar and nitrogen, as well as vibrational superpositions among electronic states in nitrogen. An 800 nm pulse manipulates the superpositions, and specific subcycle interferences, level shifting, and quantum beats are imprinted onto the attosecond pulse as a function of time delay. Detailed outcomes are compared to theory for measurements of time-dynamic superpositions by attosecond transient absorption. Supported by DOE, NSF, ARO, AFOSR, and DARPA.

  10. A Novel Method of Fabricating Convoluted Shaped Electrode Arrays for Neural and Retinal Prostheses

    PubMed Central

    Bhandari, R.; Negi, S.; Rieth, L.; Normann, R. A.; Solzbacher, F.

    2008-01-01

    A novel fabrication technique has been developed for creating high density (6.25 electrodes/mm2), out-of-plane, high aspect ratio silicon-based convoluted microelectrode arrays for neural and retinal prostheses. The convoluted shape of the surface defined by the tips of the electrodes could complement the curved surfaces of peripheral nerves and the cortex, and in the case of the retina, its spherical geometry. The geometry of these electrode arrays has the potential to facilitate implantation in nerve fascicles and to physically stabilize the array against displacement after insertion. This report presents a unique combination of variable depth dicing and wet isotropic etching for the fabrication of a variety of convoluted neural array geometries. Also, a method of deinsulating the electrode tips using photoresist as a mask and the limitations of this technique on uniformity are discussed. PMID:19122774

  11. Gamma convolution models for self-diffusion coefficient distributions in PGSE NMR

    NASA Astrophysics Data System (ADS)

    Röding, Magnus; Williamson, Nathan H.; Nydén, Magnus

    2015-12-01

    We introduce a closed-form signal attenuation model for pulsed-field gradient spin echo (PGSE) NMR based on self-diffusion coefficient distributions that are convolutions of n gamma distributions, n ⩾ 1 . Gamma convolutions provide a general class of uni-modal distributions that includes the gamma distribution as a special case for n = 1 and the lognormal distribution among others as limit cases when n approaches infinity. We demonstrate the usefulness of the gamma convolution model by simulations and experimental data from samples of poly(vinyl alcohol) and polystyrene, showing that this model provides goodness of fit superior to both the gamma and lognormal distributions and comparable to the common inverse Laplace transform.
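
    The closed form follows from Laplace transforms: the PGSE attenuation is E(b) = ∫ P(D) exp(−bD) dD, and for P(D) a convolution of n gamma densities the transform factorizes into a product of the individual gamma transforms. A sketch with illustrative shape and scale parameters:

        # E(b) for a gamma-convolution distribution of self-diffusion
        # coefficients: product of gamma Laplace transforms (1 + b*theta)**-k.
        import numpy as np

        def attenuation(b, shapes, scales):
            b = np.atleast_1d(b)[:, None]
            return np.prod((1.0 + b * np.asarray(scales))
                           ** (-np.asarray(shapes)), axis=1)

        b = np.linspace(0.0, 5e9, 6)   # s/m**2
        E = attenuation(b, shapes=[2.0, 3.0], scales=[2e-10, 5e-11])  # m**2/s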

  12. Weighing classes and streams: toward better methods for two-stream convolutional networks

    NASA Astrophysics Data System (ADS)

    Kim, Hoseong; Uh, Youngjung; Ko, Seunghyeon; Byun, Hyeran

    2016-05-01

    The emergence of two-stream convolutional networks has boosted the performance of action recognition by concurrently extracting appearance and motion features from videos. However, most existing approaches simply combine the features by averaging the prediction scores from each recognition stream, without realizing that some classes favor greater weight for appearance than motion. We propose a fusion method for two-stream convolutional networks for action recognition by introducing objective functions for the weights under two assumptions: (1) the scores from the two streams should not be weighted equally and (2) the weights vary across classes. We evaluate our method by extensive experiments on the UCF101, HMDB51, and Hollywood2 datasets in the context of action recognition. The results show that the proposed approach outperforms the standard two-stream convolutional networks by a large margin (5.7%, 4.8%, and 3.6%) on the UCF101, HMDB51, and Hollywood2 datasets, respectively.
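
    The fusion step can be sketched as a per-class convex combination of stream scores (random stand-ins below; the paper obtains the weights by optimizing its objective functions):

        # Per-class weighted fusion of two-stream scores (illustrative).
        import numpy as np

        rng = np.random.default_rng(0)
        n_videos, n_classes = 4, 101               # e.g., UCF101
        spatial = rng.random((n_videos, n_classes))
        temporal = rng.random((n_videos, n_classes))

        w = rng.random(n_classes)                  # learned per-class weights
        fused = w * spatial + (1.0 - w) * temporal
        pred = fused.argmax(axis=1)                # final action labels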

  13. Improving Ship Detection with Polarimetric SAR based on Convolution between Co-polarization Channels

    PubMed Central

    Li, Haiyan; He, Yijun; Wang, Wenguang

    2009-01-01

    The convolution between co-polarization amplitude-only data is studied to improve ship detection performance. The different statistical behaviors of ships and the surrounding ocean are characterized by a two-dimensional convolution function (2D-CF) between different polarization channels. The convolution value of the ocean decreases relative to the initial data, while that of ships increases. Therefore the contrast of ships to ocean is increased. The opposite variation trends of ocean and ships make it possible to distinguish the high intensity ocean clutter from ships' signatures. The new criterion can generally avoid the mistaken detections of a constant false alarm rate detector. Our new ship detector is compared with other polarimetric approaches, and the results confirm the robustness of the proposed method. PMID:22399964

  14. Punctured Parallel and Serial Concatenated Convolutional Codes for BPSK/QPSK Channels

    NASA Technical Reports Server (NTRS)

    Acikel, Omer Fatih

    1999-01-01

    As available bandwidth for communication applications becomes scarce, bandwidth-efficient modulation and coding schemes become ever more important. Since their discovery in 1993, turbo codes (parallel concatenated convolutional codes) have been the center of attention in the coding community because of their bit error rate performance near the Shannon limit. Serial concatenated convolutional codes have also been shown to be as powerful as turbo codes. In this dissertation, we introduce algorithms for designing bandwidth-efficient rate r = k/(k+1), k = 2, 3, ..., 16, parallel and rate 3/4, 7/8, and 15/16 serial concatenated convolutional codes via puncturing for BPSK/QPSK (Binary Phase Shift Keying/Quadrature Phase Shift Keying) channels. Both parallel and serial concatenated convolutional codes initially have a steep bit error rate versus signal-to-noise ratio slope (called the "cliff region"). However, this steep slope changes to a moderate slope with increasing signal-to-noise ratio, where the slope is characterized by the weight spectrum of the code. The region after the cliff region is called the "error rate floor", which dominates the behavior of these codes at moderate to high signal-to-noise ratios. Our goal is to design high rate parallel and serial concatenated convolutional codes while minimizing the error rate floor effect. The design algorithm includes an interleaver enhancement procedure and finds the polynomial sets (only for parallel concatenated convolutional codes) and the puncturing schemes that achieve the lowest bit error rate performance around the floor for the code rates of interest.
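
    Puncturing itself is a simple periodic deletion of coded bits; for example (an illustrative pattern, not necessarily one found by the dissertation's design search):

        # Raise a rate-1/2 mother code to rate 3/4 by puncturing: per 3 info
        # bits, transmit 4 of the 6 coded bits (1 = send, 0 = puncture).
        import numpy as np

        pattern = np.array([[1, 1, 0],     # first encoder output stream
                            [1, 0, 1]])    # second encoder output stream

        coded = np.arange(24).reshape(2, -1)       # stand-in coded streams
        mask = np.tile(pattern, (1, coded.shape[1] // 3)).astype(bool)
        transmitted = coded[mask]                  # 16 of 24 bits survive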

  15. Fast Electron Correlation Methods for Molecular Clusters without Basis Set Superposition Errors

    SciTech Connect

    Kamiya, Muneaki; Hirata, So; Valiev, Marat

    2008-02-19

    Two critical extensions to our fast, accurate, and easy-to-implement binary or ternary interaction method for weakly-interacting molecular clusters [Hirata et al. Mol. Phys. 103, 2255 (2005)] have been proposed, implemented, and applied to water hexamers, hydrogen fluoride chains and rings, and neutral and zwitterionic glycine–water clusters, with excellent results in an initial performance assessment. Our original method included up to two- or three-body Coulomb, exchange, and correlation energies exactly and higher-order Coulomb energies in the dipole–dipole approximation. In this work, the dipole moments are replaced by atom-centered point charges determined so that they reproduce the electrostatic potentials of the cluster subunits as closely as possible and also self-consistently with one another in the cluster environment. They have been shown to lead to dramatic improvement in the description of short-range electrostatic potentials not only of large, charge-separated subunits like zwitterionic glycine but also of small subunits. Furthermore, basis set superposition errors (BSSE), known to plague direct evaluation of weak interactions, have been eliminated by combining the Valiron–Mayer function counterpoise (VMFC) correction with our binary or ternary interaction method in an economical fashion (quadratic scaling n² with respect to the number of subunits n when n is small and linear scaling when n is large). A new variant of VMFC has also been proposed in which three-body and all higher-order Coulomb effects on BSSE are estimated approximately. The BSSE-corrected ternary interaction method with atom-centered point charges reproduces the VMFC-corrected results of conventional electron correlation calculations within 0.1 kcal/mol. The proposed method is significantly more accurate and also more efficient than conventional correlation methods uncorrected for BSSE.

  16. Generalization of susceptibility of RF systems through far-field pattern superposition

    NASA Astrophysics Data System (ADS)

    Verdin, B.; Debroux, P.

    2015-05-01

    The purpose of this paper is to perform an analysis of RF (Radio Frequency) communication systems in a large electromagnetic environment to identify their susceptibility to jamming systems. We propose a new method that incorporates the use of reciprocity and superposition of the far-field radiation pattern of the RF system and the far-field radiation pattern of the jammer system. By using this method we can find the susceptibility pattern of RF systems with respect to the elevation and azimuth angles. A scenario was modeled with HFSS (High Frequency Structural Simulator) in which the radiation pattern of the jammer was simulated as a cylindrical horn antenna. The RF jamming entry point used was a half-wave dipole inside a cavity with apertures that approximates a land-mobile vehicle; the dipole approximates a leaky coax cable. Because electrically large electromagnetic environments cannot be quickly simulated using HFSS's finite element method (FEM), the combination of the transmit antenna radiation pattern (horn) superimposed onto the receive antenna pattern (dipole) was performed in MATLAB. A 2D or 3D susceptibility pattern is obtained with respect to the azimuth and elevation angles. In addition, by incorporating the jamming equation into this algorithm, the received jamming power as a function of distance at the RF receiver, Pr(Φr, θr), can be calculated. The received power depends on the antenna properties, the propagation factor, and system losses. Test cases include: a cavity with four apertures, a cavity above an infinite ground plane, and a land-mobile vehicle approximation. By using the proposed algorithm, a susceptibility analysis of RF systems in electromagnetic environments can be performed.
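
    The "jamming equation" referred to here has the familiar Friis link-budget form. A minimal sketch follows, in which the frequency, transmit power, losses, and both gain patterns are analytic placeholders for the HFSS-simulated 3D patterns superimposed in the paper:

      import numpy as np

      # Free-space link-budget sketch for received jamming power vs. angle/range.
      c = 3e8
      f = 2.4e9                      # assumed frequency [Hz]
      lam = c / f

      def g_horn(theta):             # placeholder transmit (jammer) pattern
          return 10 * np.cos(theta) ** 2 * (np.cos(theta) > 0)

      def g_dipole(theta):           # placeholder receive (entry-point) pattern
          eps = 1e-9
          return 1.64 * (np.cos(np.pi / 2 * np.cos(theta)) / (np.sin(theta) + eps)) ** 2

      def p_received(pt, theta_t, theta_r, d, loss=1.0):
          """P_r = P_t * G_t * G_r * lambda^2 / ((4 pi d)^2 * L)."""
          return pt * g_horn(theta_t) * g_dipole(theta_r) * lam ** 2 / ((4 * np.pi * d) ** 2 * loss)

      theta = np.linspace(0.01, np.pi - 0.01, 181)
      pr = p_received(pt=100.0, theta_t=theta, theta_r=theta, d=500.0)
      print("peak received jamming power [W]:", pr.max())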

  17. Oblique superposition of two elliptically polarized lightwaves using geometric algebra: is energy-momentum conserved?

    PubMed

    Sze, Michelle Wynne C; Sugon, Quirino M; McNamara, Daniel J

    2010-11-01

    In this paper, we use Clifford (geometric) algebra Cl(3,0) to verify whether electromagnetic energy-momentum density is still conserved for oblique superposition of two elliptically polarized plane waves with the same frequency. We show that energy-momentum conservation is valid at any time only for the superposition of two counter-propagating elliptically polarized plane waves. We show that the time-average energy-momentum of the superposition of two circularly polarized waves with opposite handedness is conserved regardless of the propagation directions of the waves. Finally, we show that the resulting momentum density of the superposed waves generally has a vector component perpendicular to the momentum densities of the individual waves. PMID:21045912

  18. Mesoscopic Superposition States Generated by Synthetic Spin-Orbit Interaction in Fock-State Lattices

    NASA Astrophysics Data System (ADS)

    Wang, Da-Wei; Cai, Han; Liu, Ren-Bao; Scully, Marlan O.

    2016-06-01

    Mesoscopic superposition states of photons can be prepared in three cavities interacting with the same two-level atom. By periodically modulating the three cavity frequencies around the transition frequency of the atom with a 2π/3 phase difference, the time reversal symmetry is broken and an optical circulator is generated with chiralities depending on the quantum state of the atom. A superposition of the atomic states can guide photons from one cavity to a mesoscopic superposition of the other two cavities. The physics can be understood in a finite spin-orbit-coupled Fock-state lattice where the atom and the cavities carry the spin and the orbit degrees of freedom, respectively. This scheme can be realized in circuit QED architectures and provides a new platform for exploring quantum information and topological physics in novel lattices.

  19. Mesoscopic Superposition States Generated by Synthetic Spin-Orbit Interaction in Fock-State Lattices.

    PubMed

    Wang, Da-Wei; Cai, Han; Liu, Ren-Bao; Scully, Marlan O

    2016-06-01

    Mesoscopic superposition states of photons can be prepared in three cavities interacting with the same two-level atom. By periodically modulating the three cavity frequencies around the transition frequency of the atom with a 2π/3 phase difference, the time reversal symmetry is broken and an optical circulator is generated with chiralities depending on the quantum state of the atom. A superposition of the atomic states can guide photons from one cavity to a mesoscopic superposition of the other two cavities. The physics can be understood in a finite spin-orbit-coupled Fock-state lattice where the atom and the cavities carry the spin and the orbit degrees of freedom, respectively. This scheme can be realized in circuit QED architectures and provides a new platform for exploring quantum information and topological physics in novel lattices. PMID:27314706

  20. Neural networks learn highly selective representations in order to overcome the superposition catastrophe.

    PubMed

    Bowers, Jeffrey S; Vankov, Ivan I; Damian, Markus F; Davis, Colin J

    2014-04-01

    A key insight from 50 years of neurophysiology is that some neurons in cortex respond to information in a highly selective manner. Why is this? We argue that selective representations support the coactivation of multiple "things" (e.g., words, objects, faces) in short-term memory, whereas nonselective codes are often unsuitable for this purpose. That is, the coactivation of nonselective codes often results in a blend pattern that is ambiguous: the so-called superposition catastrophe. We show that a recurrent parallel distributed processing network trained to code for multiple words at the same time over the same set of units learns localist letter and word codes, and the number of localist codes scales with the level of the superposition. Given that many cortical systems are required to coactivate multiple things in short-term memory, we suggest that the superposition constraint plays a role in explaining the existence of selective codes in cortex. PMID:24564411

  1. A Particle Multi-Target Tracker for Superpositional Measurements Using Labeled Random Finite Sets

    NASA Astrophysics Data System (ADS)

    Papi, Francesco; Kim, Du Yong

    2015-08-01

    In this paper we present a general solution for multi-target tracking with superpositional measurements. Measurements that are functions of the sum of the contributions of the targets present in the surveillance area are called superpositional measurements. We base our modelling on Labeled Random Finite Set (RFS) in order to jointly estimate the number of targets and their trajectories. This modelling leads to a labeled version of Mahler's multi-target Bayes filter. However, a straightforward implementation of this tracker using Sequential Monte Carlo (SMC) methods is not feasible due to the difficulties of sampling in high dimensional spaces. We propose an efficient multi-target sampling strategy based on Superpositional Approximate CPHD (SA-CPHD) filter and the recently introduced Labeled Multi-Bernoulli (LMB) and Vo-Vo densities. The applicability of the proposed approach is verified through simulation in a challenging radar application with closely spaced targets and low signal-to-noise ratio.

  2. Optical Synthesis of Large-Amplitude Squeezed Coherent-State Superpositions with Minimal Resources.

    PubMed

    Huang, K; Le Jeannic, H; Ruaudel, J; Verma, V B; Shaw, M D; Marsili, F; Nam, S W; Wu, E; Zeng, H; Jeong, Y-C; Filip, R; Morin, O; Laurat, J

    2015-07-10

    We propose and experimentally realize a novel, versatile protocol that allows the quantum state engineering of heralded optical coherent-state superpositions. This scheme relies on a two-mode squeezed state, linear mixing, and an n-photon detection. It makes optimal use of expensive non-Gaussian resources, employing them to build up only the key non-Gaussian part of the targeted state. In the experimental case of a two-photon detection based on high-efficiency superconducting nanowire single-photon detectors, the freely propagating state exhibits a 67% fidelity with a squeezed even coherent-state superposition with a size |α|^2 = 3. The demonstrated procedure and the achieved rate will facilitate the use of such superpositions in subsequent protocols, including fundamental tests and optical hybrid quantum information implementations. PMID:26207468

  3. Quantum tic-tac-toe: A teaching metaphor for superposition in quantum mechanics

    NASA Astrophysics Data System (ADS)

    Goff, Allan

    2006-11-01

    Quantum tic-tac-toe was developed as a metaphor for the counterintuitive nature of superposition exhibited by quantum systems. It offers a way of introducing quantum physics without advanced mathematics, provides a conceptual foundation for understanding the meaning of quantum mechanics, and is fun to play. A single superposition rule is added to the child's game of classical tic-tac-toe. Each move consists of a pair of marks subscripted by the number of the move ("spooky" marks) that must be placed in different squares. When a measurement occurs, one spooky mark becomes real and the other disappears. Quantum tic-tac-toe illustrates a number of quantum principles including states, superposition, collapse, nonlocality, entanglement, the correspondence principle, interference, and decoherence. The game can be played on paper or on a white board. A Web-based version provides a refereed playing board to facilitate the mechanics of play, making it ideal for classrooms with a computer projector.

  4. Towards quantum superposition of a levitated nanodiamond with a NV center

    NASA Astrophysics Data System (ADS)

    Li, Tongcang

    2015-05-01

    Creating large Schrödinger's cat states with massive objects is one of the most challenging goals in quantum mechanics. We have previously achieved an important step toward this goal by cooling the center-of-mass motion of a levitated microsphere from room temperature to millikelvin temperatures with feedback cooling. Generating spatial quantum superposition states with an optical cavity, however, requires a very strong quadratic coupling that is difficult to achieve. We proposed to optically trap a nanodiamond with a nitrogen-vacancy (NV) center in vacuum, and to generate large spatial superposition states using the NV spin-optomechanical coupling in a strong magnetic gradient field. The large spatial superposition states can be used to study objective collapse theories of quantum mechanics. We have optically trapped nanodiamonds in air and are working towards this goal.

  5. Monte Carlo dosimetric characterization of the Cs-137 selectron/LDR source: evaluation of applicator attenuation and superposition approximation effects.

    PubMed

    Pérez-Calatayud, J; Granero, D; Ballester, F; Puchades, V; Casal, E

    2004-03-01

    The purpose of this study is to calculate the dose rate distribution for the Amersham Cs-137 pellet source used in brachytherapy with the Selectron low-dose-rate remote afterloading system in gynaecological applications, using the Monte Carlo code GEANT4. The absolute dose rate distribution for the pellet source was obtained and presented as a one-dimensional absolute dose rate table as well as in the Task Group 43 dose-calculation formalism. In this study, excellent agreement was found between the point source theoretical model using fitted polynomial values and Monte Carlo calculations of the dose rate distribution for the pellet source. A comparison study was also made between the dose rate distribution obtained from a complete Monte Carlo simulation (Cs-137 pellet sources + remote afterloading system plastic guide tube + gynaecological applicator) and that calculated by applying the superposition principle to Monte Carlo data of the individual pellet sources. The data were obtained for a portion of a uterine tandem with typical source-train configurations. Significant differences with a strong dependence on polar angle have been found, which must be kept in mind for clinical dosimetry. PMID:15070245

  6. Quantum decoherence time scales for ionic superposition states in ion channels

    NASA Astrophysics Data System (ADS)

    Salari, V.; Moradi, N.; Sajadi, M.; Fazileh, F.; Shahbazi, F.

    2015-03-01

    There are many controversial and challenging discussions about quantum effects in microscopic structures in neurons of the brain and their role in cognitive processing. In this paper, we focus on a small, nanoscale part of ion channels called the "selectivity filter," which plays a key role in the operation of an ion channel. Our results for superposition states of potassium ions indicate that decoherence times are of the order of picoseconds. This decoherence time is not long enough for cognitive processing in the brain; however, it may be adequate for quantum superposition states of ions in the filter to leave their quantum traces on the selectivity filter and action potentials.

  7. Geometric measure of pairwise quantum discord for superpositions of multipartite generalized coherent states

    NASA Astrophysics Data System (ADS)

    Daoud, M.; Ahl Laamara, R.

    2012-07-01

    We give explicit expressions for the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of the geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl-Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger-Horne-Zeilinger states.

  8. Coherent Scattering of a Multiphoton Quantum Superposition by a Mirror BEC

    SciTech Connect

    De Martini, Francesco; Sciarrino, Fabio; Vitelli, Chiara; Cataliotti, Francesco S.

    2010-02-05

    We present a proposal for an experiment in which the multiphoton quantum superposition consisting of N ≈ 10^5 particles generated by a quantum-injected optical parametric amplifier, seeded by a single photon belonging to an Einstein-Podolsky-Rosen entangled pair, is made to interact with a mirror-Bose-Einstein condensate (BEC) shaped as a Bragg interference structure. The overall process will realize a macroscopic quantum superposition involving a microscopic single-photon state of polarization entangled with the coherent macroscopic transfer of momentum to the BEC structure, acting in spacelike separated distant places.

  9. Intensity improvement in the attosecond pulse generation with the coherent superposition initial state

    NASA Astrophysics Data System (ADS)

    Feng, Liqiang; Chu, Tianshu

    2012-03-01

    We investigate the effect of a coherent superposition initial state and find that when the initial active electron state is prepared in a coherent superposition of the 1s and 2s states of the He+ ion and the chirp parameter of the fundamental field in the two-color scheme is chosen to be β=0.3, the harmonic cutoff energy is remarkably extended and the harmonic yield is enhanced by at least 6 orders of magnitude compared with the case of the single 1s ground state with a chirp-free pulse. An ultrabroad supercontinuum with a 458 eV bandwidth is formed, directly producing an intense isolated 34 as pulse.

  10. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
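
    The central observation, that measured profiles are the true profiles convolved with the detector response, can be sketched as below: blur a modeled edge with the same assumed Gaussian chamber response the measurement implicitly contains, then tune the model's penumbra parameter until the convolved model matches the measurement. The kernel width and the erf edge model are assumptions for illustration, not the TPS beam model:

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.special import erf

      x = np.linspace(-30, 30, 601)            # off-axis position [mm]

      def detector_kernel(sigma_mm=3.0):       # assumed chamber-sized smearing
          k = np.exp(-0.5 * (x / sigma_mm) ** 2)
          return k / k.sum()

      def edge_profile(sigma_pen):             # erf-shaped field edge at x = 10 mm
          return 0.5 * (1 - erf((x - 10.0) / (np.sqrt(2) * sigma_pen)))

      # "Measured" profile: true penumbra 2 mm, blurred by the chamber response.
      measured = np.convolve(edge_profile(2.0), detector_kernel(), mode="same")

      def objective(sigma_pen):
          # Convolve the model with the SAME response before comparing,
          # so both curves carry the identical volume averaging effect.
          model = np.convolve(edge_profile(sigma_pen), detector_kernel(), mode="same")
          return np.sum((model - measured) ** 2)

      fit = minimize_scalar(objective, bounds=(0.5, 8.0), method="bounded")
      print("recovered penumbra sigma [mm]:", round(fit.x, 3))  # ~2.0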

  11. Digital Tomosynthesis System Geometry Analysis Using Convolution-Based Blur-and-Add (BAA) Model.

    PubMed

    Wu, Meng; Yoon, Sungwon; Solomon, Edward G; Star-Lack, Josh; Pelc, Norbert; Fahrig, Rebecca

    2016-01-01

    Digital tomosynthesis is a three-dimensional imaging technique with a lower radiation dose than computed tomography (CT). Due to the missing data in tomosynthesis systems, out-of-plane structures in the depth direction cannot be completely removed by the reconstruction algorithms. In this work, we analyzed the impulse responses of common tomosynthesis systems on a plane-to-plane basis and proposed a fast and accurate convolution-based blur-and-add (BAA) model to simulate the backprojected images. In addition, the analysis formalism describing the impulse response of out-of-plane structures can be generalized to both rotating and parallel gantries. We implemented a ray tracing forward projection and backprojection (ray-based model) algorithm and the convolution-based BAA model to simulate the shift-and-add (backproject) tomosynthesis reconstructions. The convolution-based BAA model with proper geometry distortion correction provides reasonably accurate estimates of the tomosynthesis reconstruction. A numerical comparison indicates that the simulated images using the two models differ by less than 6% in terms of the root-mean-squared error. This convolution-based BAA model can be used in efficient system geometry analysis, reconstruction algorithm design, out-of-plane artifacts suppression, and CT-tomosynthesis registration. PMID:26208308
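
    The BAA idea can be sketched in a few lines: approximate each reconstructed plane as the in-focus plane plus blurred copies of the out-of-plane slices, with blur growing with plane separation. Everything below (1D parallel geometry, blur widths, box kernel) is an illustrative stand-in for the paper's geometry-derived plane-to-plane impulse responses:

      import numpy as np
      from scipy.ndimage import uniform_filter1d

      # Blur-and-add (BAA) sketch for a shift-and-add tomosynthesis slice.
      nz, nx = 5, 256
      phantom = np.zeros((nz, nx))
      phantom[2, 128] = 1.0                    # in-plane impulse
      phantom[4, 64] = 1.0                     # out-of-plane impulse

      def blur_width(dz, tomo_angle_px=12):    # blur grows with plane separation
          return max(1, int(2 * tomo_angle_px * abs(dz)) | 1)   # force odd size

      def baa_reconstruct(vol, z_focus):
          out = np.zeros(vol.shape[1])
          for z in range(vol.shape[0]):        # blur each plane, then add
              out += uniform_filter1d(vol[z], size=blur_width(z - z_focus))
          return out

      slice2 = baa_reconstruct(phantom, z_focus=2)
      print("in-plane peak:", slice2[128].round(3),
            "out-of-plane peak:", slice2[64].round(3))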

  12. Artificial convolution neural network techniques and applications for lung nodule detection.

    PubMed

    Lo, S B; Lou, S A; Lin, J S; Freedman, M T; Chien, M V; Mun, S K

    1995-01-01

    We have developed a double-matching method and an artificial visual neural network technique for lung nodule detection. This neural network technique is generally applicable to the recognition of medical image patterns in gray-scale imaging. The structure of the artificial neural net is a simplified network structure of human vision. The fundamental operation of the artificial neural network is local two-dimensional convolution rather than full connection with weighted multiplication. Weighting coefficients of the convolution kernels are formed by the neural network through backpropagated training. In addition, we modeled radiologists' reading procedures in order to instruct the artificial neural network to recognize the image patterns predefined and those of interest to experts in radiology. We have tested this method for lung nodule detection. The performance studies have shown the potential use of this technique in a clinical setting. This program first performed an initial nodule search with high sensitivity in detecting round objects using a sphere template double-matching technique. The artificial convolution neural network acted as a final classifier to determine whether the suspected image block contains a lung nodule. The total processing time for the automatic detection of lung nodules using both prescan and convolution neural network evaluation was about 15 seconds on a DEC Alpha workstation. PMID:18215875

  13. A generalized recursive convolution method for time-domain propagation in porous media.

    PubMed

    Dragna, Didier; Pineau, Pierre; Blanc-Benon, Philippe

    2015-08-01

    An efficient numerical method, referred to as the auxiliary differential equation (ADE) method, is proposed to compute convolutions between relaxation functions and acoustic variables arising in sound propagation equations in porous media. For this purpose, the relaxation functions are approximated in the frequency domain by rational functions. The time variation of the convolution is thus governed by first-order differential equations which can be straightforwardly solved. The accuracy of the method is first investigated and compared to that of recursive convolution methods. It is shown that, while recursive convolution methods are first- or second-order accurate in time, the ADE method does not introduce any additional error. The ADE method is then applied to outdoor sound propagation using the equations proposed by Wilson et al. for the ground [(2007). Appl. Acoust. 68, 173-200]. A first one-dimensional case is considered, showing that only five poles are necessary to accurately approximate the relaxation functions for typical applications. Finally, the ADE method is used to compute sound propagation in a three-dimensional geometry over an absorbing ground. Results obtained with Wilson's equations are compared to those obtained with Zwikker and Kosten's equations and with an impedance surface for different flow resistivities. PMID:26328719
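
    The mechanism can be sketched compactly: once the relaxation kernel is fitted by a sum of exponential (pole) terms, each term's contribution to the convolution satisfies a first-order ODE and is advanced in O(1) work per time step. The pole amplitudes and decay rates below are made-up illustrative values, not Wilson's fitted coefficients:

      import numpy as np

      # ADE-style recursive convolution: for k(t) ~= sum_i a_i exp(-lam_i t),
      # each term y_i of the convolution obeys dy_i/dt = -lam_i y_i + a_i x(t).
      a = np.array([0.7, 0.3])                 # assumed pole amplitudes
      lam = np.array([50.0, 500.0])            # assumed pole decay rates [1/s]
      dt = 1e-4
      t = np.arange(0, 0.05, dt)
      x = np.sin(2 * np.pi * 60 * t)           # input signal

      y = np.zeros_like(a)                     # per-pole accumulators
      conv = np.empty_like(t)
      for n, xn in enumerate(x):
          # exact exponential update over one step (piecewise-constant input)
          y = y * np.exp(-lam * dt) + a * (1 - np.exp(-lam * dt)) / lam * xn
          conv[n] = y.sum()

      # Brute-force check against direct discrete convolution of the kernel
      kernel = (a[:, None] * np.exp(-lam[:, None] * t)).sum(axis=0)
      direct = np.convolve(x, kernel)[: t.size] * dt
      print("max deviation from direct convolution:", np.abs(conv - direct).max())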

  14. Convolutional neural network based sensor fusion for forward looking ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Rayn; Crosskey, Miles; Chen, David; Walenz, Brett; Morton, Kenneth

    2016-05-01

    Forward looking ground penetrating radar (FLGPR) is an alternative buried threat sensing technology designed to offer additional standoff compared to downward looking GPR systems. Due to additional flexibility in antenna configurations, FLGPR systems can accommodate multiple sensor modalities on the same platform that can provide complementary information. The different sensor modalities present challenges both in developing informative feature extraction methods and in fusing sensor information in order to obtain the best discrimination performance. This work uses convolutional neural networks to jointly learn features across two sensor modalities and fuse the information in order to distinguish between target and non-target regions. This joint optimization is possible by modifying the traditional image-based convolutional neural network configuration to extract data from multiple sources. The filters generated by this process create a learned feature extraction method that is optimized to provide the best discrimination performance when fused. This paper presents the results of applying convolutional neural networks and compares these results to the use of fusion performed with a linear classifier. This paper also compares performance between convolutional neural network architectures to show the benefit of fusing the sensor information in different ways.

  15. The VLSI design of an error-trellis syndrome decoder for certain convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.

    1986-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.

  16. The VLSI design of error-trellis syndrome decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.

    1985-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.

  17. Experimental Post-Hole Convolute Plasma Studies on a 1-MA Linear Transformer Driver (LTD)*

    NASA Astrophysics Data System (ADS)

    Gomez, M. R.; Gilgenbach, R. M.; French, D. M.; Zier, J. C.; Lau, Y. Y.; Cuneo, M. E.; Lopez, M. R.; Mazarakis, M. G.

    2009-11-01

    Post-hole convolutes are used to combine current from several parallel transmission lines, such that there is a low-inductance path to a single anode-cathode gap at the load. Experimental observations of the post-hole convolute are difficult to make on large systems, such as the Z-Machine at Sandia National Laboratories. A single post-hole convolute has been designed as the load for the 1 MA LTD at the U. of Michigan. The geometry of the design allows diagnostic access to the post-hole region. The goal of these experiments is to monitor plasma formation in the convolute and to measure the current losses that result from that plasma. Diagnostics under development for this experiment include B-dots for current measurement, optical spectroscopy for plasma composition, temperature and density measurements, and pinhole and laser diagnostics for imaging plasma dynamics. Experimental results will be compared to Particle-In-Cell simulations of this system using MAGIC PIC.* Research supported by Sandia National Labs subcontracts to UM. MRG sponsored by SSGF through NNSA and JZ sponsored by NPSC through DOE. Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the US DOE's NNSA under contract DE-AC04-94AL85000.

  18. Convolutional FEC design considerations for data transmission over PSK satellite channels

    NASA Astrophysics Data System (ADS)

    Garrison, G. J.; Wong, V. C.

    Simulation results are provided for rate R = 1/2 convolutional error correcting codes suited to data transmission over BPSK, Gray-coded QPSK, and OQPSK channels. The burst generation mechanism resulting from differential encoding/decoding is analyzed in terms of the impairment to code performance, and offsetting internal/external interleaving techniques are described.

  19. Application of time-temperature-stress superposition on creep of wood-plastic composites

    NASA Astrophysics Data System (ADS)

    Chang, Feng-Cheng; Lam, Frank; Kadla, John F.

    2013-08-01

    The time-temperature-stress superposition principle (TTSSP) has been widely applied in studies of the viscoelastic properties of materials. It involves shifting curves measured at various conditions to construct master curves. To extend the application of this principle, a temperature-stress hybrid shift factor and a modified Williams-Landel-Ferry (WLF) equation that incorporates variables of stress and temperature for the shift-factor fitting were studied. A wood-plastic composite (WPC) was selected as the test subject for a series of short-term creep tests. The results indicate that the WPC was a rheologically simple material and merely a horizontal shift was needed for the time-temperature superposition, whereas vertical shifting would be needed for time-stress superposition. The shift factor was independent of the stress for horizontal shifts in time-temperature superposition. In addition, the temperature- and stress-shift factors used to construct master curves were well fitted with the WLF equation. Furthermore, the parameters of the modified WLF equation were also successfully calibrated. The application of this method and equation can be extended to curve shifting that involves the effects of both temperature and stress simultaneously.
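
    A minimal sketch of the horizontal-shift step, assuming the classical WLF form with "universal" constants rather than the calibrated values reported in the study:

      import numpy as np

      # WLF shift factors for building a creep master curve by
      # time-temperature superposition. C1, C2, T_ref are illustrative.
      C1, C2, T_ref = 17.44, 51.6, 25.0        # assumed constants, T in deg C

      def log_aT(T):
          """WLF: log10 a_T = -C1 (T - T_ref) / (C2 + T - T_ref)."""
          return -C1 * (T - T_ref) / (C2 + T - T_ref)

      # Shift isothermal creep curves onto the reference temperature:
      t = np.logspace(0, 3, 50)                # test time [s]
      for T in (25.0, 40.0, 55.0):
          t_reduced = t / 10 ** log_aT(T)      # reduced time t / a_T
          print(f"T={T:4.1f} C  log10(a_T)={log_aT(T):+.2f}  "
                f"master-curve span: {t_reduced[0]:.2e}..{t_reduced[-1]:.2e} s")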

  20. Drawings and Ideas of Physics Teacher Candidates Relating to the Superposition Principle on a Continuous Rope

    ERIC Educational Resources Information Center

    Sengoren, Serap Kaya; Tanel, Rabia; Kavcar, Nevzat

    2006-01-01

    The superposition principle is used to explain many phenomena in physics. Incomplete knowledge about this topic at a basic level leads to physics students having problems in the future. As long as prospective physics teachers have difficulties in the subject, it is inevitable that high school students will have the same difficulties. The aim of…

  1. Using musical intervals to demonstrate superposition of waves and Fourier analysis

    NASA Astrophysics Data System (ADS)

    LoPresto, Michael C.

    2013-09-01

    What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.

  2. Using Musical Intervals to Demonstrate Superposition of Waves and Fourier Analysis

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2013-01-01

    What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.

  3. Precise two-dimensional D-bar reconstructions of human chest and phantom tank via sinc-convolution algorithm

    PubMed Central

    2012-01-01

    Background: Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of human organs such as the lungs, heart, brain and breast. Each practical EIT reconstruction algorithm should be efficient enough in terms of convergence rate and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in the D-bar framework. Methods: At the first step, synthetic and experimental data were used to compute an intermediate object named the scattering transform. Next, this object was used in a two-dimensional integral equation which was precisely and rapidly solved via the sinc-convolution algorithm to find the square root of the conductivity for each pixel of the image. For the purpose of comparison, multigrid and NOSER algorithms were implemented under a similar setting. The quality of reconstructions of synthetic models was tested against GREIT approved quality measures. To validate the simulation results, reconstructions of a phantom chest and a human lung were used. Results: Evaluation of synthetic reconstructions shows that the quality of sinc-convolution reconstructions is considerably better than that of each of its competitors in terms of amplitude response, position error, ringing, resolution and shape-deformation. In addition, the results confirm near-exponential and linear convergence rates for sinc-convolution and multigrid, respectively. Moreover, the least degree of relative errors and the most degree of truth were found in sinc-convolution reconstructions from experimental phantom data. Reconstructions of clinical lung data show that the related physiological effect is well recovered by the sinc-convolution algorithm. Conclusions: Parametric evaluation demonstrates the efficiency of sinc-convolution to reconstruct accurate conductivity images from experimental data. Excellent results in phantom and clinical reconstructions using sinc-convolution

  4. Convolution-Based Forced Detection Monte Carlo Simulation Incorporating Septal Penetration Modeling

    PubMed Central

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2010-01-01

    In SPECT imaging, photon transport effects such as scatter, attenuation and septal penetration can negatively affect the quality of the reconstructed image and the accuracy of quantitation estimation. As such, it is useful to model these effects as carefully as possible during the image reconstruction process. Many of these effects can be included in Monte Carlo (MC) based image reconstruction using convolution-based forced detection (CFD). With CFD Monte Carlo (CFD-MC), often only the geometric response of the collimator is modeled, thereby making the assumption that the collimator materials are thick enough to completely absorb photons. However, in order to retain high collimator sensitivity and high spatial resolution, it is required that the septa be as thin as possible, thus resulting in a significant amount of septal penetration for high energy radionuclides. A method for modeling the effects of both collimator septal penetration and geometric response using ray tracing (RT) techniques has been developed and included in a CFD-MC program. Two look-up tables are pre-calculated based on the specific collimator parameters and radionuclides, and subsequently incorporated into the SIMIND MC program. One table consists of the cumulative septal thickness between any point on the collimator and the center location of the collimator. The other table presents the resultant collimator response for a point source at different distances from the collimator and for various energies. A series of RT simulations has been compared to experimental data for different radionuclides and collimators. Results of the RT technique match experimental data of collimator response very well, producing correlation coefficients higher than 0.995. Reasonable values of the parameters in the lookup table and computation speed are discussed in order to achieve high accuracy while using minimal storage space for the look-up tables. In order to achieve noise-free projection images from MC, it

  5. SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems

    SciTech Connect

    Xiao, K; Chen, D. Z; Hu, X. S; Zhou, B

    2014-06-01

    Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing patterns in dose deposition, which leads to several memory efficiency issues on GPU such as un-coalesced writing and atomic operations. We propose a new method to alleviate such issues on CPU-GPU heterogeneous systems, which achieves overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition is to accumulate dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer on GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Compared with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained a 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance for MCCS on the CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. This research was supported in part by NSF under Grants CCF
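
    The three-step deposition idea can be mimicked in NumPy terms as below; the random (voxel, dose) records stand in for a real ray tracer's output, and the buffer accumulation corresponds to step (3) of the abstract:

      import numpy as np

      rng = np.random.default_rng(0)
      n_voxels, n_deposits = 64**3, 1_000_000

      # Step 1 (GPU in the paper): coalesced, atomic-free writes to a buffer
      # of (voxel_index, dose) records instead of scatter-adds into the volume.
      idx_buffer = rng.integers(0, n_voxels, size=n_deposits)
      dose_buffer = rng.random(n_deposits).astype(np.float32)

      # Step 2: buffer transfer (GPU -> CPU over DMA in the paper; a no-op here).

      # Step 3 (CPU): accumulate the buffer into the dose volume.
      dose = np.zeros(n_voxels, dtype=np.float64)
      np.add.at(dose, idx_buffer, dose_buffer)   # unordered scatter-add

      # With rays organized into streams, steps 1-3 of different streams occupy
      # different hardware resources and can be overlapped in a pipeline.
      print("total dose conserved:", np.isclose(dose.sum(), dose_buffer.sum()))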

  6. Two-level pipelined systolic array for multi-dimensional convolution

    SciTech Connect

    Kung, H.T.; Ruane, L.M.; Yen, D.W.L.

    1982-11-01

    This paper describes a systolic array for the computation of n-dimensional (n-D) convolutions of any positive integer n. Systolic systems usually achieve high performance by allowing computations to be pipelined over a large array of processing elements. To achieve even higher performance, the systolic array of this paper utilizes a second level of pipelining by allowing the processing elements themselves to be pipelined to an arbitrary degree. Moreover, it is shown that as far as orders of magnitude are concerned, the total amount of memory required by the systolic array is no more than that needed by any convolution device that reads in each input data item only once. Thus if only schemes that use the minimum-possible I/O are considered, the systolic array is not only high performance, but also optimal in terms of the amount of required memory.

  7. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on Viterbi and stack algorithms using systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, a systolic array implementation of the Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues dealing with the composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computations overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called a systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift register, and ripple register are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.
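
    For reference, the trellis recursion that such arrays parallelize is the textbook add-compare-select loop; a compact hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 code (generators 7, 5 octal) is sketched below. This is the generic algorithm, not the paper's systolic design:

      G1, G2, K = 0b111, 0b101, 3
      N_STATES = 1 << (K - 1)

      def parity(x):
          return bin(x).count("1") & 1

      def branch(state, bit):
          """Next state and the two output bits for input `bit` from `state`."""
          reg = (bit << (K - 1)) | state
          return reg >> 1, (parity(reg & G1), parity(reg & G2))

      def viterbi_decode(received):            # received: list of (bit, bit) pairs
          INF = 10**9
          metric = [0] + [INF] * (N_STATES - 1)
          paths = [[] for _ in range(N_STATES)]
          for r in received:
              new_metric = [INF] * N_STATES
              new_paths = [None] * N_STATES
              for s in range(N_STATES):
                  for b in (0, 1):
                      ns, out = branch(s, b)
                      m = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                      if m < new_metric[ns]:   # add-compare-select step
                          new_metric[ns] = m
                          new_paths[ns] = paths[s] + [b]
              metric, paths = new_metric, new_paths
          return paths[min(range(N_STATES), key=lambda s: metric[s])]

      # Round trip with one channel bit flipped:
      msg = [1, 0, 1, 1, 0, 0]
      tail = [0] * (K - 1)                     # flush bits drive trellis to state 0
      state, coded = 0, []
      for b in msg + tail:
          state, out = branch(state, b)
          coded.append(out)
      coded[2] = (coded[2][0] ^ 1, coded[2][1])   # inject a single bit error
      print(viterbi_decode(coded)[: len(msg)] == msg)   # True: error corrected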

  8. Automatic detection of cell divisions (mitosis) in live-imaging microscopy images using Convolutional Neural Networks.

    PubMed

    Shkolyar, Anat; Gefen, Amit; Benayahu, Dafna; Greenspan, Hayit

    2015-08-01

    We propose a semi-automated pipeline for the detection of possible cell divisions in live-imaging microscopy and the classification of these mitosis candidates using a Convolutional Neural Network (CNN). We use time-lapse images of NIH3T3 scratch assay cultures, extract patches around bright candidate regions that then undergo segmentation and binarization, followed by a classification of the binary patches into either containing or not containing cell division. The classification is performed by training a Convolutional Neural Network on a specially constructed database. We show strong results of AUC = 0.91 and F-score = 0.89, competitive with state-of-the-art methods in this field. PMID:26736369

  9. Eye and sheath folds in turbidite convolute lamination: Aberystwyth Grits Group, Wales

    NASA Astrophysics Data System (ADS)

    McClelland, H. L. O.; Woodcock, N. H.; Gladstone, C.

    2011-07-01

    Eye and sheath folds are described from the turbidites of the Aberystwyth Group, in the Silurian of west Wales. They have been studied at outcrop and on high resolution optical scans of cut surfaces. The folds are not tectonic in origin. They occur as part of the convolute-laminated interval of each sand-mud turbidite bed. The thickness of this interval is most commonly between 20 and 100 mm. Lamination patterns confirm previous interpretations that convolute lamination nucleated on ripples and grew during continued sedimentation of the bed. The folds amplified vertically and were sheared horizontally by continuing turbidity flow, but only to average values of about γ = 1. The strongly curvilinear fold hinges are due not to high shear strains, but to nucleation on sinuous or linguoid ripples. The Aberystwyth Group structures provide a warning that not all eye folds in sedimentary or metasedimentary rocks should be interpreted as sections through high shear strain sheath folds.

  10. Quantum Fields Obtained from Convoluted Generalized White Noise Never Have Positive Metric

    NASA Astrophysics Data System (ADS)

    Albeverio, Sergio; Gottschalk, Hanno

    2016-05-01

    It is proven that the relativistic quantum fields obtained from analytic continuation of convoluted generalized (Lévy type) noise fields have positive metric, if and only if the noise is Gaussian. This follows as an easy observation from a criterion by Baumann, based on the Dell'Antonio-Robinson-Greenberg theorem, for a relativistic quantum field in positive metric to be a free field.

  11. Time-convoluted hotspot temperature field on a metal skin due to sustained arc stroke heating

    NASA Astrophysics Data System (ADS)

    Lee, T. S.; Su, W. Y.

    A previously developed time-convoluted heat-conduction theory is applied to the case of a metal plate whose heat source is sustained over time. Integral formulas are formally derived, and their utilization in practical arc-heating work is examined. The results are compared with experimental ones from titanium and aluminum plates subjected to sustained heating due to step switch-on dc arc sources, and reasonable agreement is found.

  12. Generation of a Superposition of Odd Photon Number States for Quantum Information Networks

    NASA Astrophysics Data System (ADS)

    Neergaard-Nielsen, J. S.; Nielsen, B. Melholt; Hettich, C.; Mølmer, K.; Polzik, E. S.

    2006-08-01

    We report on the experimental observation of quantum-network-compatible light described by a nonpositive Wigner function. The state is generated by photon subtraction from a squeezed vacuum state produced by a continuous wave optical parametric amplifier. Ideally, the state is a coherent superposition of odd photon number states, closely resembling a superposition of weak coherent states |α⟩-|-α⟩. In the limit of low squeezing the state is basically a single photon state. Light is generated with about 10 000 or more events per second in a nearly perfect spatial mode with a Fourier-limited frequency bandwidth which matches atomic quantum memory requirements well. The generated state of light is an excellent input state for testing quantum memories, quantum repeaters, and linear optics quantum computers.

  13. Generation of photonic orbital angular momentum superposition states using vortex beam emitters with superimposed gratings.

    PubMed

    Xiao, Qingsheng; Klitis, Charalambos; Li, Shimao; Chen, Yueyang; Cai, Xinlun; Sorel, Marc; Yu, Siyuan

    2016-02-22

    An integrated approach to produce photonic orbital angular momentum (OAM) superposition states with an arbitrary OAM spectrum has been demonstrated. Superposition states between two vector OAM modes have been achieved by integrating a superimposed angular grating in one silicon micro-ring resonator, with each mode having nearly equal weight. The topological charge difference between the two compositional OAM modes is determined by the difference between the numbers of elements in the two original gratings being superimposed, while the absolute values of the topological charge can be changed synchronously by switching WGM resonant wavelengths. This novel approach provides a scalable and flexible source for OAM-based quantum information and optical manipulation applications. PMID:26906981

  14. Quantum superposition of the order of parties as a communication resource

    NASA Astrophysics Data System (ADS)

    Feix, Adrien; Araújo, Mateus; Brukner, Časlav

    2015-11-01

    In a variant of communication complexity tasks, two or more separated parties cooperate to compute a function of their local data, using a limited amount of communication. It is known that communication of quantum systems and shared entanglement can increase the probability for the parties to arrive at the correct value of the function, compared to classical resources. Here we show that quantum superpositions of the direction of communication between parties can also serve as a resource to improve the probability of success. We present a tripartite task for which such a superposition provides an advantage compared to the case where the parties communicate in a fixed order. In a more general context, our result also provides semi-device-independent certification of the absence of a definite order of communication.

  15. On basis set superposition error corrected stabilization energies for large n-body clusters.

    PubMed

    Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael

    2011-10-01

    In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections. PMID:21992293

  16. Superposition and detection of two helical beams for optical orbital angular momentum communication

    NASA Astrophysics Data System (ADS)

    Liu, Yi-Dong; Gao, Chunqing; Gao, Mingwei; Qi, Xiaoqing; Weber, Horst

    2008-07-01

    In this manuscript, a loop-like system with a Dove prism is used to generate a collinear superposition of two helical beams with different azimuthal quantum numbers. After the generation of the helical beams, distributed on a circle centered at the optical axis by using a binary amplitude grating, the diffractive field is separated into two polarized fields with the same distribution. Rotated by the Dove prism in the loop-like system in counter directions and combined together, the two fields generate the collinear superposition of two helical beams in a certain direction. The experiment shows consistency with the theoretical analysis. This method has potential applications in optical communication using the orbital angular momentum of laser beams (optical vortices).

  17. Quantum decoherence time scales for ionic superposition states in ion channels.

    PubMed

    Salari, V; Moradi, N; Sajadi, M; Fazileh, F; Shahbazi, F

    2015-03-01

    There are many controversial and challenging discussions about quantum effects in microscopic structures in neurons of the brain and their role in cognitive processing. In this paper, we focus on a small, nanoscale part of ion channels called the "selectivity filter," which plays a key role in the operation of an ion channel. Our results for superposition states of potassium ions indicate that decoherence times are of the order of picoseconds. This decoherence time is not long enough for cognitive processing in the brain; however, it may be adequate for quantum superposition states of ions in the filter to leave their quantum traces on the selectivity filter and action potentials. PMID:25871141

  18. From constants of motion to superposition rules for Lie-Hamilton systems

    NASA Astrophysics Data System (ADS)

    Ballesteros, A.; Cariñena, J. F.; Herranz, F. J.; de Lucas, J.; Sardón, C.

    2013-07-01

    A Lie system is a non-autonomous system of first-order differential equations possessing a superposition rule, i.e. a map expressing its general solution in terms of a generic finite family of particular solutions and some constants. Lie-Hamilton systems form a subclass of Lie systems whose dynamics is governed by a curve in a finite-dimensional real Lie algebra of functions on a Poisson manifold. It is shown that Lie-Hamilton systems are naturally endowed with a Poisson coalgebra structure. This allows us to devise methods for deriving in an algebraic way their constants of motion and superposition rules. We illustrate our methods by studying Kummer-Schwarz equations, Riccati equations, Ermakov systems and Smorodinsky-Winternitz systems with time-dependent frequency.
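
    A classical concrete example (standard in the Lie-systems literature rather than specific to this paper): for the Riccati equation dx/dt = a0(t) + a1(t)x + a2(t)x^2, the cross-ratio of the general solution with any three particular solutions is constant, which yields its superposition rule:

      % Cross-ratio of the general solution x(t) with three particular
      % solutions x_1(t), x_2(t), x_3(t) of the same Riccati equation:
      \[
      \frac{(x - x_1)(x_2 - x_3)}{(x - x_2)(x_1 - x_3)} = k \quad (\text{constant}),
      \]
      % which, solved for x, expresses the general solution through three
      % particular solutions and one constant k (the superposition rule):
      \[
      x = \frac{x_1 (x_2 - x_3) - k\, x_2 (x_1 - x_3)}{(x_2 - x_3) - k (x_1 - x_3)} .
      \]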

  19. Optical information encryption based on incoherent superposition with the help of the QR code

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Gong, Qiong

    2014-01-01

    In this paper, a novel optical information encryption approach is proposed with the help of the QR code. This method is based on the concept of incoherent superposition, which we introduce for the first time. The information to be encrypted is first transformed into the corresponding QR code, and thereafter the QR code is further encrypted into two phase-only masks analytically by use of the intensity superposition of two diffraction wave fields. The proposed method has several advantages over the previous interference-based methods, such as a higher security level, a better robustness against noise attack, more relaxed working conditions, and so on. Numerical simulation results and actual smartphone-collected results are shown to validate our proposal.

  20. Robot Behavior Acquisition: Superposition and Compositing of Behaviors Learned through Teleoperation

    NASA Technical Reports Server (NTRS)

    Peters, Richard Alan, II

    2004-01-01

    Superposition of a small set of behaviors, learned via teleoperation, can lead to robust completion of a simple articulated reach-and-grasp task. Results support the hypothesis that a set of learned behaviors can be combined to generate new behaviors of a similar type. This supports the hypothesis that a robot can learn to interact purposefully with its environment through a developmental acquisition of sensory-motor coordination. Teleoperation bootstraps the process by enabling the robot to observe its own sensory responses to actions that lead to specific outcomes. A reach-and-grasp task, learned by an articulated robot through a small number of teleoperated trials, can be performed autonomously with success in the face of significant variations in the environment and perturbations of the goal. Superpositioning was performed using the Verbs and Adverbs algorithm that was developed originally for the graphical animation of articulated characters. Work was performed on Robonaut at NASA-JSC.

  1. Optical threshold secret sharing scheme based on basic vector operations and coherence superposition

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen

    2015-04-01

    We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with the (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is first divided into n shadow images by use of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherence superposition of any two shadow images. Compared with published encryption techniques, which focus narrowly on information encryption, the proposed method can realize information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents power from being centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.

  2. Brain-wave representation of words by superposition of a few sine waves

    PubMed Central

    Suppes, Patrick; Han, Bing

    2000-01-01

    Data from three previous experiments were analyzed to test the hypothesis that brain waves of spoken or written words can be represented by the superposition of a few sine waves. First, we averaged the data over trials and a set of subjects, and, in one case, over experimental conditions as well. Next we applied a Fourier transform to the averaged data and selected those frequencies with high energy, in no case more than nine in number. The superpositions of these selected sine waves were taken as prototypes. The averaged unfiltered data were the test samples. The prototypes were used to classify the test samples according to a least-squares criterion of fit. The results were seven of seven correct classifications for the first experiment using only three frequencies, six of eight for the second experiment using nine frequencies, and eight of eight for the third experiment using five frequencies. PMID:10890906
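
    The prototype-and-classify procedure can be sketched as follows, with synthetic signals standing in for the averaged brain-wave data; the frequencies, noise level, and least-squares criterion are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(1)
      fs, T = 256, 1.0                          # sampling rate [Hz], epoch length [s]
      t = np.arange(0, T, 1 / fs)

      def make_epoch(freqs, amps):              # synthetic "averaged brain wave"
          clean = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))
          return clean + 0.5 * rng.standard_normal(t.size)

      def prototype(signal, n_keep=3):
          """Superposition of the n_keep highest-energy sine components."""
          spec = np.fft.rfft(signal)
          keep = np.argsort(np.abs(spec))[-n_keep:]
          filtered = np.zeros_like(spec)
          filtered[keep] = spec[keep]
          return np.fft.irfft(filtered, n=signal.size)

      words = {"word_a": ([5, 9], [1.0, 0.6]), "word_b": ([7, 13], [1.0, 0.6])}
      protos = {w: prototype(make_epoch(*p)) for w, p in words.items()}

      # Classify fresh test epochs by least-squares fit to each prototype:
      for w, p in words.items():
          test = make_epoch(*p)
          best = min(protos, key=lambda k: np.sum((test - protos[k]) ** 2))
          print(w, "->", best)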

  3. Convolution modeling of two-domain, nonlinear water-level responses in karst aquifers (Invited)

    NASA Astrophysics Data System (ADS)

    Long, A. J.

    2009-12-01

    Convolution modeling is a useful method for simulating the hydraulic response of water levels to sinking streamflow or precipitation infiltration at the macro scale. This approach is particularly useful in karst aquifers, where the complex geometry of the conduit and pore network is not well characterized but can be represented approximately by a parametric impulse-response function (IRF) with very few parameters. For many applications, one-dimensional convolution models can be equally effective as complex two- or three-dimensional models for analyzing water-level responses to recharge. Moreover, convolution models are well suited for identifying and characterizing the distinct domains of quick flow and slow flow (e.g., conduit flow and diffuse flow). Two superposed lognormal functions were used in the IRF to approximate the impulses of the two flow domains. Nonlinear response characteristics of the flow domains were assessed by observing temporal changes in the IRFs. Precipitation infiltration was simulated by filtering the daily rainfall record with a backward-in-time exponential function that weights each day’s rainfall with the rainfall of previous days and thus accounts for the effects of soil moisture on aquifer infiltration. The model was applied to the Edwards aquifer in Texas and the Madison aquifer in South Dakota. Simulations of both aquifers showed similar characteristics, including a separation on the order of years between the quick-flow and slow-flow IRF peaks and temporal changes in the IRF shapes when water levels increased and empty pore spaces became saturated.
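
    A minimal sketch of this modeling chain, with made-up parameters (memory time, lognormal IRF moments, mixing weights) rather than the calibrated Edwards/Madison values:

      import numpy as np

      rng = np.random.default_rng(2)
      days = 3650
      rain = rng.gamma(0.3, 4.0, size=days) * (rng.random(days) < 0.25)

      # Antecedent-moisture filter: weight each day's rainfall by an
      # exponential memory of previous days (backward in time).
      tau = 30.0                                # assumed memory [days]
      w = np.exp(-np.arange(120) / tau)
      recharge = np.convolve(rain, w / w.sum())[:days]

      def lognormal_irf(t, mu, sigma):
          t = np.where(t > 0, t, np.nan)
          y = np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma**2)) / (t * sigma * np.sqrt(2 * np.pi))
          return np.nan_to_num(y)

      # Two superposed lognormal impulse-response functions: quick + slow flow.
      t = np.arange(1.0, 2000.0)                # IRF support [days]
      irf = 0.6 * lognormal_irf(t, mu=3.0, sigma=0.8)    # quick flow (~20 d peak)
      irf += 0.4 * lognormal_irf(t, mu=6.5, sigma=0.5)   # slow flow (~665 d peak)

      head = np.convolve(recharge, irf)[:days]  # simulated water-level response
      print("peak water-level response on day:", int(head.argmax()))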

  4. Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification.

    PubMed

    Pang, Shan; Yang, Xinyi

    2016-01-01

    In recent years, some deep learning methods have been developed and applied to image classification applications, such as the convolutional neural network (CNN) and the deep belief network (DBN). However, they suffer from problems such as local minima, slow convergence rate, and intensive human intervention. In this paper, we propose a rapid learning method, namely, deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and the fast training of ELM. It uses multiple alternating convolution layers and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of the features, thus saving much training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods. PMID:27610128

  5. Flexible algorithm for real-time convolution supporting dynamic event-related fMRI

    NASA Astrophysics Data System (ADS)

    Eaton, Brent L.; Frank, Randall J.; Bolinger, Lizann; Grabowski, Thomas J.

    2002-04-01

    An efficient algorithm for generation of the task reference function has been developed that allows real-time statistical analysis of fMRI data, within the framework of the general linear model, for experiments with event-related stimulus designs. By leveraging time-stamped data collection in the Input/Output time-aWare Architecture (I/OWA), we detect the onset time of a stimulus as it is delivered to a subject. A dynamically updated list of detected stimulus event times is maintained in shared memory as a data stream and delivered as input to a real-time convolution algorithm. As each image is acquired from the MR scanner, the time-stamp of its acquisition is delivered via a second dynamically updated stream to the convolution algorithm, where a running convolution of the events with an estimated hemodynamic response function is computed at the image acquisition time and written to a third stream in memory. Output is interpreted as the activation reference function and treated as the covariate of interest in the I/OWA implementation of the general linear model. Statistical parametric maps are computed and displayed to the I/OWA user interface in less than the time between successive image acquisitions.
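
    A sketch of the running convolution at the heart of this scheme; the canonical double-gamma hemodynamic response used here is an assumption (the paper speaks only of an "estimated hemodynamic response function"), and the I/OWA data streams are reduced to a plain Python list.

    ```python
    import math

    def hrf(t):
        """Double-gamma hemodynamic response (t in seconds, zero for t <= 0)."""
        if t <= 0:
            return 0.0
        return t ** 5 * math.exp(-t) / 120.0 \
             - t ** 15 * math.exp(-t) / (6.0 * math.factorial(15))

    def reference_value(acq_time, event_times):
        """Running convolution of detected events with the HRF at one acquisition time."""
        return sum(hrf(acq_time - t_event) for t_event in event_times)

    events = []                        # dynamically updated stream of stimulus onsets
    for t_acq in (0.0, 2.0, 4.0, 6.0): # time-stamps of successive image acquisitions
        if t_acq == 2.0:
            events.append(1.5)         # stimulus detected at 1.5 s
        print(t_acq, reference_value(t_acq, events))
    ```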

  6. Using hybrid GPU/CPU kernel splitting to accelerate spherical convolutions

    NASA Astrophysics Data System (ADS)

    Sutter, P. M.; Wandelt, B. D.; Elsner, F.

    2015-06-01

    We present a general method for accelerating by more than an order of magnitude the convolution of pixelated functions on the sphere with a radially-symmetric kernel. Our method splits the kernel into a compact real-space component and a compact spherical harmonic space component. These components can then be convolved in parallel using an inexpensive commodity GPU and a CPU. We provide models for the computational cost of both real-space and Fourier space convolutions and an estimate for the approximation error. Using these models we can determine the optimum split that minimizes the wall clock time for the convolution while satisfying the desired error bounds. We apply this technique to the problem of simulating a cosmic microwave background (CMB) anisotropy sky map at the resolution typical of the high resolution maps produced by the Planck mission. For the main Planck CMB science channels we achieve a speedup of over a factor of ten, assuming an acceptable fractional rms error of order 10^-5 in the power spectrum of the output map.

  7. Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification

    PubMed Central

    Yang, Xinyi

    2016-01-01

    In recent years, deep learning methods such as the convolutional neural network (CNN) and the deep belief network (DBN) have been developed and applied to image classification. However, they suffer from problems such as local minima, slow convergence, and intensive human intervention. In this paper, we propose a rapid learning method, the deep convolutional extreme learning machine (DC-ELM), which combines the representational power of the CNN with the fast training of the ELM. It uses multiple alternating convolution and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of the features, saving much training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieves better testing accuracy with significantly shorter training time than deep learning methods and other ELM methods. PMID:27610128

  8. Convolutional neural network approach for buried target recognition in FL-LWIR imagery

    NASA Astrophysics Data System (ADS)

    Stone, K.; Keller, J. M.

    2014-05-01

    A convolutional neural network (CNN) approach to recognition of buried explosive hazards in forward-looking long-wave infrared (FL-LWIR) imagery is presented. The convolutional filters in the first layer of the network are learned in the frequency domain, making enforcement of zero-phase and zero-DC response characteristics much easier. The spatial domain representations of the filters are forced to have unit l2 norm, and penalty terms are added to the online gradient descent update to encourage orthonormality among the convolutional filters, as well as smooth first- and second-order derivatives in the spatial domain. The impact of these modifications on the generalization performance of the CNN model is investigated. The CNN approach is compared to a second recognition algorithm utilizing shearlet and log-Gabor decomposition of the image coupled with cell-structured feature extraction and support vector machine classification. Results are presented for multiple FL-LWIR data sets recently collected from US Army test sites. These data sets include vehicle position information, allowing accurate transformation between image and world coordinates and realistic evaluation of detection and false alarm rates.

  9. A generalization of the Boltzmann superposition principle to polymer networks undergoing scission

    NASA Technical Reports Server (NTRS)

    Moacanin, J.; Landel, R. F.; Aklonis, J. J.

    1976-01-01

    Methods reported by Moacanin et al. (1975) and Moacanin and Aklonis (1971) are generalized with the objective of including strains (or stresses) applied in an arbitrary manner to linearly viscoelastic materials. The imposition of changes in both the strain and the density of elastically effective chains in discrete increments is considered. In accordance with the Boltzmann superposition principle, each strain increment may be treated as a new independent experiment whose contribution adds linearly to the total response of the system.
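
    One way to write the incremental superposition described above, in our notation rather than the authors': each strain increment Δε_i applied at time t_i contributes independently, with the relaxation modulus E weighted by a factor φ(t, t_i) representing the fraction of elastically effective chains that survive from t_i to t.

    ```latex
    \sigma(t) = \sum_{i} \phi(t, t_i)\, E(t - t_i)\, \Delta\varepsilon_i
    \quad\longrightarrow\quad
    \sigma(t) = \int_{0}^{t} \phi(t, t')\, E(t - t')\, \frac{d\varepsilon(t')}{dt'}\, dt'
    \qquad (\Delta\varepsilon_i \to 0)
    ```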

  10. Note: An explicit solution of the optimal superposition and Eckart frame problems

    NASA Astrophysics Data System (ADS)

    Cioslowski, Jerzy

    2016-07-01

    Attention is called to an explicit solution of both the optimal superposition and Eckart frame problems that requires neither matrix diagonalization nor quaternion algebra. A simple change in one variable that enters the expression for the solution matrix T allows for selection of T representing either a proper rotation or a more general orthogonal transformation. The issues concerning the use of these alternative selections and the equivalence of the two problems are addressed.

  11. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters.

    PubMed

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-21

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for the electron, positron, and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. The EDKw are linearly scaled by radiological distance, taking tissue density heterogeneities into account. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on (18)F, (99m)Tc, (131)I and (177)Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, corresponding to a comparison of 80 absorbed dose (AD) points per phantom between the Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results for SCC were still good but worse: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for (99m)Tc at the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computational complexity. PMID:26406778
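
    The density-scaling step can be sketched as follows, assuming NumPy; the toy kernel stands in for the MCNP/EGSnrc-generated EDKw tables, and the ray densities are illustrative.

    ```python
    import numpy as np

    def radiological_distance(densities, step_cm):
        """Cumulative water-equivalent depth along a ray through voxels."""
        return np.cumsum(densities) * step_cm

    def scaled_kernel_dose(edk_water, densities, step_cm):
        """Evaluate a water kernel at each voxel's radiological distance."""
        r_rad = radiological_distance(densities, step_cm)
        return edk_water(r_rad)

    edk = lambda r: np.exp(-0.15 * r) / (r + 0.1) ** 2   # toy kernel shape
    ray = np.array([1.0, 1.0, 0.3, 0.3, 1.2, 1.2])       # tissue/lung/bone densities
    print(scaled_kernel_dose(edk, ray, step_cm=0.5))
    ```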

  12. Entanglement, EPR correlations, and mesoscopic quantum superposition by the high-gain quantum injected parametric amplification

    SciTech Connect

    Caminati, Marco; De Martini, Francesco; Perris, Riccardo; Secondi, Veronica; Sciarrino, Fabio

    2006-12-15

    We investigate the multiparticle quantum superposition and the persistence of bipartite entanglement of the output field generated by the quantum injected high-gain optical parametric amplification of a single photon. The physical configuration, based on optimal universal quantum cloning, has been adopted to investigate how the entanglement and the quantum coherence of the system persist for large values of the nonlinear parametric gain g.

  13. Stability of a superposition of shock waves with contact discontinuities for systems of viscous conservation laws

    NASA Astrophysics Data System (ADS)

    Zeng, Huihui

    In this paper, we show the large time asymptotic nonlinear stability of a superposition of viscous shock waves with viscous contact waves for systems of viscous conservation laws with small initial perturbations, provided that the strengths of these viscous waves are small with the same order. The results are obtained by elementary weighted energy estimates based on the underlying wave structure and a new estimate on the heat equation.

  14. Robust coherent superposition of states by single-shot shaped pulse

    NASA Astrophysics Data System (ADS)

    Ndong, Mamadou; Djotyan, Gagik; Ruschhaupt, Andreas; Guérin, Stéphane

    2015-09-01

    We adapt a single-shot shaped pulse technique to produce robust coherent superpositions of quantum states with a high fidelity of control. We derive simple pulses of low areas for the corresponding Rabi frequency which are robust with respect to pulse area imperfections. Such features of robustness, high-fidelity, and low Rabi frequency area are crucial steps towards the experimental implementation of scalable quantum gates.

  15. Analysis of a teleportation scheme involving cavity field states in a linear superposition of Fock states

    NASA Astrophysics Data System (ADS)

    Carvalho, C. R.; Guerra, E. S.; Jalbert, Ginette

    2008-04-01

    We analyse a teleportation scheme of cavity field states. The experimental sketch discussed makes use of cavity quantum electrodynamics involving the interaction of Rydberg atoms with superconducting (micromaser) cavities as well as with classical microwave (Ramsey) cavities. In our scheme the Ramsey cavities and the atoms play the role of auxiliary systems used to teleport a field state, formed by a linear superposition of the vacuum state |0⟩ and the one-photon state |1⟩, from one micromaser cavity to another.

  16. Note: An explicit solution of the optimal superposition and Eckart frame problems.

    PubMed

    Cioslowski, Jerzy

    2016-07-14

    Attention is called to an explicit solution of both the optimal superposition and Eckart frame problems that requires neither matrix diagonalization nor quaternion algebra. A simple change in one variable that enters the expression for the solution matrix T allows for selection of T representing either a proper rotation or a more general orthogonal transformation. The issues concerning the use of these alternative selections and the equivalence of the two problems are addressed. PMID:27421427

  17. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters

    NASA Astrophysics Data System (ADS)

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-01

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for the electron, positron, and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. The EDKw are linearly scaled by radiological distance, taking tissue density heterogeneities into account. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on 18F, 99mTc, 131I and 177Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, corresponding to a comparison of 80 absorbed dose (AD) points per phantom between the Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results for SCC were still good but worse: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for 99mTc at the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computational complexity.

  18. [Superposition impact characteristics of air pollution from decentralized docks in a freshwater port].

    PubMed

    Liu, Jian-chang; Li, Xing-hua; Xu, Hong-lei; Cheng, Jin-xiang; Wang, Zhong-dai; Xiao, Yang

    2013-05-01

    Air pollution from a freshwater port is mainly caused by dust, including material loading and unloading dust, road dust, and wind-erosion dust from stockpiles and bare soil. Dust pollution from a single dock differs markedly from the combined air pollution of multiple scattered docks. Jining Port of Shandong Province was selected as a case study to quantify the superposed contribution of air pollution from multiple scattered docks to the regional air environment and to provide technical support for the systematic evaluation of port air pollution. The results indicate that (1) air pollution from the freshwater port accounts for a low proportion of the pollution impact on regional environmental quality because the port consists of several small scattered docks; (2) however, the geometric center of the region over which the docks are distributed is the most severely affected by the superposed air pollution; and (3) the ADMS model is helpful for attaining an effective, integrated assessment of the superposed impact of multiple non-point pollution sources when differences in high-altitude weather conditions are not considered at a large scale. PMID:23914566

  19. Sagnac interferometry with coherent vortex superposition states in exciton-polariton condensates

    NASA Astrophysics Data System (ADS)

    Moxley, Frederick Ira; Dowling, Jonathan P.; Dai, Weizhong; Byrnes, Tim

    2016-05-01

    We investigate prospects of using counter-rotating vortex superposition states in nonequilibrium exciton-polariton Bose-Einstein condensates for the purposes of Sagnac interferometry. We first investigate the stability of vortex-antivortex superposition states, and show that they survive at steady state in a variety of configurations. Counter-rotating vortex superpositions are of potential interest for gyroscope and seismometer applications for detecting rotations. Methods of improving the sensitivity are investigated by targeting high-momentum states via metastable condensation and by applying periodic lattices. The sensitivity of the polariton gyroscope is compared to its optical and atomic counterparts. Due to the large interferometer areas of optical systems and the small de Broglie wavelengths of atomic BECs, the sensitivity per detected photon is found to be considerably less for the polariton gyroscope than for competing methods. However, polariton gyroscopes have the advantage over atomic BECs of a high signal-to-noise ratio, and have other practical advantages such as room-temperature operation, area independence, and robust design. We estimate that the final sensitivities, including signal-to-noise aspects, are competitive with existing methods.

  20. Some rate 1/3 and 1/4 binary convolutional codes with an optimum distance profile

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1977-01-01

    A tabulation of binary systematic convolutional codes with an optimum distance profile for rates 1/3 and 1/4 is given. A number of short rate 1/3 binary nonsystematic convolutional codes are listed. These latter codes are simultaneously optimal for the following distance measures: distance profile, minimum distance, and free distance; they appear attractive for use with Viterbi decoders. Comparisons with previously known codes are made.

  1. Fully 3D Particle-in-Cell Simulation of Double Post-Hole Convolute on PTS Facility

    NASA Astrophysics Data System (ADS)

    Zhao, Hailong; Dong, Ye; Zhou, Haijing; Zou, Wenkang; Institute of Fluid Physics Collaboration; Institute of Applied Physics; Computational Mathematics Collaboration

    2015-11-01

    To better understand the energy transformation and convergence processes during High Energy Density Physics (HEDP) experiments, the fully 3D particle-in-cell (PIC) simulation code NEPTUNE3D was used to provide numerical access to parameters that can hardly be acquired through diagnostics. A cubic region (34 cm × 34 cm × 18 cm) containing the double post-hole convolute (DPHC) of the primary test stand (PTS) facility was chosen for a series of fully 3D PIC simulations; the computational capability of the code was tested and preliminary simulation results for the DPHC on the PTS facility are discussed. Taking advantage of the 3D simulation code and large-scale parallel computation, massive data sets (~250 GB) could be acquired in less than 5 hours, and the processes of current transfer and electron emission in the DPHC were demonstrated clearly with the help of visualization tools. Cold-chamber tests were performed in which only cathode electron emission was considered, without temperature rise or ion emission; the current loss was estimated to be 0.46%-0.48% by comparing output magnetic field profiles with and without electron emission. Project supported by the National Natural Science Foundation of China (Grant Nos. 11205145, 11305015, 11475155).

  2. Convolution-based estimation of organ dose in tube current modulated CT

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460–7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18–70 years, weight range: 60–180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (h_organ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)_organ,convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)_organ,convolution by the organ dose coefficients (h_organ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using the Monte Carlo program with TCM profiles explicitly modeled.
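
    A minimal sketch of the convolution step, assuming NumPy; the profiles, the dose-spread kernel, and the h_organ value are illustrative stand-ins for the study's Monte Carlo-derived data.

    ```python
    import numpy as np

    z = np.arange(0, 50.0, 0.5)                    # scan axis, cm
    organ_dist = np.exp(-((z - 20.0) / 3.0) ** 2)  # organ location profile
    organ_dist /= organ_dist.sum()

    tcm_ctdivol = 8.0 + 4.0 * np.sin(z / 5.0)      # TCM dose field along z, mGy
    dose_spread = np.exp(-np.abs(np.arange(-10, 10.5, 0.5)) / 2.0)
    dose_spread /= dose_spread.sum()               # longitudinal dose-spread kernel

    regional = np.convolve(tcm_ctdivol, dose_spread, mode="same")
    ctdivol_organ = np.sum(regional * organ_dist)  # (CTDIvol)_organ,convolution

    h_organ = 1.2                                  # size-matched coefficient (toy value)
    print(f"estimated organ dose: {h_organ * ctdivol_organ:.1f} mGy")
    ```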

  3. TH-E-BRE-03: A Novel Method to Account for Ion Chamber Volume Averaging Effect in a Commercial Treatment Planning System Through Convolution

    SciTech Connect

    Barraclough, B; Li, J; Liu, C; Yan, G

    2014-06-15

    Purpose: Fourier-based deconvolution approaches used to eliminate the ion chamber volume averaging effect (VAE) suffer from measurement noise. This work aims to investigate a novel method to account for the ion chamber VAE through convolution in a commercial treatment planning system (TPS). Methods: Beam profiles of various field sizes and depths of an Elekta Synergy were collected with a finite-size ion chamber (CC13) to derive a clinically acceptable beam model for a commercial TPS (Pinnacle³), following the vendor-recommended modeling process. The TPS-calculated profiles were then externally convolved with a Gaussian function representing the chamber (σ = chamber radius). The agreement between the convolved profiles and measured profiles was evaluated with a one-dimensional Gamma analysis (1%/1 mm) as an objective function for optimization. TPS beam model parameters for the focal and extra-focal sources were optimized and loaded back into the TPS for new calculation. This process was repeated until the objective function converged using a Simplex optimization method. Planar doses of 30 IMRT beams were calculated with both the clinical and the re-optimized beam models and compared with MapCHECK™ measurements to evaluate the new beam model. Results: After re-optimization, the two orthogonal source sizes for the focal source were reduced from 0.20/0.16 cm to 0.01/0.01 cm, the minimum allowed values in Pinnacle. No significant change in the parameters for the extra-focal source was observed. With the re-optimized beam model, the average Gamma passing rate for the 30 IMRT beams increased from 92.1% to 99.5% with a 3%/3 mm criterion and from 82.6% to 97.2% with a 2%/2 mm criterion. Conclusion: We proposed a novel method to account for ion chamber VAE in a commercial TPS through convolution. The re-optimized beam model, with VAE accounted for through a reliable and easy-to-implement convolution and optimization approach, outperforms the original beam model in standard IMRT QA.
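
    The forward step of the optimization loop can be sketched as follows, assuming NumPy; the simple gamma routine is a stand-in for the 1D Gamma analysis, and σ is set to the CC13's nominal 0.3 cm radius.

    ```python
    import numpy as np

    def convolve_with_chamber(profile, spacing_cm, sigma_cm=0.3):
        """Convolve a TPS-calculated profile with a Gaussian chamber kernel."""
        x = np.arange(-3 * sigma_cm, 3 * sigma_cm + spacing_cm, spacing_cm)
        g = np.exp(-x ** 2 / (2 * sigma_cm ** 2))
        return np.convolve(profile, g / g.sum(), mode="same")

    def gamma_1d(calc, meas, spacing_cm, dd=0.01, dta_cm=0.1):
        """Fraction of points passing a 1%/1 mm 1D gamma test (calc, meas: arrays)."""
        x = np.arange(len(meas)) * spacing_cm
        passing = 0
        for i, m in enumerate(meas):
            g2 = ((calc - m) / (dd * meas.max())) ** 2 + ((x - x[i]) / dta_cm) ** 2
            passing += g2.min() <= 1.0
        return passing / len(meas)
    ```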

  4. Connecting a new non-adiabatic vibrational mass to the bonding mechanism of LiH: A quantum superposition of ionic and covalent states

    NASA Astrophysics Data System (ADS)

    Diniz, Leonardo G.; Alijah, Alexander; Adamowicz, Ludwik; Mohallem, José R.

    2015-07-01

    Non-adiabatic vibrational calculations performed with an accuracy of 0.2 cm⁻¹, spanning the whole energy spectrum up to the dissociation limit for 7LiH, are reported. A so-far-unknown v = 23 energy level is predicted. The key feature of the approach used in the calculations is a valence-bond (VB) based procedure for determining the effective masses of the two vibrating atoms, which depend on the internuclear distance, R. It is found that all LiH electrons participate in the vibrational motion. The R-dependent masses are obtained from the analysis of the simple VB two-configuration ionic-covalent representation of the electronic wave function. These findings are consistent with an interpretation of the chemical bond in LiH as a quantum mechanical superposition of one-electron ionic and covalent states.

  5. Triplet correlations in the hard rod fluid: A test for topological reduction of graph-theoretic corrections to the superposition approximation

    NASA Astrophysics Data System (ADS)

    Haymet, A. D. J.

    1984-04-01

    Two series expansions for the triplet correlation function, which have been used previously to study three-dimensional liquids, are evaluated in a case where the exact triplet correlation function is known, namely, hard rods in one dimension. These series are studied in the context of the Yvon-Born-Green (YBG) integral equation. The coefficients in the f-bond series are evaluated analytically, but the resultant corrections to the superposition approximation are minor. In contrast, the coefficients of the h-bond series, which are calculated numerically, provide an accurate approximation to the triplet correlation function for densities of interest below two-thirds of the close-packed density. The validity of the "scaling" approximation of the h-bond series, which has been used in theories of quantum liquids, is also examined, and these calculations are shown to be relevant to earlier studies of three-dimensional liquids.

  6. Solenoid magnetic fields calculated from superposed semi-infinite solenoids

    NASA Technical Reports Server (NTRS)

    Brown, G. V.; Flax, L.

    1966-01-01

    The magnetic field components of a thick solenoid coil are calculated by superposing the fields produced by four semi-infinite solenoids of zero inner radius. The field produced by such a semi-infinite solenoid depends on only two variables, the radial and axial coordinates of the field point.
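
    The superposition bookkeeping can be written down directly; B_semi below is a placeholder for the paper's two-variable semi-infinite solenoid field function, so only the sign structure of the four-term superposition is sketched here, under the assumption that the finite coil spans radii a1 to a2 and axial positions z1 to z2.

    ```python
    def thick_solenoid_field(B_semi, a1, a2, z1, z2, r, zp):
        """Field at (r, zp): finite thick coil as four superposed semi-infinite
        solenoids (outer minus inner radius, far end minus near end)."""
        return (  B_semi(a2, z2, r, zp) - B_semi(a2, z1, r, zp)
                - B_semi(a1, z2, r, zp) + B_semi(a1, z1, r, zp))
    ```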

  7. Monte Carlo calculation of helical tomotherapy dose delivery

    SciTech Connect

    Zhao Yingli; Mackenzie, M.; Kirkby, C.; Fallone, B. G.

    2008-08-15

    Helical tomotherapy delivers intensity modulated radiation therapy using a binary multileaf collimator (MLC) to modulate a fan beam of radiation. This delivery occurs while the linac gantry and treatment couch are both in constant motion, so the beam describes, from a patient/phantom perspective, a spiral or helix of dose. The planning system models this continuous delivery as a large number (51) of discrete gantry positions per rotation, and given the small jaw/fan width setting typically used (1 or 2.5 cm) and the number of overlapping rotations used to cover the target (pitch often <0.5), the treatment planning system (TPS) potentially employs a very large number of static beam directions and leaf opening configurations to model the modulated fields. All dose calculations performed by the system employ a convolution/superposition model. In this work the authors perform a full Monte Carlo (MC) dose calculation of tomotherapy deliveries to phantom computed tomography (CT) data sets to verify the TPS calculations. All MC calculations are performed with the EGSnrc-based MC simulation codes, BEAMnrc and DOSXYZnrc. Simulations are performed by taking the sinogram (leaf opening versus time) of the treatment plan and decomposing it into 51 different projections per rotation, as does the TPS, each of which is segmented further into multiple MLC opening configurations, each with different weights that correspond to leaf opening times. Then the projection is simulated by the summing of all of the opening configurations, and the overall rotational treatment is simulated by the summing of all of the projection simulations. Commissioning of the source model was verified by comparing measured and simulated values for the percent depth dose and beam profiles shapes for various jaw settings. The accuracy of the MLC leaf width and tongue and groove spacing were verified by comparing measured and simulated values for the MLC leakage and a picket fence pattern. The validated source

  8. Mode superposition transient dynamic analysis for dental implants with stress-absorbing elements: a finite element analysis.

    PubMed

    Tanimoto, Yasuhiro; Hayakawa, Tohru; Nemoto, Kimiya

    2006-09-01

    The purpose of this study was to analyze the dynamic behavior of a dental implant with a stress-absorbing element using dynamic analysis. Two model types were employed: a stress-absorbing model with a resilient stress absorber made of polyoxymethylene, and a non-stress-absorbing model with rigid titanium. In both model types, the implant was 4.0 mm in diameter and 13.0 mm in length and was placed in the mandibular first molar region. The shapes of the finite element implant and implant-bone models were created using computer-aided design. All calculations for the dynamic analysis were performed using the finite element method. It was found that the stress-absorbing model had a lower natural frequency than the non-stress-absorbing model. In addition, the stress-absorbing model had a higher damping effect than the non-stress-absorbing model. It was concluded that mode superposition transient dynamic analysis is a useful technique for determining dynamic behavior around dental implants. PMID:17076317

  9. A Bäcklund Transformation and Nonlinear Superposition Formula of the Caudrey-Dodd-Gibbon-Kotera-Sawada Hierarchy

    NASA Astrophysics Data System (ADS)

    Hu, Xing-Biao; Bullough, Robin

    1998-03-01

    In this paper, the Caudrey-Dodd-Gibbon-Kotera-Sawada hierarchy in bilinear form is considered. A Bäcklund transformation for the CDGKS hierarchy is presented. Under certain conditions, the corresponding nonlinear superposition formula is proved.

  10. Microstructures in CoPtC magnetic thin films studied by superpositioning of micro-electron diffraction

    PubMed

    Tomita; Sugiyama; Sato; Delaunay; Hayashi

    2000-01-01

    Cross-sectional transmission electron microscopy observation of CoPtC thin films showed that 10 nm sized ultrafine particles of CoPt typically were elongated along the substrate normal. Analysis of the superposition of 40 micro-electron diffraction patterns showed that there was no preferred crystal orientation of CoPt particles. This superpositioning technique can be applied to thin films, whose X-ray diffraction analysis is difficult due to the small size of the crystals. PMID:10791426

  11. Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors

    NASA Astrophysics Data System (ADS)

    Gordon, J. J.; Siebers, J. V.

    2007-04-01

    The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Σ and σ. For clinically relevant combinations of σ, Σ and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion: σ[1 - γN/25] < 0.2, where γ = Σ/σ. They were found to be inaccurate for σ[1 - γN/25] > 0.2, because they failed to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when σ ≳ σP, where σP = 0.32 cm is the standard deviation of the normal dose penumbra. (Qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if σP takes values other than 0.32 cm.) When σ ≪ σP, dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Σ and N. When σ ≳ σP, consistent with the above criteria, it was found that the VHMF can underestimate margins for large σ, small Σ and small N. A potential consequence of this underestimate is that the CTV minimum dose can fall below its planned value in more than the prescribed 10% of
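
    A Monte Carlo sketch of the coverage test described above, assuming NumPy and SciPy; the 1D two-edge dose model and the field-edge-to-PTV distance are our simplifications of the paper's spherical-target setup.

    ```python
    import numpy as np
    from scipy.special import erf

    SIGMA_P = 0.32   # cm, penumbra standard deviation from the paper
    X_PTV = 0.5      # cm, field-edge-to-PTV distance: an assumed plan geometry

    def edge_dose(x):
        """1D penumbra model: relative dose at distance x inside the field edge."""
        return 0.5 * (1 + erf(x / (np.sqrt(2) * SIGMA_P)))

    def coverage_prob(margin, Sigma, sigma, N, courses=20000, seed=1):
        rng = np.random.default_rng(seed)
        sys = rng.normal(0, Sigma, courses)[:, None]   # one shift per course
        rnd = rng.normal(0, sigma, (courses, N))       # one shift per fraction
        # CTV edges sit `margin` beyond the PTV surface on both sides of the target
        d_plus = edge_dose(X_PTV + margin - sys - rnd).mean(axis=1)
        d_minus = edge_dose(X_PTV + margin + sys + rnd).mean(axis=1)
        ctv_min = np.minimum(d_plus, d_minus)
        return np.mean(ctv_min >= edge_dose(X_PTV))    # vs static PTV minimum dose

    # smallest margin (cm) for which 90% of simulated courses keep CTV coverage
    m90 = next(m for m in np.arange(0, 2.01, 0.05)
               if coverage_prob(m, Sigma=0.3, sigma=0.3, N=30) >= 0.9)
    ```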

  12. Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors.

    PubMed

    Gordon, J J; Siebers, J V

    2007-04-01

    The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Sigma and sigma. For clinically relevant combinations of sigma, Sigma and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion: sigma[1 - gammaN/25] < 0.2, where gamma = Sigma/sigma. They were found to be inaccurate for sigma[1 - gammaN/25] > 0.2, because they failed to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when sigma is greater than or approximately equal to sigma(P), where sigma(P) = 0.32 cm is the standard deviation of the normal dose penumbra. (Qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if sigma(P) takes values other than 0.32 cm.) When sigma < sigma(P), dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Sigma and N. When sigma is greater than or approximately equal to sigma(P), consistent with the above criteria, it was found that the VHMF can underestimate margins for large sigma, small Sigma and small N. A potential consequence of this underestimate is that the CTV minimum dose can fall below its planned value in more than the prescribed 10% of

  13. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high-resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D models of buildings as a key element of city structures for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for automatic recognition of buildings' roof models, such as flat, gable, hip, and pyramid-hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that help the convolutional neural network localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network based on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time of learning can be decreased significantly using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach for detecting and extracting the pattern of buildings' roofs automatically, considering the complementary nature of height and RGB information.

  14. Convolutions of Rayleigh functions and their application to semi-linear equations in circular domains

    NASA Astrophysics Data System (ADS)

    Varlamov, Vladimir

    2007-03-01

    Rayleigh functions σ_l(ν) are defined as series in inverse powers of the Bessel function zeros λ_{ν,n} ≠ 0, namely σ_l(ν) = Σ_{n≥1} λ_{ν,n}^{-2l}, where ν is the index of the Bessel function J_ν(x) and n = 1, 2, ... is the number of the zero. Convolutions of Rayleigh functions with respect to the Bessel index, R_l(m), are needed for constructing global-in-time solutions of semi-linear evolution equations in circular domains [V. Varlamov, On the spatially two-dimensional Boussinesq equation in a circular domain, Nonlinear Anal. 46 (2001) 699-725; V. Varlamov, Convolution of Rayleigh functions with respect to the Bessel index, J. Math. Anal. Appl. 306 (2005) 413-424]. The study of this new family of special functions was initiated in [V. Varlamov, Convolution of Rayleigh functions with respect to the Bessel index, J. Math. Anal. Appl. 306 (2005) 413-424], where the properties of R_1(m) were investigated. In the present work a general representation of R_l(m) in terms of σ_l(ν) is deduced. On this basis a representation for the function R_2(m) is obtained in terms of the ψ-function. An asymptotic expansion is computed for R_2(m) as m → ∞. Such asymptotics are needed for establishing function spaces for solutions of semi-linear equations in bounded domains with periodicity conditions in one coordinate. As an example of an application of R_l(m), a forced Boussinesq equation u_tt - 2bΔu_t = -αΔ²u + Δu + βΔ(u²) + f, with α, b = const > 0 and β = const ∈ ℝ, is considered in a unit disc with homogeneous boundary and initial data. The construction of its global-in-time solutions involves the use of the functions R_1(m) and R_2(m), which are responsible for the nonlinear smoothing effect.

  15. Prediction in cases with superposition of different hydrological phenomena, such as from weather "cold drops

    NASA Astrophysics Data System (ADS)

    Anton, J. M.; Grau, J. B.; Tarquis, A. M.; Andina, D.; Sanchez, M. E.

    2012-04-01

    The authors have been involved in Model Codes for Construction prior to the Eurocodes (now Euronorms), in a Drainage Instruction for Roads for Spain that adopted a prediction model from the BPR (Bureau of Public Roads) of the USA to account for the evident regional differences in the Iberian Peninsula and the Spanish Isles, and in related studies. They used Extreme Value Type I (Gumbel law) models with independent actions in superposition; this law was also adopted by CEDEX to obtain maps of extreme rains. These methods could be extrapolated to other extreme-value distributions, but the first step was useful for setting valid superposition schemes for actions in norms. As a real case, in the east of Spain rain usually comes extensively from normal weather perturbations, but in other cases local "cold drop" events bring high rains of about 400 mm in a day, causing floods and, in some cases, local disasters. The city of Valencia in eastern Spain was flooded to a depth of 1.5 m by a cold drop in 1957, and the river Turia, which formerly ran through the city, was later diverted some kilometers to the south into a wider channel. Under the Gumbel law, the expected intensity for a given "return period" grows with time to occurrence, and the rate of increase grows with the "annual dispersion" of the Gumbel law, so some rare dangerous events may become quite possible over periods of many years. This can be shown with relatively simple models, e.g., the Extreme Value Type I law, and such models could be made more precise or discussed further. These effects were used for the superposition of actions on a structure in Model Codes, and they may be combined with hydraulic effects, e.g., for bridges over rivers. Different Gumbel laws, or other extreme-value laws with different dispersions, may apply to marine actions of waves, to earthquakes, to tsunamis, and perhaps to human perturbations, which over historical periods could include industrial catastrophes or wars.

  16. Evaluation of Compton attenuation and photoelectric absorption coefficients by convolution of scattering and primary functions and counts ratio on energy spectra

    PubMed Central

    Ashoor, Mansour; Asgari, Afrouz; Khorshidi, Abdollah; Rezaei, Ali

    2015-01-01

    Purpose: To estimate the Compton attenuation and photoelectric absorption coefficients at various depths. Methods: A new method was proposed for estimating depth based on the convolution of two exponential functions, namely the convolution of scattering and primary functions (CSPF), in which the convolved result conforms to the photopeak region of the energy spectrum with variable energy-window widths (EWWs), together with a theory of the scattering cross-section. The triple energy-window (TEW) and extended triple energy-window (ETEW) scatter correction methods were used to estimate the scattered and primary photons from the energy spectra at various depths, owing to their better performance than other methods in nuclear medicine. For this purpose, the energy spectra were employed, and a distinct phantom with a technetium-99m source was simulated by the Monte Carlo method. Results: The simulated results indicate that the EWW used to calculate the scattered and primary counts, in terms of integral operators on the functions, was proportional to the depth as an exponential function. The depth can be calculated by combining either TEW or ETEW with the proposed method, resulting in a distinct energy window. The EWWs for primary photons were in good agreement with those for scattered photons at the same depths. The average errors between these windows for the TEW and ETEW methods were 7.25% and 6.03% at different depths, respectively. The EWW value for the functions of scattered and primary photons decreased with increasing depth in the CSPF method. Conclusions: This coefficient may serve as an index for the scattering cross-section. PMID:26170567
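
    The core CSPF operation, convolving exponential primary and scatter models and reading counts in a candidate energy window, can be sketched as follows; the decay constants and window bounds are illustrative, not fitted values.

    ```python
    import numpy as np

    E = np.arange(80.0, 201.0, 1.0)                  # energy axis, keV
    primary = np.exp(-0.05 * np.abs(E - 140.5))      # toy primary component (99mTc peak)
    scatter = np.exp(-0.02 * (200.0 - E))            # toy scatter component

    cspf = np.convolve(primary, scatter, mode="same")
    cspf *= primary.max() / cspf.max()               # normalize for comparison

    window = (E > 126.5) & (E < 154.5)               # a candidate energy-window width
    photopeak_counts = cspf[window].sum()
    ```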

  17. Automatic breast density classification using a convolutional neural network architecture search procedure

    NASA Astrophysics Data System (ADS)

    Fonseca, Pablo; Mendoza, Julio; Wainer, Jacques; Ferrer, Jose; Pinto, Joseph; Guerrero, Jorge; Castaneda, Benjamin

    2015-03-01

    Breast parenchymal density is considered a strong indicator of breast cancer risk and therefore useful for preventive tasks. Measurement of breast density is often qualitative and requires the subjective judgment of radiologists. Here we explore an automatic breast composition classification workflow based on convolutional neural networks for feature extraction in combination with a support vector machines classifier. This is compared to the assessments of seven experienced radiologists. The experiments yielded an average kappa value of 0.58 when using the mode of the radiologists' classifications as ground truth. Individual radiologist performance against this ground truth yielded kappa values between 0.56 and 0.79.

  18. Processing circuit with asymmetry corrector and convolutional encoder for digital data

    NASA Technical Reports Server (NTRS)

    Pfiffner, Harold J. (Inventor)

    1987-01-01

    A processing circuit is provided for correcting for input parameter variations, such as data and clock signal symmetry, phase offset and jitter, noise and signal amplitude, in incoming data signals. An asymmetry corrector circuit performs the correcting function and furnishes the corrected data signals to a convolutional encoder circuit. The corrector circuit further forms a regenerated clock signal from clock pulses in the incoming data signals and another clock signal at a multiple of the incoming clock signal. These clock signals are furnished to the encoder circuit so that encoded data may be furnished to a modulator at a high data rate for transmission.

  19. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    SciTech Connect

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; Vesey, R. A.; Jones, B.; Ampleford, D. J.; Lemke, R. W.; Martin, M. R.; Schrafel, P. C.; Lewis, S. A.; Moore, J. K.; Savage, M. E.; Stygar, W. A.

    2014-12-08

    Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that

  20. Real-time minimal bit error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for the decoding of rate R = 1/n binary convolutional codes which minimizes the error probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison with the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as the inner coding system for concatenated coding.
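
    For background, here is a rate R = 1/2 convolutional encoder of the kind such decoders operate on, using the standard constraint-length-3 code with generators (7, 5) in octal; this is generic textbook code, not from the paper.

    ```python
    def conv_encode(bits, g1=0b111, g2=0b101, k=3):
        """Rate-1/2 convolutional encoder: two parity bits per input bit."""
        state = 0
        out = []
        for b in bits:
            state = ((state << 1) | b) & ((1 << k) - 1)   # k-bit shift register
            out.append(bin(state & g1).count("1") % 2)    # parity for generator g1
            out.append(bin(state & g2).count("1") % 2)    # parity for generator g2
        return out

    print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]
    ```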

  1. Parity retransmission hybrid ARQ using rate 1/2 convolutional codes on a nonstationary channel

    NASA Technical Reports Server (NTRS)

    Lugand, Laurent R.; Costello, Daniel J., Jr.; Deng, Robert H.

    1989-01-01

    A parity retransmission hybrid automatic repeat request (ARQ) scheme is proposed which uses rate 1/2 convolutional codes and Viterbi decoding. A protocol is described which is capable of achieving higher throughputs than previously proposed parity retransmission schemes. The performance analysis is based on a two-state Markov model of a nonstationary channel. This model constitutes a first approximation to a nonstationary channel. The two-state channel model is used to analyze the throughput and undetected error probability of the protocol presented when the receiver has both an infinite and a finite buffer size. It is shown that the throughput improves as the channel becomes more bursty.

  2. The use of interleaving for reducing radio loss in convolutionally coded systems

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Simon, M. K.; Yuen, J. H.

    1989-01-01

    The use of interleaving after convolutional coding and deinterleaving before Viterbi decoding is proposed. This effectively reduces radio loss at low-loop Signal to Noise Ratios (SNRs) by several decibels and at high-loop SNRs by a few tenths of a decibel. Performance of the coded system can further be enhanced if the modulation index is optimized for this system. This will correspond to a reduction of bit SNR at a certain bit error rate for the overall system. The introduction of interleaving/deinterleaving into communication systems designed for future deep space missions does not substantially complicate their hardware design or increase their system cost.

  3. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    NASA Astrophysics Data System (ADS)

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; Vesey, R. A.; Jones, B.; Ampleford, D. J.; Lemke, R. W.; Martin, M. R.; Schrafel, P. C.; Lewis, S. A.; Moore, J. K.; Savage, M. E.; Stygar, W. A.

    2014-12-01

    Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator's vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator's vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator's magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed efficient

  4. Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding

    PubMed Central

    Johnson, Rie; Zhang, Tong

    2016-01-01

    This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks. PMID:27087766

  5. Introducing electron capture into the unitary-convolution-approximation energy-loss theory at low velocities

    SciTech Connect

    Schiwietz, G.; Grande, P. L.

    2011-11-15

    Recent developments in the theoretical treatment of electronic energy losses of bare and screened ions in gases are presented. Specifically, the unitary-convolution-approximation (UCA) stopping-power model has proven its strengths for the determination of nonequilibrium effects for light as well as heavy projectiles at intermediate to high projectile velocities. The focus of this contribution will be on the UCA and its extension to specific projectile energies far below 100 keV/u, by considering electron-capture contributions at charge-equilibrium conditions.

  6. Some optimal partial-unit-memory codes. [time-invariant binary convolutional codes

    NASA Technical Reports Server (NTRS)

    Lauer, G. S.

    1979-01-01

    A class of time-invariant binary convolutional codes is defined, called partial-unit-memory codes. These codes are optimal in the sense of having maximum free distance for given values of R, k (the number of encoder inputs), and mu (the number of encoder memory cells). Optimal codes are given for rates R = 1/4, 1/3, 1/2, and 2/3, with mu not greater than 4 and k not greater than mu + 3, whenever such a code is better than previously known codes. An infinite class of optimal partial-unit-memory codes is also constructed based on equidistant block codes.

  7. Performance of DPSK with convolutional encoding on time-varying fading channels

    NASA Technical Reports Server (NTRS)

    Mui, S. Y.; Modestino, J. W.

    1977-01-01

    The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.

  8. Cygrid: Cython-powered convolution-based gridding module for Python

    NASA Astrophysics Data System (ADS)

    Winkel, B.; Lenz, D.; Flöer, L.

    2016-06-01

    The Python module Cygrid grids (resamples) data to any collection of spherical target coordinates, although its typical application involves FITS maps or data cubes. The module supports the FITS world coordinate system (WCS) standard; its underlying algorithm is based on the convolution of the original samples with a 2D Gaussian kernel. A lookup table scheme allows parallelization of the code and is combined with the HEALPix tessellation of the sphere for fast neighbor searches. Cygrid's runtime scales between O(n) and O(n log n), with n being the number of input samples.
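
    A minimal serial version of this gridding scheme, assuming NumPy and a small-field flat-sky approximation of the angular distance; Cygrid adds the WCS handling, HEALPix neighbor lookup, and parallelism on top of this basic loop.

    ```python
    import numpy as np

    def grid(lon, lat, values, lon_axis, lat_axis, kernel_sigma):
        """Spread samples onto a target grid with 2D Gaussian weights."""
        glon, glat = np.meshgrid(lon_axis, lat_axis)
        data = np.zeros_like(glon)
        weights = np.zeros_like(glon)
        for x, y, v in zip(lon, lat, values):
            w = np.exp(-((glon - x) ** 2 + (glat - y) ** 2)
                       / (2 * kernel_sigma ** 2))
            data += w * v
            weights += w
        return data / np.maximum(weights, 1e-12)   # weight-normalized map
    ```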

  9. Using convolutional neural networks for human activity classification on micro-Doppler radar spectrograms

    NASA Astrophysics Data System (ADS)

    Jordan, Tyler S.

    2016-05-01

    This paper presents the findings of using convolutional neural networks (CNNs) to classify human activity from micro-Doppler features. An emphasis on activities involving potential security threats, such as holding a gun, is explored. An automotive 24 GHz radar on chip was used to collect the data, and a CNN (normally applied to image classification) was trained on the resulting spectrograms. The CNN achieves an error rate of 1.65% on classifying running vs. walking, 17.3% on armed walking vs. unarmed walking, and 22% on classifying six different actions.

  10. Diffuse dispersive delay and the time convolution/attenuation of transients

    NASA Technical Reports Server (NTRS)

    Bittner, Burt J.

    1991-01-01

    Test data and analytic evaluations are presented to show that relatively poor 100 kHz shielding of 12 dB can effectively provide an electromagnetic pulse transient reduction of 100 dB. More importantly, several techniques are shown for lightning surge attenuation as an alternative to crowbar, spark gap, or power zener type clipping, which simply reflects the surge. A time delay test method is shown which allows CW testing, along with a convolution program to define transient shielding effectiveness where the Fourier phase characteristics of the transient are known or can be broadly estimated.
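
    The convolution idea reads naturally in the frequency domain: a transfer function measured by a CW sweep can be applied to the spectrum of an assumed transient, and the inverse transform gives the time-domain result. The sketch below uses a made-up single-pole transfer function and a double-exponential pulse purely for illustration.

    ```python
    import numpy as np

    # Propagate a transient through a shield characterized by CW
    # (frequency-domain) measurements; H(f) is a toy stand-in for data.
    fs = 1e8                          # sample rate, Hz
    t = np.arange(4096) / fs
    pulse = np.exp(-t / 2e-7) - np.exp(-t / 2e-8)   # EMP-like transient

    f = np.fft.rfftfreq(t.size, 1 / fs)
    H = 1.0 / (1.0 + 1j * f / 1e5)    # assumed shielding transfer function

    shielded = np.fft.irfft(np.fft.rfft(pulse) * H, n=t.size)
    atten_db = 20 * np.log10(np.max(np.abs(pulse)) / np.max(np.abs(shielded)))
    print(f"peak transient attenuation: {atten_db:.1f} dB")
    ```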

  11. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    DOE PAGES Beta

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; et al

    2014-12-08

    Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed

  12. Heisenberg-limited quantum sensing and metrology with superpositions of twin-Fock states

    NASA Astrophysics Data System (ADS)

    Gerry, Christopher C.; Mimih, Jihane

    2011-03-01

    We discuss the prospects of performing Heisenberg-limited quantum sensing and metrology using a Mach-Zehnder interferometer with input states that are superpositions of twin-Fock states and where photon number parity measurements are made on one of the output beams of the interferometer. This study is motivated by the experimental challenge of producing twin-Fock states on opposite sides of a beam splitter. We focus on the use of the so-called pair coherent states for this purpose and discuss a possible mechanism for generating them. We also discuss the prospect of using other superpositions of twin-Fock states for the purpose of interferometry.

  13. Superposition of two nonlinear coherent states π/2 out of phase and their nonclassical properties

    NASA Astrophysics Data System (ADS)

    Abbasi, O.; Tavassoly, M. K.

    2009-09-01

    Considering the concept of "nonlinear coherent states", we study the interference effects introduced by the superposition of two classes of nonlinear coherent states which are π/2 out of phase. The formalism is then applied to a few physical systems: "harmonious states", SU(1,1) coherent states, and the center-of-mass motion of a trapped ion. Finally, the nonclassical properties such as sub-Poissonian statistics, quadrature squeezing, amplitude-squared squeezing, and the Wigner distribution function of the superposed states are investigated numerically. In particular, the Wigner functions of the superposed states take negative values in phase space, while those of their original components do not.

  14. Relativistic Inverse Scattering Problem for a Superposition of a Nonlocal Separable and a Local Quasipotential

    SciTech Connect

    Chernichenko, Yu.D.

    2005-01-01

    Within the relativistic quasipotential approach to quantum field theory, the relativistic inverse scattering problem is solved for the case where the total quasipotential describing the interaction of two relativistic spinless particles having different masses is a superposition of a nonlocal separable and a local quasipotential. It is assumed that the local component of the total quasipotential is known and that there exist bound states in this local component. It is shown that the nonlocal separable component of the total interaction can be reconstructed provided that the local component, an increment of the phase shift, and the energies of bound states are known.

  15. Security-enhanced asymmetric optical cryptosystem based on coherent superposition and equal modulus decomposition

    NASA Astrophysics Data System (ADS)

    Cai, Jianjun; Shen, Xueju; Lin, Chao

    2016-01-01

    We propose a security-enhanced asymmetric optical cryptosystem based on coherent superposition and equal modulus decomposition by combining the full phase encryption technique with our previous cryptosystem. In the encryption process, the original image is phase encoded rather than bonded with a random phase mask (RPM). In the decryption process, two phase-contrast filters (PCFs) are employed to obtain the plaintext. As a consequence, the new cryptosystem guarantees high-level security against attacks based on iterative Fourier transforms and maintains the good performance of our previous cryptosystem, in particular its convenience. Numerical simulations are presented to verify the validity and the performance of the modified cryptosystem.

  16. Superposition of states by adiabatic passage in N-pod systems

    SciTech Connect

    Amniat-Talab, M.; Saadati-Niari, M.; Nader-Ali, R.; Guerin, S.

    2011-01-15

    We study the stimulated Raman adiabatic passage technique in an N-pod system driven by N pulsed fields, for the cases where N-2 or N-1 of the pulses not connected to the initial state have the same shape. We show that, for properly timed pulses, robust population transfer from an initial ground state to an arbitrary coherent superposition of the ground states can be achieved in a single step. The case of N-2 pulses of the same shape involves a geometric phase of the same type as the one appearing in tripod systems.

  17. A Fillable Micro-Hollow Sphere Lesion Detection Phantom Using Superposition

    PubMed Central

    DiFilippo, Frank P.; Gallo, Sven L.; Klatte, Ryan S.; Patel, Sagar

    2010-01-01

    The lesion detection performance of SPECT and PET scanners is most commonly evaluated with a phantom containing hollow spheres in a background chamber at a specified radionuclide contrast ratio. However there are limitations associated with a miniature version of a hollow sphere phantom for small animal SPECT and PET scanners. One issue is that the “wall effect” associated with zero activity in the sphere wall and fill port causes significant errors for small diameter spheres. Another issue is that there are practical difficulties in fabricating and in filling very small spheres (< 3 mm diameter). The need for lesion detection performance assessment of small-animal scanners has motivated our development of a micro-hollow sphere phantom that utilizes the principle of superposition. The phantom is fabricated by stereolithography and has interchangeable sectors containing hollow spheres with volumes ranging from 1 to 14 μL (diameters ranging from 1.25 to 3.0 mm). A simple 60° internal rotation switches the positions of three such sectors with their corresponding background regions. Raw data from scans of each rotated configuration are combined and reconstructed to yield superposition images. Since the sphere counts and background counts are acquired separately, the wall effect is eliminated. The raw data are subsampled randomly prior to summation and reconstruction to specify the desired spheres-to-background contrast ratio of the superposition image. A set of images with multiple contrast ratios is generated for visual assessment of lesion detection thresholds. To demonstrate the utility of the phantom, data were acquired with a multi-pinhole SPECT/CT scanner. Micro-liter syringes were successful in filling the small hollow spheres, and the accuracy of the dispensed volume was validated through repeated filling and weighing of the spheres. The phantom’s internal rotation and the data analysis process were successful in producing the expected superposition

  18. Laser transmission welding of absorber-free thermoplastics using dynamic beam superposition

    NASA Astrophysics Data System (ADS)

    Mamuschkin, Viktor; Olowinsky, Alexander; van der Straeten, Kira; Engelmann, Christoph

    2015-03-01

    So far, the main approach to welding absorber-free thermoplastics has been to exploit their intrinsic absorption by choosing a proper laser wavelength. In order to restrict melting of the joining partners to the interface, optics with a high numerical aperture are usually used. However, practice shows that the heat-affected zone (HAZ) extends over a large area along the beam axis regardless of the optics used. Without clamping or convective cooling, thermally induced expansion of the material can cause blowholes or deformation of the irradiated surface. To reduce the thermal stress on the part surface, a dynamic beam superposition is investigated in which the laser beam performs a precession movement.

  19. Strategies for reducing basis set superposition error (BSSE) in O/Au and O/Ni

    NASA Astrophysics Data System (ADS)

    Shuttleworth, I. G.

    2015-11-01

    The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔEPAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. The binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane-wave energies.

  20. Effects of Convoluted Divergent Flap Contouring on the Performance of a Fixed-Geometry Nonaxisymmetric Exhaust Nozzle

    NASA Technical Reports Server (NTRS)

    Asbury, Scott C.; Hunter, Craig A.

    1999-01-01

    An investigation was conducted in the model preparation area of the Langley 16-Foot Transonic Tunnel to determine the effects of convoluted divergent-flap contouring on the internal performance of a fixed-geometry, nonaxisymmetric, convergent-divergent exhaust nozzle. Testing was conducted at static conditions using a sub-scale nozzle model with one baseline and four convoluted configurations. All tests were conducted with no external flow at nozzle pressure ratios from 1.25 to approximately 9.50. Results indicate that baseline nozzle performance was dominated by unstable, shock-induced, boundary-layer separation at overexpanded conditions. Convoluted configurations were found to significantly reduce, and in some cases totally alleviate separation at overexpanded conditions. This result was attributed to the ability of convoluted contouring to energize and improve the condition of the nozzle boundary layer. Separation alleviation offers potential for installed nozzle aeropropulsive (thrust-minus-drag) performance benefits by reducing drag at forward flight speeds, even though this may reduce nozzle thrust ratio as much as 6.4% at off-design conditions. At on-design conditions, nozzle thrust ratio for the convoluted configurations ranged from 1% to 2.9% below the baseline configuration; this was a result of increased skin friction and oblique shock losses inside the nozzle.

  1. Gamma Knife radiosurgery with CT image-based dose calculation.

    PubMed

    Xu, Andy Yuanguang; Bhatnagar, Jagdish; Bednarz, Greg; Niranjan, Ajay; Kondziolka, Douglas; Flickinger, John; Lunsford, L Dade; Huq, M Saiful

    2015-01-01

    The Leksell GammaPlan software version 10 introduces a CT image-based segmentation tool for automatic skull definition and a convolution dose calculation algorithm for tissue inhomogeneity correction. The purpose of this work was to evaluate the impact of these new approaches on routine clinical Gamma Knife treatment planning. Sixty-five patients who underwent CT image-guided Gamma Knife radiosurgeries at the University of Pittsburgh Medical Center in recent years were retrospectively investigated. The diagnoses for these cases include trigeminal neuralgia, meningioma, acoustic neuroma, AVM, glioma, and benign and metastatic brain tumors. Dose calculations were performed for each patient with the same dose prescriptions and the same shot arrangements using three different approaches: 1) TMR 10 dose calculation with imaging skull definition; 2) convolution dose calculation with imaging skull definition; 3) TMR 10 dose calculation with conventional measurement-based skull definition. For each treatment matrix, the total treatment time, the target coverage index, the selectivity index, the gradient index, and a set of dose statistics parameters were compared between the three calculations. The dose statistics parameters investigated include the prescription isodose volume, the 12 Gy isodose volume, and the minimum, maximum, and mean doses for the treatment targets and the critical structures under consideration. The differences between the convolution and the TMR 10 dose calculations for the 104 treatment matrices were found to vary with the patient anatomy, the location of the treatment shots, and the tissue inhomogeneities around the treatment target. An average difference of 8.4% was observed in the total treatment times between the convolution and the TMR algorithms. The maximum differences in the treatment times, the prescription isodose volumes, the 12 Gy isodose volumes, the target coverage indices, the selectivity indices, and the gradient indices from the convolution

  2. Phase sensitivity in deformed-state superposition considering nonlinear phase shifts

    NASA Astrophysics Data System (ADS)

    Berrada, K.

    2016-07-01

    We study the problem of phase estimation for the deformed-state superposition (DSS) under perfect and lossy (due to a dissipative interaction of the DSS with its environment) regimes. The study is also devoted to the phase enhancement of the quantum states resulting from a generalized nonlinearity of the phase shifts, both without and with losses. We find that such a superposition can give the smallest variance in the phase parameter in comparison with the usual Schrödinger cat states at different orders of nonlinearity, even for a larger average number of photons. Given the significance of how a system is quantum correlated with its environment in the construction of a scalable quantum computer, the entanglement between the DSS and its environment is investigated during the dissipation. We show that partial entanglement trapping occurs during the dynamics, depending on the kind of deformation and the mean photon number. These features make the DSS with a larger average number of photons a good candidate for the implementation of schemes of quantum optics and information with high precision.

  3. Strain-Rate Frequency Superposition (SRFS) - A rheological probe of structural relaxation in soft materials

    NASA Astrophysics Data System (ADS)

    Wyss, Hans M.

    2007-03-01

    The rheological properties of soft materials such as concentrated suspensions, emulsions, or foams often exhibit surprisingly universal linear and nonlinear features. Here we show that their linear and nonlinear viscoelastic responses can be unified in a single picture by considering the effect of the strain-rate amplitude on the structural relaxation of the material. We present a new approach to oscillatory rheology, which keeps the strain rate amplitude fixed as the oscillation frequency is varied. This allows for a detailed study of the effects of strain rate on the structural relaxation of soft materials. Our data exhibits a characteristic scaling, which isolates the response due to structural relaxation, even when it occurs at frequencies too low to be accessible with standard techniques. Our approach is reminiscent of a technique called time-temperature superposition (TTS), where rheological curves measured at different temperatures are shifted onto a single master curve that reflects the viscoelastic behavior in a dramatically extended range of frequencies. By analogy, we call our approach strain-rate frequency superposition (SRFS). Our experimental results show that nonlinear viscoelastic measurements contain useful information on the slow relaxation dynamics of soft materials. The data indicates that the yielding behavior of soft materials directly probes the structural relaxation process itself, shifted towards higher frequencies by an applied strain rate. This suggests that SRFS will provide new insight into the physical mechanisms that govern the viscoelastic response of a wide range of soft materials.

  4. More accurate matrix-matched quantification using standard superposition method for herbal medicines.

    PubMed

    Liu, Ying; Shi, Xiao-Wei; Liu, E-Hu; Sheng, Long-Sheng; Qi, Lian-Wen; Li, Ping

    2012-09-01

    Various analytical technologies have been developed for the quantitative determination of marker compounds in herbal medicines (HMs). One important issue is matrix effects, which must be addressed in method validation for different detections. Unlike biological fluids, blank matrix samples for calibration are usually unavailable for HMs. In this work, practical approaches for minimizing matrix effects in HM analysis were proposed. The matrix effects in the quantitative analysis of five saponins from Panax notoginseng were assessed using high-performance liquid chromatography (HPLC). Matrix components were found to interfere with the ionization of target analytes when mass spectrometry (MS) detection was employed. To compensate for the matrix signal suppression/enhancement, two matrix-matched methods, the standard addition method with a target-knockout extract and the standard superposition method with an HM extract, were developed and tested in this work. The results showed that the standard superposition method is simple and practical for overcoming matrix effects in the quantitative analysis of HMs. Moreover, the interfering components were observed to disturb the light scattering of target analytes when evaporative light scattering detection (ELSD) was utilized, but not when ultraviolet (UV) detection was employed. Thus, the issue of interference effects should be addressed and minimized in quantitative HPLC-ELSD and HPLC-MS methodologies for the quality control of HMs. PMID:22835696

  5. Berry phase and its sign in quantum superposition states of thermal 87Rb atoms

    NASA Astrophysics Data System (ADS)

    Welte, S.; Basler, C.; Helm, H.

    2014-02-01

    We investigate the Berry phase in an ensemble of thermal 87Rb atoms which we prepare in a superposition state under conditions near and at electromagnetically induced transparency. The geometric phase is imprinted in the atoms by rotating the laboratory magnetic field. Phase-stabilized light fields permit us to monitor phase changes of the atomic sample in a Ramsey-type interferometer by instant probing of the absorptive response of the atoms as well as by monitoring the free-induction decay of the coherent superposition. The absolute sign of the phase is determined by reference to controllable phase shifts imposed by the experimenter. We prove that the geometric phase is independent of the rotational frequency of the magnetic field in the adiabatic regime, that the phase is additive in multiple rotations, and it is independent of the Landé factor of the atomic magnetic moment, as predicted in Berry's seminal paper. We show that the absolute sign of the phase encodes the sign of the observable angular momentum in relation to laboratory coordinates.

  6. BetaSuperposer: superposition of protein surfaces using beta-shapes.

    PubMed

    Kim, Jae-Kwan; Kim, Deok-Soo

    2012-01-01

    The comparison between two protein structures is important for understanding molecular function. In particular, the comparison of protein surfaces to measure their similarity provides another challenge useful for studying molecular evolution, docking, and drug design. This paper presents an algorithm, called the BetaSuperposer, which evaluates the similarity between the surfaces of two structures using the beta-shape, a geometric structure derived from the Voronoi diagram of a molecule. The algorithm performs iterations of mix-and-match between the beta-shapes of two structures to find the optimal superposition, from which a similarity measure is computed, where each mix-and-match step attempts to solve an NP-hard problem. The devised heuristic algorithm, based on an assignment problem formulation, quickly produces a good superposition and an assessment of similarity. The BetaSuperposer was fully implemented and benchmarked against popular programs, Dali and Click, using the SCOP models. The BetaSuperposer is freely available to the public from the Voronoi Diagram Research Center ( http://voronoi.hanyang.ac.kr ). PMID:22812415

  7. Role of superposition of dislocation avalanches in the statistics of acoustic emission during plastic deformation.

    PubMed

    Lebyodkin, M A; Shashkov, I V; Lebedkina, T A; Mathis, K; Dobron, P; Chmelik, F

    2013-10-01

    Various dynamical systems with many degrees of freedom display avalanche dynamics, which is characterized by scale invariance reflected in power-law statistics. The superposition of avalanche processes in real systems driven at a finite velocity may influence the experimental determination of the underlying power law. The present paper reports results of an investigation of this effect using the example of acoustic emission (AE) accompanying plastic deformation of crystals. Indeed, recent studies of AE not only proved that the dynamics of crystal defects obeys power-law statistics, but also led to a hypothesis of universality of the scaling law. We examine the sensitivity of the apparent statistics of AE to the parameters applied to individualize AE events. Two different alloys, MgZr and AlMg, both displaying strong AE but characterized by different plasticity mechanisms, are investigated. It is shown that the power-law indices display good robustness over wide ranges of parameters, even in conditions leading to very strong superposition of AE events, although some deviations from the persistent values are also detected. The totality of the results confirms the scale-invariant character of deformation processes on the scale relevant to AE, but uncovers essential differences between the power-law exponents found for the two kinds of alloys. PMID:24229184

  8. SAS-Pro: Simultaneous Residue Assignment and Structure Superposition for Protein Structure Alignment

    PubMed Central

    Shah, Shweta B.; Sahinidis, Nikolaos V.

    2012-01-01

    Protein structure alignment is the problem of determining an assignment between the amino-acid residues of two given proteins in a way that maximizes a measure of similarity between the two superimposed protein structures. By identifying geometric similarities, structure alignment algorithms provide critical insights into protein functional similarities. Existing structure alignment tools adopt a two-stage approach to structure alignment by decoupling and iterating between the assignment evaluation and structure superposition problems. We introduce a novel approach, SAS-Pro, which addresses the assignment evaluation and structure superposition simultaneously by formulating the alignment problem as a single bilevel optimization problem. The new formulation does not require the sequentiality constraints, thus generalizing the scope of the alignment methodology to include non-sequential protein alignments. We employ derivative-free optimization methodologies for searching for the global optimum of the highly nonlinear and non-differentiable RMSD function encountered in the proposed model. Alignments obtained with SAS-Pro have better RMSD values and larger lengths than those obtained from other alignment tools. For non-sequential alignment problems, SAS-Pro leads to alignments with high degree of similarity with known reference alignments. The source code of SAS-Pro is available for download at http://eudoxus.cheme.cmu.edu/saspro/SAS-Pro.html. PMID:22662161

  9. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    PubMed

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation. PMID:26797612
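
    A minimal PyTorch sketch of a convolutional-plus-LSTM recogniser in the spirit of the framework described above; the channel counts, layer sizes, and window length are illustrative assumptions rather than the paper's exact architecture.

    ```python
    import torch
    import torch.nn as nn

    # Convolutional feature extraction over time, followed by LSTM layers
    # that model the temporal dynamics of the extracted features.
    class ConvLSTM(nn.Module):
        def __init__(self, n_channels=9, n_classes=5):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            )
            self.lstm = nn.LSTM(64, 128, num_layers=2, batch_first=True)
            self.head = nn.Linear(128, n_classes)

        def forward(self, x):                         # x: (batch, time, channels)
            z = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, 64)
            out, _ = self.lstm(z)
            return self.head(out[:, -1])              # classify from last step

    logits = ConvLSTM()(torch.randn(8, 128, 9))       # -> shape (8, 5)
    ```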

  10. ARKCoS: artifact-suppressed accelerated radial kernel convolution on the sphere

    NASA Astrophysics Data System (ADS)

    Elsner, F.; Wandelt, B. D.

    2011-08-01

    We describe a hybrid Fourier/direct space convolution algorithm for compact radial (azimuthally symmetric) kernels on the sphere. For high resolution maps covering a large fraction of the sky, our implementation takes advantage of the inexpensive massive parallelism afforded by consumer graphics processing units (GPUs). Its applications include modeling of instrumental beam shapes in terms of compact kernels, computation of fine-scale wavelet transformations, and optimal filtering for the detection of point sources. Our algorithm works for any pixelization where pixels are grouped into isolatitude rings. Even for kernels that are not bandwidth-limited, ringing features are completely absent on an ECP grid. We demonstrate that they can be highly suppressed on the popular HEALPix pixelization, for which we develop a freely available implementation of the algorithm. As an example application, we show that running on a high-end consumer graphics card our method speeds up beam convolution for simulations of a characteristic Planck high frequency instrument channel by two orders of magnitude compared to the commonly used HEALPix implementation on one CPU core, while typically maintaining a fractional RMS accuracy of about 1 part in 10⁵.
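
    For comparison, the commonly used HEALPix route mentioned above (harmonic-space beam smoothing) takes only a few lines with the healpy package; the map resolution and beam width below are arbitrary example values.

    ```python
    import numpy as np
    import healpy as hp

    # Radial (azimuthally symmetric) beam convolution on the sphere via
    # spherical harmonics -- the baseline that ARKCoS is benchmarked against.
    nside = 512
    sky = np.zeros(hp.nside2npix(nside))
    sky[hp.ang2pix(nside, np.pi / 2, 0.0)] = 1.0   # a single point source

    # Convolve with a Gaussian beam of 30 arcmin FWHM.
    smoothed = hp.smoothing(sky, fwhm=np.radians(0.5))
    ```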

  11. The Probabilistic Convolution Tree: Efficient Exact Bayesian Inference for Faster LC-MS/MS Protein Inference

    PubMed Central

    Serang, Oliver

    2014-01-01

    Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called “causal independence”). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustration example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to O(k log(k)²) and the space to O(k log(k)), where k is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234
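
    The forward pass of the idea is easy to sketch: the distribution of a sum of independent counting variables is obtained by convolving probability mass functions pairwise in a balanced tree. The sketch below uses direct convolution and omits the FFT and the backward (posterior) pass of the paper's method.

    ```python
    import numpy as np

    def convolution_tree(pmfs):
        """Distribution of the sum of independent counting variables,
        computed by pairwise convolution in a balanced tree."""
        layer = [np.asarray(p, dtype=float) for p in pmfs]
        while len(layer) > 1:
            layer = [np.convolve(layer[i], layer[i + 1])
                     if i + 1 < len(layer) else layer[i]
                     for i in range(0, len(layer), 2)]
        return layer[0]

    # Usage: sum of four Bernoulli(0.5) variables -> binomial(4, 0.5) pmf.
    print(convolution_tree([[0.5, 0.5]] * 4))
    # [0.0625 0.25 0.375 0.25 0.0625]
    ```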

  12. Muon Neutrino Disappearance in NOvA with a Deep Convolutional Neural Network Classifier

    NASA Astrophysics Data System (ADS)

    Rocco, Dominick Rosario

    The NuMI Off-axis Neutrino Appearance Experiment (NOvA) is designed to study neutrino oscillation in the NuMI (Neutrinos at the Main Injector) beam. NOvA observes neutrino oscillation using two detectors separated by a baseline of 810 km: a 14 kt Far Detector in Ash River, MN, and a functionally identical 0.3 kt Near Detector at Fermilab. The experiment aims to provide new measurements of Δm²₃₂ and θ₂₃ and has the potential to determine the neutrino mass hierarchy as well as observe CP violation in the neutrino sector. Essential to these analyses is the classification of neutrino interaction events in the NOvA detectors. Raw detector output from NOvA is interpretable as a pair of images which provide orthogonal views of particle interactions. A recent advance in the field of computer vision is the advent of convolutional neural networks, which have delivered top results in the latest image recognition contests. This work presents an approach novel to particle physics analysis in which a convolutional neural network is used for the classification of particle interactions. The approach has been demonstrated to improve the signal efficiency and purity of the event selection, and thus the physics sensitivity. Early NOvA data have been analyzed (2.74 × 10²⁰ POT, 14 kt equivalent) to provide new best-fit measurements of sin²(θ₂₃) = 0.43 (with a statistically degenerate complement near 0.60) and of Δm²₃₂.

  13. Projection of fMRI data onto the cortical surface using anatomically-informed convolution kernels.

    PubMed

    Operto, G; Bulot, R; Anton, J-L; Coulon, O

    2008-01-01

    As surface-based data analysis offers an attractive approach for intersubject matching and comparison, the projection of voxel-based 3D volumes onto the cortical surface is an essential problem. We present here a method that aims at producing representations of functional brain data on the cortical surface from functional MRI volumes. Such representations are for instance required for subsequent cortical-based functional analysis. We propose a projection technique based on the definition, around each node of the gray/white matter interface mesh, of convolution kernels whose shape and distribution rely on the geometry of the local anatomy. For one anatomy, a set of convolution kernels is computed that can be used to project any functional data registered with this anatomy. Resulting in anatomically informed projections of data onto the cortical surface, this kernel-based approach offers better sensitivity and specificity than other classical methods, as well as robustness to misregistration errors. The influence of mesh and volume spatial resolutions was also estimated for various projection techniques, using simulated functional maps. PMID:17931891

  14. Efficient pedestrian detection from aerial vehicles with object proposals and deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Minnehan, Breton; Savakis, Andreas

    2016-05-01

    As Unmanned Aerial Systems grow in numbers, pedestrian detection from aerial platforms is becoming a topic of increasing importance. By providing greater contextual information and a reduced potential for occlusion, the aerial vantage point provided by Unmanned Aerial Systems is highly advantageous for many surveillance applications, such as target detection, tracking, and action recognition. However, due to the greater distance between the camera and the scene, targets of interest in aerial imagery are generally smaller and have less detail. Deep convolutional neural networks (CNNs) have demonstrated excellent object classification performance, and in this paper we adapt them to the problem of pedestrian detection from aerial platforms. We train a CNN with five layers consisting of three convolution-pooling layers and two fully connected layers. We also address the computational inefficiencies of the sliding window method for object detection. In the sliding window configuration, a very large number of candidate patches are generated from each frame, while only a small number of them contain pedestrians. We utilize the Edge Boxes object proposal generation method to screen candidate patches based on an "objectness" criterion, so that only regions that are likely to contain objects are processed. This method significantly reduces the number of image patches processed by the neural network and makes our classification method very efficient. The resulting two-stage system is a good candidate for real-time implementation onboard modern aerial vehicles. Furthermore, testing on three datasets confirmed that our system offers high detection accuracy for terrestrial pedestrian detection in aerial imagery.

  15. Calcium-activated chloride currents in primary cultures of rabbit distal convoluted tubule.

    PubMed

    Bidet, M; Tauc, M; Rubera, I; de Renzis, G; Poujeol, C; Bohn, M T; Poujeol, P

    1996-10-01

    Chloride (Cl-) conductances were studied in primary cultures of rabbit distal convoluted tubule (very early distal "bright" convoluted tubule, DCTb) by the whole cell patch-clamp technique. We identified a Cl- current activated by 2 microM extracellular ionomycin. The kinetics of the macroscopic current were time dependent for depolarizing potentials, with a slowly developing component. The steady-state current presented outward rectification, and the ion selectivity sequence was I- > Br- > Cl- > glutamate. The current was inhibited by 0.1 mM 5-nitro-2-(3-phenylpropyl-amino)benzoic acid, 1 mM 4,4'-diisothiocyanostilbene-2,2'-disulfonic acid, and 1 mM diphenylamine-2-carboxylate. To identify the location of the Cl- conductance, 6-methoxy-N-(3-sulfopropyl)quinolinium fluorescence experiments were carried out in confluent cultures developed on collagen-coated permeable filters. Cl- removal from the apical solution induced a Cl- efflux that was stimulated by 10 microM forskolin. Forskolin had no effect on the basolateral Cl- permeability. Cl- substitution in the basolateral solution induced an efflux stimulated by 2 microM ionomycin or 50 microM extracellular ATP. Ionomycin had no effect on the apical Cl- fluxes. Thus cultured DCTb cells exhibit Ca(2+)-activated Cl- channels located in the basolateral membrane. This Cl- permeability is active at the resting membrane potential and could participate in Cl- reabsorption across the DCTb under control conditions. PMID:8898026

  16. Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network.

    PubMed

    Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong

    2016-01-01

    We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including size, number and convolutional stride of local receptive fields, dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods. PMID:26864172

  17. Performance of convolutional codes on fading channels typical of planetary entry missions

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.; Reale, T. J.

    1974-01-01

    The performance of convolutional codes on fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift keyed modulation and Viterbi maximum likelihood decoding; for longer constraint length codes, sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to the modeling of the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes, the bit error probability performance was investigated as a function of Eb/N0, parameterized by the fading channel parameters. For longer constraint length codes, the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effectiveness of simple block interleaving in combatting the memory of the channel is explored, using either an analytic approach or digital computer simulation.

  18. Large patch convolutional neural networks for the scene classification of high spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Yanfei; Fei, Feng; Zhang, Liangpei

    2016-04-01

    The increase in the spatial resolution of remote-sensing sensors helps to capture the abundant details related to the semantics of surface objects. However, it is difficult for the popular object-oriented classification approaches to acquire higher-level semantics from high spatial resolution remote-sensing (HSR-RS) images, which is often referred to as the "semantic gap." Instead of designing sophisticated operators, convolutional neural networks (CNNs), a typical deep learning method, can automatically discover intrinsic feature descriptors from a large number of input images to bridge the semantic gap. Due to the small data volume of the available HSR-RS scene datasets, which is far smaller than that of the natural scene datasets, there have been few reports of CNN approaches for HSR-RS image scene classification. We propose a practical CNN architecture for HSR-RS scene classification, named the large patch convolutional neural network (LPCNN). Large patch sampling is used to generate hundreds of possible scene patches for feature learning, and a global average pooling layer is used to replace the fully connected network as the classifier, which can greatly reduce the total number of parameters. The experiments confirm that the proposed LPCNN can learn effective local features to form an effective representation for different land-use scenes, and can achieve a performance comparable to the state-of-the-art on public HSR-RS scene datasets.

  19. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    PubMed Central

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612

  20. Noise-induced bias for convolution-based interpolation in digital image correlation.

    PubMed

    Su, Yong; Zhang, Qingchuan; Gao, Zeren; Xu, Xiaohai

    2016-01-25

    In digital image correlation (DIC), the noise-induced bias is significant if the noise level is high or the contrast of the image is low. However, existing methods for the estimation of the noise-induced bias are only applicable to traditional interpolation methods such as linear and cubic interpolation, not to generalized interpolation methods such as B-spline and O-MOMS. Both traditional and generalized interpolation belong to convolution-based interpolation. Considering the wide use of generalized interpolation, this paper presents a theoretical analysis of the noise-induced bias for convolution-based interpolation. A sinusoidal approximate formula for the noise-induced bias is derived; this formula motivates an estimation strategy that is fast, easy, and accurate; furthermore, based on this formula, the mechanism by which sophisticated interpolation methods generally reduce the noise-induced bias is revealed. The validity of the theoretical analysis is established by both numerical simulations and an actual subpixel translation experiment. Compared to existing methods, the formulae provided by this paper are simpler, briefer, and more general. In addition, a more intuitive explanation of the cause of the noise-induced bias is provided by quantitatively characterizing the position dependence of the noise variability in the spatial domain. PMID:26832501

  1. Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network

    PubMed Central

    Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong

    2016-01-01

    We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including size, number and convolutional stride of local receptive fields, dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods. PMID:26864172

  2. Region-Based Convolutional Networks for Accurate Object Detection and Segmentation.

    PubMed

    Girshick, Ross; Donahue, Jeff; Darrell, Trevor; Malik, Jitendra

    2016-01-01

    Object detection performance, as measured on the canonical PASCAL VOC Challenge datasets, plateaued in the final years of the competition. The best-performing methods were complex ensemble systems that typically combined multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 50 percent relative to the previous best result on VOC 2012, achieving a mAP of 62.4 percent. Our approach combines two ideas: (1) one can apply high-capacity convolutional networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data are scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, boosts performance significantly. Since we combine region proposals with CNNs, we call the resulting model an R-CNN or Region-based Convolutional Network. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn. PMID:26656583

  3. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  4. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  5. Revision of the theory of tracer transport and the convolution model of dynamic contrast enhanced magnetic resonance imaging

    PubMed Central

    Bammer, Roland; Stollberger, Rudolf

    2012-01-01

    Counterexamples are used to motivate the revision of the established theory of tracer transport. Then dynamic contrast enhanced magnetic resonance imaging in particular is conceptualized in terms of a fully distributed convection–diffusion model from which a widely used convolution model is derived using, alternatively, compartmental discretizations or semigroup theory. On this basis, applications and limitations of the convolution model are identified. For instance, it is proved that perfusion and tissue exchange states cannot be identified on the basis of a single convolution equation alone. Yet under certain assumptions, particularly that flux is purely convective at the boundary of a tissue region, physiological parameters such as mean transit time, effective volume fraction, and volumetric flow rate per unit tissue volume can be deduced from the kernel. PMID:17429633
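
    A toy numerical version of the convolution model described above, assuming a gamma-variate arterial input function and a mono-exponential residue kernel; all parameter values are made up for illustration.

    ```python
    import numpy as np

    # Tissue concentration curve = arterial input function convolved with a
    # residue kernel, scaled by flow per unit tissue volume.
    dt = 0.5                                     # time step, s
    t = np.arange(0, 120, dt)
    aif = (t / 10.0) * np.exp(-t / 10.0)         # gamma-variate input function
    flow, mtt = 0.01, 8.0                        # flow per unit volume; mean transit time, s
    residue = np.exp(-t / mtt)                   # tracer fraction still inside at time t

    tissue = flow * np.convolve(aif, residue)[: t.size] * dt
    ```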

  6. The effect of dose calculation accuracy on inverse treatment planning

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Keall, Paul J.; Siebers, Jeffrey V.

    2002-02-01

    The effect of dose calculation accuracy during inverse treatment planning for intensity modulated radiotherapy (IMRT) was studied in this work. Three dose calculation methods were compared: Monte Carlo, superposition and pencil beam. These algorithms were used to calculate beamlets, which were subsequently used by a simulated annealing algorithm to determine beamlet weights which comprised the optimal solution to the objective function. Three different cases (lung, prostate and head and neck) were investigated and several different objective functions were tested for their effect on inverse treatment planning. It is shown that the use of inaccurate dose calculation introduces two errors in a treatment plan, a systematic error and a convergence error. The systematic error is present because of the inaccuracy of the dose calculation algorithm. The convergence error appears because the optimal intensity distribution for inaccurate beamlets differs from the optimal solution for the accurate beamlets. While the systematic error for superposition was found to be ~1% of Dmax in the tumour and slightly larger outside, the error for the pencil beam method is typically ~5% of Dmax and is rather insensitive to the given objectives. On the other hand, the convergence error was found to be very sensitive to the objective function, is only slightly correlated to the systematic error and should be determined for each case individually. Our results suggest that because of the large systematic and convergence errors, inverse treatment planning systems based on pencil beam algorithms alone should be upgraded either to superposition or Monte Carlo based dose calculations.
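
    The two error types can be illustrated with a toy linear dose model, dose = D·w for beamlet weights w. In the sketch below, D_acc stands in for accurate (e.g., Monte Carlo) beamlets and D_fast for approximate (e.g., pencil beam) beamlets; the matrices and the 5% model error are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_vox, n_beam = 50, 10
    D_acc = rng.uniform(0.5, 1.0, (n_vox, n_beam))            # "accurate" beamlets
    D_fast = D_acc * (1 + 0.05 * rng.standard_normal((n_vox, n_beam)))
    target = np.full(n_vox, 60.0)                             # prescribed dose, Gy

    # Optimize weights against each dose model (least-squares stand-in for
    # the simulated annealing optimizer of the paper).
    w_acc, *_ = np.linalg.lstsq(D_acc, target, rcond=None)
    w_fast, *_ = np.linalg.lstsq(D_fast, target, rcond=None)

    systematic = D_fast @ w_fast - D_acc @ w_fast    # dose error of the fast model itself
    convergence = D_acc @ w_fast - D_acc @ w_acc     # true-dose change from optimizing
                                                     # on the wrong beamlets
    ```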

  7. Implementation of a Gauss convoluted Pandel PDF for track reconstruction in neutrino telescopes

    NASA Astrophysics Data System (ADS)

    van Eijndhoven, N.; Fadiran, O.; Japaridze, G.

    2007-12-01

    A probability distribution function is presented which provides a realistic description of the detection of scattered photons. The resulting probabilities can be described analytically by means of a superposition of several special functions. These exact expressions can be evaluated numerically only for small distances and limited time residuals, due to computer accuracy limitations. In this report, we provide approximations for the exact expressions in different regions of the distance-time residual space, defined by the detector geometry and the space-time scale of an event. These approximations can be evaluated numerically with a relative error with respect to the exact expression at the boundaries of less than 10⁻³.
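
    Where the analytic expressions become numerically fragile, a direct numerical convolution provides a reference curve. The sketch below convolves a gamma-shaped stand-in for the Pandel PDF with a Gaussian jitter; the shape, scale, and jitter width are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import gamma, norm

    dt = 0.5                                   # time-residual step, ns
    t = np.arange(-50, 400, dt)
    pandel = gamma.pdf(t, a=2.5, scale=20.0)   # gamma-shaped stand-in for Pandel

    # Symmetric Gaussian jitter kernel (sigma = 5 ns), centered on zero so
    # that mode="same" keeps the time axis aligned with `pandel`.
    tk = np.arange(-25, 25 + dt, dt)
    kernel = norm.pdf(tk, loc=0.0, scale=5.0)

    convolved = np.convolve(pandel, kernel, mode="same") * dt
    ```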

  8. Tally modifying of MCNP and post processing of pile-up simulation with time convolution method in PGNAA

    NASA Astrophysics Data System (ADS)

    Asghar Mowlavi, Ali; Koohi-Fayegh, Rahim

    2005-11-01

    The time convolution method has been employed for pile-up simulation in prompt gamma neutron activation analysis with an Am-Be neutron source and a 137Cs gamma source. A TALLYX subroutine was written to design a new tally in the MCNP code. This tally records gamma particle information for the detector cell into an output file to be processed later. The times at which the particles are emitted by the source are randomly generated following an exponential decay time distribution. A time convolution program was then written to process the data produced and simulate more realistic pile-up. This method can be applied in optimization studies.
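
    The essence of the post-processing step can be sketched without MCNP: draw exponential inter-arrival times, place a pulse shape at each emission time, and let overlapping pulses sum into pile-up. The rate, sampling parameters, and pulse shape below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    rate = 5e4                                   # mean event rate, 1/s
    fs = 1e7                                     # sampling rate, Hz
    duration = 0.01                              # trace length, s

    # Exponential inter-arrival times -> Poisson emission times.
    arrivals = np.cumsum(rng.exponential(1 / rate, int(rate * duration * 2)))
    arrivals = arrivals[arrivals < duration]

    trace = np.zeros(int(duration * fs))
    tp = np.arange(0, 2e-6, 1 / fs)              # 2 us pulse window
    pulse = np.exp(-tp / 5e-7) - np.exp(-tp / 5e-8)   # detector pulse shape

    # Superpose pulses; overlapping ones sum, producing pile-up.
    for t0 in arrivals:
        i = int(t0 * fs)
        seg = pulse[: trace.size - i]
        trace[i : i + seg.size] += rng.uniform(0.5, 1.0) * seg  # random amplitude
    ```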

  9. On the application of a fast polynomial transform and the Chinese remainder theorem to compute a two-dimensional convolution

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Lipes, R.; Reed, I. S.; Wu, C.

    1980-01-01

    A fast algorithm is developed to compute two-dimensional convolutions of an array of d₁ × d₂ complex number points, where d₂ = 2^m and d₁ = 2^(m-r+1) for some 1 ≤ r ≤ m. This algorithm requires fewer multiplications and about the same number of additions as the conventional fast Fourier transform method for computing the two-dimensional convolution. It also has the advantage that the operation of transposing the matrix of data can be avoided.
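
    For reference, the conventional FFT method that the algorithm is compared against follows directly from the convolution theorem; the array sizes below are arbitrary powers of two.

    ```python
    import numpy as np

    # Cyclic 2D convolution of d1 x d2 complex arrays via the FFT.
    def cyclic_conv2d(a, b):
        return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b))

    rng = np.random.default_rng(0)
    a = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))
    b = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))
    c = cyclic_conv2d(a, b)    # same (d1, d2) shape, cyclic boundary conditions
    ```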

  10. Superposition-additive approach in the description of thermodynamic parameters of formation and clusterization of substituted alkanes at the air/water interface.

    PubMed

    Vysotsky, Yu B; Belyaeva, E A; Fomina, E S; Vasylyev, A O; Vollhardt, D; Fainerman, V B; Aksenenko, E V; Miller, R

    2012-12-01

    The superposition-additive approach developed previously was shown to be applicable for the calculation of the thermodynamic parameters of formation and atomization of conjugate systems, their dipole polarizability, molecular diamagnetic susceptibility, π-electronic ring currents, etc. In the present work, the applicability of this approach for the calculation of the thermodynamic parameters of formation and clusterization at the air/water interface of alkanes, fatty alcohols, thioalcohols, amines, nitriles, fatty acids (C(n)H(2n+1)X, where X is the functional group) and cis-unsaturated carboxylic acids (C(n)H(2n-1)COOH) is studied. The thermodynamic quantities determined using the proposed approach agree well with the available data, either calculated using the semiempirical (PM3) quantum chemical method or obtained in experiments. In particular, for the enthalpy and Gibbs' energy of formation of substituted alkane monomers from the elementary substances, and for their absolute entropy, the standard deviations of the values calculated according to the superposition-additive scheme with the mutual superimposition domain C(n-2)H(2n-4) (n is the number of carbon atoms in the alkyl chain) from the results of PM3 calculations for alkanes, alcohols, thioalcohols, amines, fatty acids, nitriles and cis-unsaturated carboxylic acids are, respectively: 0.05, 0.004, 2.87, 0.02, 0.01, 0.77, and 0.01 kJ/mol for the enthalpy; 2.32, 5.26, 4.49, 0.53, 1.22, 1.02, and 5.30 J/(mol·K) for the absolute entropy; and 0.69, 1.56, 3.82, 0.15, 0.37, 0.69, and 1.58 kJ/mol for the Gibbs' energy, whereas the deviations from the experimental data are 0.52, 5.75, 1.40, 1.00, and 4.86 kJ/mol; 0.52, 0.63, 1.40, 6.11, and 2.21 J/(mol·K); and 2.52, 5.76, 1.58, 1.78, and 4.86 kJ/mol, respectively (for nitriles and cis-unsaturated carboxylic acids experimental data are not available). The proposed approach also provides quite accurate estimates of the enthalpy, entropy and Gibbs' energy of boiling and melting, critical temperatures and standard heat
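
    A minimal sketch of the superposition-additive arithmetic as it might apply to a homologous series: two (n-1)-mers are superimposed on the shared C(n-2) fragment, so the property of the n-mer is estimated as P(n) ≈ 2·P(n-1) - P(n-2). The function name and the enthalpy values below are hypothetical, chosen only to show the arithmetic.

    ```python
    # Superposition-additive estimate for a homologous series: combine two
    # overlapping (n-1)-mer fragments and subtract the shared (n-2) domain.
    def superposition_additive(p_nm1, p_nm2):
        """Estimate property P of the n-mer from the (n-1)- and (n-2)-mers."""
        return 2 * p_nm1 - p_nm2

    h9, h8 = -250.3, -229.6                 # hypothetical enthalpies, kJ/mol
    print(superposition_additive(h9, h8))   # -> -271.0 kJ/mol for the 10-mer
    ```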

  11. Study and verification of the superposition method used for determining the pressure losses of the heat exchangers

    NASA Astrophysics Data System (ADS)

    Petru, Michal; Kulhavy, Petr; Srb, Pavel; Rachitsky, Gary

    2015-05-01

    This paper deals with the study of the pressure losses of a new product line of heat convectors. For every device connected to the heating circuit of a building, tabulated values of the pressure drops must be declared. The heat exchangers are manufactured in many different dimensions and atypical shapes, and an individual assessment of the pressure losses for each type is very time consuming. Therefore, based on the resulting data of the experiments and numerical models, an electronic database was created that can be used for calculating the total pressure losses of an optionally assembled exchanger. The measurements are routinely performed in the hydrodynamic laboratory of the manufacturer, Licon heat, and the numerical models are carried out in COMSOL Multiphysics. Different variations of the convector geometry cause a nonlinear behaviour of the energy losses, which are proportionally about 30% larger for the smaller exchangers than for the larger types. The results of the experiments and the numerical simulations were in very good agreement. A considerable influence of the water temperature on the total energy losses has been demonstrated; this is mainly caused by the different ranges of the Reynolds number, which depend on the viscosity of the liquid used. Concerning the tested superposition method, it is not possible to easily find characteristic values appropriate for each individual component of the heat exchanger, since each component behaves differently depending on the complexity of the exchanger. However, a correction coefficient, dependent on the matrix of the exchanger, has been found that is suitable for the entire range of the developed product line.
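
    One plausible reading of the corrected superposition described above, with purely illustrative names and numbers (the abstract does not give the functional form of the correction), is:

        def total_pressure_loss(component_losses_pa, k_matrix):
            # Schematic corrected superposition: sum the tabulated
            # per-component losses and scale by a matrix-dependent
            # correction coefficient, since the plain sum alone was
            # found not to hold for every exchanger configuration.
            return k_matrix * sum(component_losses_pa)

        # Hypothetical values: inlet, tube bundle, bends, outlet (Pa).
        dp = total_pressure_loss([120.0, 840.0, 95.0, 60.0], k_matrix=1.18)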

  12. Superposition of nonparaxial vectorial complex-source spherically focused beams: Axial Poynting singularity and reverse propagation

    NASA Astrophysics Data System (ADS)

    Mitri, F. G.

    2016-08-01

    In this work, counterintuitive effects such as the generation of an axial (i.e., along the direction of wave motion) zero-energy flux density (i.e., an axial Poynting singularity) and reverse (i.e., negative) propagation of nonparaxial quasi-Gaussian electromagnetic (EM) beams are examined. Generalized analytical expressions for the EM field components of a coherent superposition of two high-order quasi-Gaussian vortex beams of opposite handedness and different amplitudes are derived based on the complex-source-point method, stemming from Maxwell's vector equations and the Lorenz gauge condition. The general solutions exhibiting these unusual effects satisfy the Helmholtz and Maxwell's equations. The EM beam components are characterized by nonzero integer degree and order (n, m), respectively, an arbitrary waist w0, a diffraction convergence length known as the Rayleigh range zR, and a real weighting factor 0 ≤ α ≤ 1 that describes the transition of the beam from a purely vortex (α = 0) to a nonvortex (α = 1) type. An attractive feature of this superposition is the description of strongly focused (or strongly divergent) wave fields. Computations of the EM power density as well as the linear and angular momentum density fluxes illustrate the analysis, with particular emphasis on the polarization states of the vector potentials forming the beams and on the weight of the coherent beam superposition causing the transition from the vortex to the nonvortex type. Should certain conditions determined by the polarization state of the vector potentials and the beam parameters be met, an axial zero-energy flux density is predicted, in addition to a negative retrograde propagation effect. Moreover, a rotation reversal of the angular momentum flux density with respect to the beam handedness is anticipated, suggesting the possible generation of negative (left-handed) torques. The results are particularly useful in applications involving the design of strongly focused optical laser
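
    A scalar cartoon of the azimuthal structure only (the paper's beams are full vector complex-source solutions) may help visualize the α-controlled transition; the function name and sampling below are hypothetical:

        import numpy as np

        def vortex_pair(phi, m, alpha):
            # Two vortex components of opposite handedness, weighted by
            # alpha: alpha = 0 gives a pure vortex exp(i*m*phi), while
            # alpha = 1 gives a balanced superposition proportional to
            # cos(m*phi), i.e. a nonvortex standing pattern.
            return np.exp(1j * m * phi) + alpha * np.exp(-1j * m * phi)

        phi = np.linspace(0.0, 2.0 * np.pi, 512)
        u = vortex_pair(phi, m=2, alpha=0.5)   # intermediate, partially vortex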

  13. Convolution-based estimation of organ dose in tube current modulated CT.

    PubMed

    Tian, Xiaoyu; Segars, W Paul; Dixon, Robert L; Samei, Ehsan

    2016-05-21

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and of the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients ([Formula: see text]) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate [Formula: see text] values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying [Formula: see text] by the organ dose coefficients ([Formula: see text]). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using the Monte Carlo program with the TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and the dose simulated using the TCM Monte Carlo program was quantified. We further compared the
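
    A minimal sketch of the convolution step, assuming a one-dimensional axial model with hypothetical profile, kernel, and coefficient values (not the authors' validated pipeline), is:

        import numpy as np

        def organ_dose(mA_profile_z, spread_kernel_z, organ_occupancy_z,
                       h_organ, ctdi_vol):
            # Smear the TCM output profile with a dose-spread kernel to
            # approximate the axial dose field, average that field over
            # the organ's axial extent, and scale the CTDIvol-normalized
            # organ dose coefficient.
            dose_field = np.convolve(mA_profile_z, spread_kernel_z, mode="same")
            regional = np.average(dose_field, weights=organ_occupancy_z)
            return regional * h_organ * ctdi_vol

        mA = np.array([0.6, 0.8, 1.0, 1.2, 1.0, 0.7, 0.5])    # TCM output
        kernel = np.array([0.2, 0.6, 1.0, 0.6, 0.2])          # dose spread
        organ = np.array([0.0, 0.1, 0.8, 1.0, 0.7, 0.2, 0.0]) # occupancy
        dose_mGy = organ_dose(mA, kernel, organ, h_organ=1.4, ctdi_vol=10.0)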

  14. Comment on "Fast determination of the optimal rotational matrix for macromolecular superpositions" [J. Comp. Chem. 31, 1561 (2010)].

    PubMed

    Kneller, Gerald R

    2011-01-15

    Recently Liu et al. published a fast algorithm to solve the eigenvector problem arising in the quaternion-based method for the rotational superposition of molecular structures (J Comput Chem 2010, 31, 1561.). In this Comment, it is shown that the construction of the 4 × 4 matrix to be diagonalized—and not the diagonalization itself—represents the dominating part of the computational effort for the quaternion-based solution of the rotational superposition problem if molecules with more than about 100 atoms are considered. PMID:20662082
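
    For reference, a standard Horn-type quaternion construction (possibly differing in detail from the variant discussed in the Comment) makes the point concrete: building the 4 × 4 matrix scales with the number of atoms, while its diagonalization does not.

        import numpy as np

        def rotational_superposition(x, y):
            # x, y: (N, 3) coordinate arrays of the two structures.
            x = x - x.mean(axis=0)       # centre both structures
            y = y - y.mean(axis=0)
            R = x.T @ y                  # 3x3 correlation matrix: O(N) work
            F = np.array([
                [R[0,0]+R[1,1]+R[2,2], R[1,2]-R[2,1],         R[2,0]-R[0,2],         R[0,1]-R[1,0]],
                [R[1,2]-R[2,1],        R[0,0]-R[1,1]-R[2,2],  R[0,1]+R[1,0],         R[0,2]+R[2,0]],
                [R[2,0]-R[0,2],        R[0,1]+R[1,0],         -R[0,0]+R[1,1]-R[2,2], R[1,2]+R[2,1]],
                [R[0,1]-R[1,0],        R[0,2]+R[2,0],         R[1,2]+R[2,1],         -R[0,0]-R[1,1]+R[2,2]],
            ])
            vals, vecs = np.linalg.eigh(F)   # fixed 4x4 size: O(1) cost
            return vecs[:, -1]               # quaternion of the optimal rotation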

  15. Decoherence-free evolution of time-dependent superposition states of two-level systems and thermal effects

    SciTech Connect

    Prado, F. O.; Duzzioni, E. I.; Almeida, N. G. de; Moussa, M. H. Y.; Villas-Boas, C. J.

    2011-07-15

    In this paper we detail some results advanced in a recent letter [Prado et al., Phys. Rev. Lett. 102, 073008 (2009)] showing how to engineer reservoirs for two-level systems at absolute zero by means of a time-dependent master equation leading to a nonstationary superposition equilibrium state. We also present a general recipe showing how to build nonadiabatic coherent evolutions of a fermionic system interacting with a bosonic mode and investigate the influence of thermal reservoirs at finite temperature on the fidelity of the protected superposition state. Our analytical results are supported by numerical analysis of the full Hamiltonian model.


  16. Superposition of Cohesive Elements to Account for R-Curve Toughening in the Fracture of Composites

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Rose, Cheryl A.; Song, Kyongchan

    2008-01-01

    The relationships between a resistance curve (R-curve), the corresponding fracture process zone length, the shape of the traction/displacement softening law, and the propagation of fracture are examined in the context of the through-the-thickness fracture of composite laminates. A procedure that accounts for R-curve toughening mechanisms by superposing bilinear cohesive elements is proposed. Simple equations are developed for determining the separation of the critical energy release rates and the strengths that define the independent contributions of each bilinear softening law in the superposition. It is shown that the R-curve measured with a Compact Tension specimen test can be reproduced by superposing two bilinear softening laws. It is also shown that an accurate representation of the R-curve is essential for predicting the initiation and propagation of fracture in composite laminates.
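
    A minimal sketch of superposing two bilinear (triangular) softening laws, with hypothetical strengths and separations, is:

        import numpy as np

        def bilinear(delta, sigma_c, delta_0, delta_f):
            # One triangular traction-separation law: linear rise to the
            # strength sigma_c at delta_0, then linear softening to zero
            # traction at delta_f; its area, 0.5*sigma_c*delta_f, is the
            # law's contribution to the critical energy release rate.
            rise = np.clip(delta / delta_0, 0.0, 1.0)
            soft = np.clip((delta_f - delta) / (delta_f - delta_0), 0.0, 1.0)
            return sigma_c * np.minimum(rise, soft)

        delta = np.linspace(0.0, 0.5, 200)
        # Short-range law (initiation) + long-range law (bridging): the
        # two toughnesses add up to the steady-state R-curve value.
        traction = (bilinear(delta, 60.0, 0.005, 0.02) +
                    bilinear(delta, 10.0, 0.005, 0.5))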

  17. The external magnetic field created by the superposition of identical parallel finite solenoids

    NASA Astrophysics Data System (ADS)

    Lim, Melody Xuan; Greenside, Henry

    2016-08-01

    We use superposition and numerical methods to show that the external magnetic field generated by parallel identical solenoids can be nearly uniform and substantial, even when the solenoids have lengths that are large compared to their radii. We examine both a ring of solenoids and a large hexagonal array of solenoids. In both cases, we discuss how the magnitude and uniformity of the external field depend on the length of and the spacing between the solenoids. We also discuss some novel properties of a single solenoid, e.g., that even for short solenoids the energy stored in the internal magnetic field exceeds the energy stored in the spatially infinite external magnetic field. These results should be broadly interesting to undergraduates learning about electricity and magnetism.
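
    The superposition idea can be sketched for a single solenoid by summing the analytic on-axis fields of its constituent current loops; all parameter values below are illustrative:

        import numpy as np

        MU0 = 4.0e-7 * np.pi   # vacuum permeability (T m / A)

        def solenoid_Bz_on_axis(z, length, radius, n_turns, current,
                                n_loops=1000):
            # Superpose the on-axis fields of n_loops circular loops that
            # together carry the solenoid's ampere-turns.
            z_loops = np.linspace(-length / 2, length / 2, n_loops)
            I_loop = current * n_turns / n_loops
            dz = z - z_loops
            return np.sum(MU0 * I_loop * radius**2 /
                          (2.0 * (radius**2 + dz**2) ** 1.5))

        # Field at the centre of a solenoid ten times longer than its radius;
        # this approaches the ideal mu0*n*I as length/radius grows.
        Bz = solenoid_Bz_on_axis(0.0, length=0.5, radius=0.05,
                                 n_turns=500, current=1.0)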

  18. Digital coherent superposition of optical OFDM subcarrier pairs with Hermitian symmetry for phase noise mitigation.

    PubMed

    Yi, Xingwen; Chen, Xuemei; Sharma, Dinesh; Li, Chao; Luo, Ming; Yang, Qi; Li, Zhaohui; Qiu, Kun

    2014-06-01

    Digital coherent superposition (DCS) provides an approach to combat fiber nonlinearities by trading off spectral efficiency. By analogy, we extend the concept of DCS to optical OFDM subcarrier pairs with Hermitian symmetry to combat linear and nonlinear phase noise. At the transmitter, we simply use a real-valued OFDM signal to drive a Mach-Zehnder (MZ) intensity modulator biased at the null point, and the OFDM signal so generated is Hermitian in the frequency domain. At the receiver, after conventional OFDM signal processing, we conduct DCS of the optical OFDM subcarrier pairs, which requires only conjugation and summation. We show that the inter-carrier interference (ICI) due to phase noise can be reduced because of the Hermitian symmetry. In simulation, this method improves the tolerance to laser phase noise. In a nonlinear WDM transmission experiment, this method also achieves better performance under the influence of cross-phase modulation (XPM). PMID:24921539
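
    The receiver-side combination reduces, in essence, to conjugating one member of each Hermitian pair and averaging; a minimal sketch, assuming the pair mapping k <-> N-k and ignoring equalization details:

        import numpy as np

        def dcs_combine(Y):
            # Y: length-N array of demodulated OFDM subcarriers. Data on
            # subcarrier N-k is the conjugate of that on k, so conjugating
            # one member and averaging reinforces the data while
            # phase-noise ICI contributions partially cancel.
            N = len(Y)
            k = np.arange(1, N // 2)          # skip the DC and Nyquist bins
            return 0.5 * (Y[k] + np.conj(Y[N - k]))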

  19. Composite vortex beams by coaxial superposition of Laguerre-Gaussian beams

    NASA Astrophysics Data System (ADS)

    Huang, Sujuan; Miao, Zhuang; He, Chao; Pang, Fufei; Li, Yingchun; Wang, Tingyun

    2016-03-01

    We propose the generation of novel composite vortex beams by coaxial superposition of Laguerre-Gaussian (LG) beams with a common waist position and waist parameter. Computer-generated holography by conjugate-symmetric extension is applied to produce the holograms of several composite vortex beams. Using these holograms, light modes including optical ring lattices and double dark-ring and double bright-ring composite vortex beams are numerically reconstructed. The generated composite vortex beams show diffraction broadening, with some of them showing dynamic rotation around the beam center while propagating. Optical experiments based on a computer-controlled spatial light modulator (SLM) verify the numerical results. These novel composite vortex beams possess more complicated distributions and more controllable parameters than conventional optical ring lattices, which is attractive for their potential applications.
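
    A minimal sketch of such a coaxial superposition at the waist plane, assuming scipy's generalized Laguerre polynomials and unnormalized modes, is:

        import numpy as np
        from scipy.special import genlaguerre

        def lg_mode(r, phi, p, l, w0):
            # Laguerre-Gaussian mode LG_{p,l} at the waist (z = 0),
            # unnormalized: common waist position and waist parameter w0.
            x = 2.0 * r**2 / w0**2
            radial = (np.sqrt(x) ** abs(l) * genlaguerre(p, abs(l))(x) *
                      np.exp(-x / 2.0))
            return radial * np.exp(1j * l * phi)

        # Coaxial superposition of two LG beams -> composite vortex beam,
        # e.g. an optical ring-lattice intensity pattern.
        r, phi = np.meshgrid(np.linspace(0.0, 3.0, 256),
                             np.linspace(0.0, 2.0 * np.pi, 256),
                             indexing="ij")
        field = (lg_mode(r, phi, p=0, l=3, w0=1.0) +
                 lg_mode(r, phi, p=1, l=-3, w0=1.0))
        intensity = np.abs(field)**2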

  20. Time-Temperature Superposition to Determine the Stress-Rupture of Aramid Fibres

    NASA Astrophysics Data System (ADS)

    Alwis, K. G. N. C.; Burgoyne, C. J.

    2006-07-01

    Conventional creep testing takes a long time to obtain stress-rupture data for aramid fibres at the low stress levels likely to be used in practical applications. However, the rate of creep of aramid can be accelerated by a thermally activated process to obtain the failure of fibres within a few hours. It is possible to obtain creep curves at different temperature levels which can be shifted along the time axis to generate a single curve known as a master curve, from which stress-rupture data can be obtained. This technique is known as the time-temperature superposition principle and will be applied to Kevlar 49 yarns. Important questions relating to the techniques needed to obtain smooth master curves will be discussed, as will the validity of the resulting curves and the corresponding stress-rupture lifetimes.
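
    A minimal sketch of assembling a master curve by horizontal shifting (the sign convention for the shift factors depends on the chosen reference temperature; all names here are illustrative) is:

        import numpy as np

        def build_master_curve(creep_curves, log_shift_factors):
            # creep_curves: list of (log10 time, creep strain) array pairs,
            # one per test temperature. Shift each curve along the log-time
            # axis by its factor log10(aT) so the segments overlap into a
            # single master curve at the reference temperature.
            shifted_t, shifted_s = [], []
            for (log_t, strain), log_aT in zip(creep_curves, log_shift_factors):
                shifted_t.append(log_t - log_aT)
                shifted_s.append(strain)
            log_t_all = np.concatenate(shifted_t)
            strain_all = np.concatenate(shifted_s)
            order = np.argsort(log_t_all)
            return log_t_all[order], strain_all[order]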