Fast convolution-superposition dose calculation on graphics hardware.
Hissoiny, Sami; Ozell, Benoît; Després, Philippe
2009-06-01
The numerical calculation of dose is central to treatment planning in radiation therapy and is at the core of optimization strategies for modern delivery techniques. In a clinical environment, dose calculation algorithms are required to be accurate and fast. The accuracy is typically achieved through the integration of patient-specific data and extensive beam modeling, which generally results in slower algorithms. In order to alleviate execution speed problems, the authors have implemented a modern dose calculation algorithm on a massively parallel hardware architecture. More specifically, they have implemented a convolution-superposition photon beam dose calculation algorithm on a commodity graphics processing unit (GPU). They have investigated a simple porting scenario as well as slightly more complex GPU optimization strategies. They have achieved speed improvement factors ranging from 10 to 20 times with GPU implementations compared to central processing unit (CPU) implementations, with higher values corresponding to larger kernel and calculation grid sizes. In all cases, they preserved the numerical accuracy of the GPU calculations with respect to the CPU calculations. These results show that streaming architectures such as GPUs can significantly accelerate dose calculation algorithms and suggest benefits for numerically intensive processes such as optimization strategies, in particular for complex delivery techniques such as IMRT and arc therapy.
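The operation at the heart of all the convolution-superposition papers collected here is the superposition of an energy-deposition kernel over the TERMA grid. A minimal 1D sketch of that operation (toy TERMA and kernel values, not the authors' GPU code):

```python
import numpy as np

def superpose(terma, kernel):
    """Direct superposition: dose[i] = sum_j terma[j] * kernel[i - j + half]."""
    n = terma.size
    half = kernel.size // 2
    dose = np.zeros(n)
    for j in range(n):                  # each primary interaction site
        for m, w in enumerate(kernel):  # spread released energy via the kernel
            i = j + m - half
            if 0 <= i < n:
                dose[i] += terma[j] * w
    return dose

depth = np.arange(50)
terma = np.exp(-0.05 * depth)                    # toy exponential TERMA vs depth
kernel = np.array([0.05, 0.2, 0.5, 0.2, 0.05])   # toy symmetric deposition kernel
dose = superpose(terma, kernel)
```

In a homogeneous medium this is exactly a discrete convolution; the heterogeneity corrections and kernel tilting discussed in the abstracts break that shift invariance, which is why the full "superposition" form (an explicit loop over interaction sites) is the one ported to GPU.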
A convolution-superposition dose calculation engine for GPUs
Hissoiny, Sami; Ozell, Benoit; Despres, Philippe
2010-03-15
Purpose: Graphics processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x, or 0.58 versus 16.6 s per beam in single threaded mode. An acceleration factor of 46x has been obtained for the total energy released per mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions also have been obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical in obtaining significant acceleration factors. These results potentially can have a significant impact on complex dose delivery techniques requiring intensive dose calculations such as intensity-modulated radiation therapy (IMRT) and arc therapy. They are also relevant for adaptive radiation therapy where dose results must be obtained rapidly.
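The raytracing the abstract refers to amounts to accumulating a density-scaled (radiological) path length through the CT grid and attenuating the primary beam along it to obtain TERMA. A hedged 1D sketch, with an assumed monoenergetic attenuation coefficient `mu` and unit incident fluence (the real algorithm uses a full spectrum with beam hardening):

```python
import numpy as np

def terma_along_ray(density, step_cm, mu=0.05, psi0=1.0):
    """TERMA(z) = psi0 * mu * exp(-mu * radiological_depth(z))."""
    rad_depth = np.cumsum(density) * step_cm   # density-scaled path length
    return psi0 * mu * np.exp(-mu * rad_depth)

# water | lung | water slab geometry, 0.4 cm voxels (illustrative densities)
density = np.concatenate([np.ones(10), 0.25 * np.ones(10), np.ones(10)])
terma = terma_along_ray(density, step_cm=0.4)
```

Because the exponent uses radiological rather than geometric depth, attenuation is correspondingly slower through the low-density lung slab, which is the behavior the water-lung phantom verification in the abstract checks.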
Vanderstraeten, Barbara; Reynaert, Nick; Paelinck, Leen; Madani, Indira; De Wagter, Carlos; De Gersem, Werner; De Neve, Wilfried; Thierens, Hubert
2006-09-01
The accuracy of dose computation within the lungs depends strongly on the performance of the calculation algorithm in regions of electronic disequilibrium that arise near tissue inhomogeneities with large density variations. There is a lack of data evaluating the performance of highly developed analytical dose calculation algorithms compared to Monte Carlo computations in a clinical setting. We compared full Monte Carlo calculations (performed by our Monte Carlo dose engine MCDE) with two different commercial convolution/superposition (CS) implementations (Pinnacle-CS and Helax-TMS's collapsed cone model Helax-CC) and one pencil beam algorithm (Helax-TMS's pencil beam model Helax-PB) for 10 intensity modulated radiation therapy (IMRT) lung cancer patients. Treatment plans were created for two photon beam qualities (6 and 18 MV). For each dose calculation algorithm, patient, and beam quality, the following set of clinically relevant dose-volume values was reported: (i) minimal, median, and maximal dose (Dmin, D50, and Dmax) for the gross tumor and planning target volumes (GTV and PTV); (ii) the volume of the lungs (excluding the GTV) receiving at least 20 and 30 Gy (V20 and V30) and the mean lung dose; (iii) the 33rd percentile dose (D33) and Dmax delivered to the heart and the expanded esophagus; and (iv) Dmax for the expanded spinal cord. Statistical analysis was performed by means of one-way analysis of variance for repeated measurements and Tukey pairwise comparison of means. Pinnacle-CS showed an excellent agreement with MCDE within the target structures, whereas the best correspondence for the organs at risk (OARs) was found between Helax-CC and MCDE. Results from Helax-PB were unsatisfying for both targets and OARs. Additionally, individual patient results were analyzed. Within the target structures, deviations above 5% were found in one patient for the comparison of MCDE and Helax-CC, while all differences between MCDE and Pinnacle-CS were below 5%. For both
NASA Astrophysics Data System (ADS)
Alaei, Parham
2000-11-01
A number of procedures in diagnostic radiology and cardiology make use of long exposures to x rays from fluoroscopy units. Adverse effects of these long exposure times on the patients' skin have been documented in recent years. These include epilation, erythema, and, in severe cases, moist desquamation and tissue necrosis. Potential biological effects from these exposures to other organs include radiation-induced cataracts and pneumonitis. Although there have been numerous studies to measure or calculate the dose to skin from these procedures, there have been only a handful of studies to determine the dose to other organs. Therefore, there is a need for accurate methods to measure the dose in tissues and organs other than the skin. This research concentrated on devising a method to accurately determine the radiation dose to these tissues and organs. The work was performed in several stages: First, a three-dimensional (3D) treatment planning system used in radiation oncology was modified and complemented to make it usable with the low energies of x rays used in diagnostic radiology. Using the system for low energies required the generation of energy deposition kernels using Monte Carlo methods. These kernels were generated using the EGS4 Monte Carlo system of codes and added to the treatment planning system. Following modification, the treatment planning system was evaluated for its calculation accuracy at low energies within homogeneous and heterogeneous media. A study of the effects of lungs and bones on the dose distribution was also performed. The next step was the calculation of dose distributions in humanoid phantoms using this modified system. The system was used to calculate organ doses in these phantoms and the results were compared to those obtained from other methods. These dose distributions can subsequently be used to create dose-volume histograms (DVHs) for internal organs irradiated by these beams. Using this data and the concept of normal tissue
Real-time dose computation: GPU-accelerated source modeling and superposition/convolution
Jacques, Robert; Wong, John; Taylor, Russell; McNutt, Todd
2011-01-15
Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ~24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³
Ultrafast convolution/superposition using tabulated and exponential kernels on GPU
Chen Quan; Chen Mingli; Lu Weiguo
2011-03-15
Purpose: Collapsed-cone convolution/superposition (CCCS) dose calculation is the workhorse for IMRT dose calculation. The authors present a novel algorithm for computing CCCS dose on the modern graphics processing unit (GPU). Methods: The GPU algorithm includes a novel TERMA calculation that has no write-conflicts and has linear computation complexity. The CCCS algorithm uses either tabulated or exponential cumulative-cumulative kernels (CCKs) as reported in literature. The authors have demonstrated that the use of exponential kernels can reduce the computation complexity by an order of a dimension and achieve excellent accuracy. Special attention is paid to the unique architecture of the GPU, especially the memory access pattern, which increases performance by more than tenfold. Results: As a result, the tabulated kernel implementation in GPU is two to three times faster than other GPU implementations reported in literature. The implementation of CCCS showed significant speedup on GPU over single core CPU. On tabulated CCK, speedups as high as 70 are observed; on exponential CCK, speedups as high as 90 are observed. Conclusions: Overall, the GPU algorithm using exponential CCK is 1000-3000 times faster than a highly optimized single-threaded CPU implementation using tabulated CCK, while the dose differences are within 0.5% and 0.5 mm. This ultrafast CCCS algorithm will allow many time-sensitive applications to use accurate dose calculation.
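The computational advantage of an exponential cumulative-cumulative kernel is that the energy carried along a collapsed-cone ray can be updated with a single multiply-add per voxel, instead of re-summing the whole upstream TERMA at every point. A schematic 1D recursion, with made-up kernel parameters `A` and `a` rather than the published fit values:

```python
import numpy as np

def collapse_cone_1d(terma, step, A=1.0, a=0.5):
    """Recursive exponential-kernel superposition along one cone axis."""
    dose = np.zeros_like(terma)
    carried = 0.0
    decay = np.exp(-a * step)
    for i, t in enumerate(terma):
        carried = carried * decay + A * t * step   # one multiply-add per voxel
        dose[i] = a * carried                      # deposit from carried energy
    return dose

terma = np.exp(-0.05 * np.arange(40))   # toy TERMA profile along the ray
dose = collapse_cone_1d(terma, step=0.4)
```

The recursion is exactly equivalent to the explicit O(N²) double sum with kernel A·exp(-a·r), which is where the "order of a dimension" complexity reduction comes from.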
Huang, Jessie Y.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.; Eklund, David; Childress, Nathan L.
2013-12-15
Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found
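The "density scaling of water kernels" that this study identifies as an error source is conventionally done by evaluating the water kernel at the density-scaled (radiological) radius. A schematic sketch with a toy kernel shape, purely to illustrate the scaling operation (neither the kernel form nor the normalization factor is taken from the paper):

```python
import numpy as np

def water_kernel(r):
    """Toy radial energy-deposition kernel in water (NOT a measured EDK)."""
    return np.exp(-r) / np.maximum(r, 1e-3) ** 2

def density_scaled_kernel(r, rho_rel):
    """Conventional scaling: evaluate the water kernel at the scaled radius."""
    return rho_rel ** 2 * water_kernel(rho_rel * r)

r = np.linspace(0.1, 5.0, 50)             # radial distance (arbitrary units)
k_lung = density_scaled_kernel(r, 0.25)   # low density stretches the kernel out
```

In low-density media the scaled kernel spreads energy farther from the interaction site; the study's material-specific kernels replace this approximation entirely.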
An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm
NASA Astrophysics Data System (ADS)
Jacques, Robert; McNutt, Todd
2014-03-01
Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to the faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position and direction sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte-Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near Monte-Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing the average mean error from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e., < 0.1 g/cm³) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position and direction sensitive density filter. This method significantly improved the accuracy of the GPU based algorithm, reaching the accuracy levels of Monte Carlo based methods with performance in a few tenths of seconds per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
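The "first-order recursive filter" of the density can be pictured as a directional exponential moving average along each ray; downstream of an interface the effective density approaches the new density gradually rather than instantly, mimicking the finite electron range. A minimal sketch, with an assumed smoothing constant `alpha` (the paper's filter is multivariate and direction-sensitive, which is not reproduced here):

```python
import numpy as np

def effective_density(density, alpha=0.3):
    """First-order recursive (IIR) smoothing of density along a ray."""
    eff = np.empty_like(density, dtype=float)
    acc = float(density[0])
    for i, rho in enumerate(density):
        acc = alpha * rho + (1.0 - alpha) * acc   # exponential moving average
        eff[i] = acc
    return eff

ray = np.concatenate([np.ones(5), 0.2 * np.ones(5)])  # water-to-lung interface
eff = effective_density(ray)
```

Just past the interface the effective density is still close to water, so the superposition deposits dose as if partial electronic equilibrium persisted, which is the qualitative behavior HCS exploits.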
Hardcastle, Nicholas; Oborn, Bradley M; Haworth, Annette
2016-01-01
Stereotactic body radiation therapy (SBRT) aims to deliver a highly conformal ablative dose to a small target. Dosimetric verification of SBRT for lung tumors presents a challenge due to heterogeneities, moving targets, and small fields. Recent software (M3D) designed for dosimetric verification of lung SBRT treatment plans using an advanced convolution-superposition algorithm was evaluated. Ten lung SBRT patients covering a range of tumor volumes were selected. 3D CRT plans were created using the XiO treatment planning system (TPS) with the superposition algorithm. Dose was recalculated in the Eclipse TPS using the AAA algorithm, M3D verification software using the collapsed-cone-convolution algorithm, and in-house Monte Carlo (MC). Target point doses were calculated with RadCalc software. Near-maximum, median, and near-minimum target doses, conformity indices, and lung doses were compared with MC as the reference calculation. M3D 3D gamma passing rates were compared with XiO and Eclipse. The Wilcoxon signed-rank test was used to compare each calculation method with XiO with a threshold of significance of p < 0.05. M3D and RadCalc point dose calculations were greater than MC by up to 7.7% and 13.1%, respectively, with M3D being statistically significant (s.s.). AAA and XiO calculated point doses were less than MC by 11.3% and 5.2%, respectively (AAA s.s.). Median and near-minimum and near-maximum target doses were less than MC when calculated with AAA and XiO (all s.s.). Near-maximum and median target doses were higher with M3D compared with MC (s.s.), but there was no difference in near-minimum M3D doses compared with MC. M3D-calculated ipsilateral lung V20 Gy and V5 Gy were greater than that calculated with MC (s.s.); AAA- and XiO-calculated V20 Gy was lower than that calculated with MC, but not statistically different to MC for V5 Gy. Nine of the 10 plans achieved M3D gamma passing rates greater than 95% and 80% for 5%/1 mm and 3%/1 mm criteria, respectively. M3
Naqvi, Shahid A.; D'Souza, Warren D.
2005-04-01
Current methods to calculate dose distributions with organ motion can be broadly classified as 'dose convolution' and 'fluence convolution' methods. In the former, a static dose distribution is convolved with the probability distribution function (PDF) that characterizes the motion. However, artifacts are produced near the surface and around inhomogeneities because the method assumes shift invariance. Fluence convolution avoids these artifacts by convolving the PDF with the incident fluence instead of the patient dose. In this paper we present an alternative method that improves the accuracy, generality as well as the speed of dose calculation with organ motion. The algorithm starts by sampling an isocenter point from a parametrically defined space curve corresponding to the patient-specific motion trajectory. Then a photon is sampled in the linac head and propagated through the three-dimensional (3-D) collimator structure corresponding to a particular MLC segment chosen randomly from the planned IMRT leaf sequence. The photon is then made to interact at a point in the CT-based simulation phantom. Randomly sampled monoenergetic kernel rays issued from this point are then made to deposit energy in the voxels. Our method explicitly accounts for MLC-specific effects (spectral hardening, tongue-and-groove, head scatter) as well as changes in SSD with isocentric displacement, assuming that the body moves rigidly with the isocenter. Since the positions are randomly sampled from a continuum, there is no motion discretization, and the computation takes no more time than a static calculation. To validate our method, we obtained ten separate film measurements of an IMRT plan delivered on a phantom moving sinusoidally, with each fraction starting with a random phase. For 2 cm motion amplitude, we found that a ten-fraction average of the film measurements gave an agreement with the calculated infinite fraction average to within 2 mm in the isodose curves. The results also
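The "fluence convolution" baseline the abstract contrasts against can be illustrated in one dimension: the incident fluence profile, rather than the dose, is convolved with the motion probability density. A toy sketch with an assumed Gaussian motion PDF (the paper's own method instead samples isocenter positions from the trajectory):

```python
import numpy as np

x = np.linspace(-5, 5, 201)
dx = x[1] - x[0]
fluence = (np.abs(x) < 2).astype(float)   # idealized open-field fluence profile
sigma = 0.5                               # assumed Gaussian motion amplitude
pdf = np.exp(-x**2 / (2 * sigma**2))
pdf /= pdf.sum() * dx                     # normalize the motion PDF
blurred = np.convolve(fluence, pdf, mode="same") * dx
```

The blurred fluence is then transported through the static patient geometry, which avoids the surface and inhomogeneity artifacts of convolving the dose itself; total fluence is conserved by the normalized PDF.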
Sterpin, E.; Salvat, F.; Olivera, G.; Vynckier, S.
2009-05-15
The reliability of the convolution/superposition (C/S) algorithm of the Hi-Art tomotherapy system is evaluated by using the Monte Carlo model TomoPen, which has been already validated for homogeneous phantoms. The study was performed in three stages. First, measurements with EBT Gafchromic film for a 1.25×2.5 cm² field in a heterogeneous phantom consisting of two slabs of polystyrene separated with Styrofoam were compared to simulation results from TomoPen. The excellent agreement found in this comparison justifies the use of TomoPen as the reference for the remaining parts of this work. Second, to allow analysis and interpretation of the results in clinical cases, dose distributions calculated with TomoPen and C/S were compared for a similar phantom geometry, with multiple slabs of various densities. Even in conditions of lack of lateral electronic equilibrium, overall good agreement was obtained between C/S and TomoPen results, with deviations within 3%/2 mm, showing that the C/S algorithm accounts for modifications in secondary electron transport due to the presence of a low density medium. Finally, calculations were performed with TomoPen and C/S of dose distributions in various clinical cases, from large bilateral head and neck tumors to small lung tumors with diameter of <3 cm. To ensure a "fair" comparison, identical dose calculation grid and dose-volume histogram calculator were used. Very good agreement was obtained for most of the cases, with no significant differences between the DVHs obtained from both calculations. However, deviations of up to 4% for the dose received by 95% of the target volume were found for the small lung tumors. Therefore, the approximations in the C/S algorithm slightly influence the accuracy in small lung tumors even though the C/S algorithm of the tomotherapy system shows very good overall behavior.
A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures
Neylon, J.; Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.
2014-10-15
, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. Results: The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. Conclusions: The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
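The γ test used as the accuracy criterion above combines a dose-difference tolerance with a distance-to-agreement tolerance; a point passes if the minimum combined metric over all evaluated positions is ≤ 1. A minimal 1D implementation of the standard global gamma index (illustrative, not the authors' code):

```python
import numpy as np

def gamma_1d(ref, ev, x, dose_tol, dist_tol):
    """Per-point global gamma index of evaluated dose `ev` vs reference `ref`."""
    dmax = ref.max()
    gam = np.empty_like(ref)
    for i in range(x.size):
        dd = (ev - ref[i]) / (dose_tol * dmax)   # dose-difference term
        dr = (x - x[i]) / dist_tol               # distance-to-agreement term
        gam[i] = np.sqrt(dd**2 + dr**2).min()    # best match over all positions
    return gam

x = np.linspace(0, 10, 101)
ref = np.exp(-0.1 * x)                           # toy reference depth dose
ev = 1.005 * ref                                 # evaluated dose, 0.5% high
passing = (gamma_1d(ref, ev, x, 0.03, 0.3) <= 1).mean()
```

With a 3%/3 mm-style criterion, a uniform 0.5% dose offset passes everywhere, matching the intuition that γ ≤ 1 tolerates small combined dose and spatial deviations.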
Calvo, Oscar I; Gutiérrez, Alonso N; Stathakis, Sotirios; Esquivel, Carlos; Papanikolaou, Nikos
2012-01-01
Specialized techniques that make use of small field dosimetry are common practice in today's clinics. These new techniques represent a big challenge to the treatment planning systems due to the lack of lateral electronic equilibrium. Because of this, the ability of planning systems to overcome such difficulties and provide an accurate representation of the true value is of significant importance. Pinnacle³ is one such planning system. During the IMRT optimization process, the Pinnacle³ treatment planning system allows the user to specify a minimum segment size, which results in multiple beams composed of several subsets of different widths. In this study, the accuracy of the dose calculation engine, the collapsed cone convolution superposition (CCCS) algorithm used by Pinnacle³, was quantified by Monte Carlo simulations, ionization chamber, and Kodak extended dose range film (EDR2) measurements for 11 SBRT lung patients. Lesions were < 3.0 cm in maximal diameter and < 27.0 cm³ in volume. The Monte Carlo codes EGSnrc/BEAMnrc and EGS4/MCSIM were used in the comparison. The minimum segment size allowable during optimization had a direct impact on the number of monitor units calculated for each beam. Plans with the smallest minimum segment sizes (0.1 cm² to 2.0 cm²) had the largest number of MUs. Although PTV coverage remained unaffected, the segment size did have an effect on the dose to the organs at risk. Pinnacle³-calculated PTV mean doses were in agreement with Monte Carlo-calculated mean doses to within 5.6% for all plans. On average, the mean dose difference between Monte Carlo and Pinnacle³ for all 88 plans was 1.38%. The largest discrepancy in maximum dose was 5.8%, and was noted for one of the plans using a minimum segment size of 1.0 cm². For minimum dose to the PTV, a maximum discrepancy of 12.5% between Monte Carlo and Pinnacle³ was noted for a plan using a 6.0 cm² minimum segment size. Agreement between point dose measurements and Pinnacle³-calculated doses were on
FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory
NASA Astrophysics Data System (ADS)
McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A.
2016-09-01
We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
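The core FAST-PT trick is that on a log-spaced k grid, any smooth power spectrum can be expanded as a superposition of complex power laws k^(ν + iη_m) with a single FFT, after which the mode-coupling integrals reduce to analytic forms. A toy sketch of just the decomposition step, with an assumed bias exponent `nu` and a made-up input spectrum (no windowing or zero-padding, which the real code applies):

```python
import numpy as np

nk = 128
k = np.logspace(-3, 1, nk)            # log-spaced wavenumber grid
P = k / (1.0 + k**2) ** 2             # toy input power spectrum
nu = -2.0                             # assumed FFT bias exponent

# One FFT gives the coefficients of the power-law expansion of P(k).
c = np.fft.fft(P * k**(-nu)) / nk
logk = np.log(k)
eta = 2 * np.pi * np.fft.fftfreq(nk, d=logk[1] - logk[0])

# Rebuild P(k) from the superposition k^(nu + i*eta_m) to check the expansion.
phases = np.exp(1j * np.outer(logk - logk[0], eta))
P_rec = np.real(k**nu * (c * phases).sum(axis=1))
```

The reconstruction is exact up to floating-point error, confirming that the input spectrum really is represented as a finite superposition of power laws, which is what makes the subsequent convolution integrals analytic.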
FAST-PT: Convolution integrals in cosmological perturbation theory calculator
NASA Astrophysics Data System (ADS)
McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A.
2016-03-01
FAST-PT calculates 1-loop corrections to the matter power spectrum in cosmology. The code utilizes Fourier methods combined with analytic expressions to reduce the computation time down to scale as N log N, where N is the number of grid points in the input linear power spectrum. FAST-PT is extremely fast, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation.
GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation
Zhou, Bo; Yu, Cedric X.; Chen, Danny Z.; Hu, X. Sharon
2010-01-01
Purpose: Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans as emerging advanced radiation therapy technologies are exerting ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation so as to make the method practical for day-to-day usage. Methods: Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm is modified to match the GPU hardware architecture. The performance of the authors' GPU-based implementation on an Nvidia GTX260 card is compared to a multithreaded software implementation on a quad-core system. Results: A speedup in the range of 6.7–11.4× is observed for the clinical cases used. The less than 2% statistical fluctuation also indicates that the accuracy of the authors' GPU-based implementation is in good agreement with the results from the quad-core CPU implementation. Conclusions: This work shows that GPU is a feasible and cost-efficient solution compared to other alternatives such as using cluster machines or field-programmable gate arrays for satisfying the increasing demands on computation speed and accuracy of dose calculation. But there are also inherent limitations of using GPU for accelerating MC-type applications, which are also analyzed in detail in this article. PMID:21158271
Al Abed, Amr; Yin, Shijie; Suaning, Gregg J; Lovell, Nigel H; Dokos, Socrates
2012-01-01
Computational models are valuable tools that can be used to aid the design and test the efficacy of electrical stimulation strategies in prosthetic vision devices. In continuum models of retinal electrophysiology, the effective extracellular potential can be considered as an approximate measure of the electrotonic loading a neuron's dendritic tree exerts on the soma. A convolution based method is presented to calculate the local spatial average of the effective extracellular loading in retinal ganglion cells (RGCs) in a continuum model of the retina which includes an active RGC tissue layer. The method can be used to study the effect of the dendritic tree size on the activation of RGCs by electrical stimulation using a hexagonal arrangement of electrodes (hexpolar) placed in the suprachoroidal space.
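The convolution-based local averaging described above can be illustrated with a minimal NumPy sketch (not the authors' retinal model): a 2D field is convolved with a normalized disk kernel via zero-padded FFTs to obtain its local spatial average. The field values, kernel shape, and radius are illustrative assumptions.

```python
import numpy as np

def local_average(field, radius):
    """Local spatial average of a 2D field: convolution with a
    normalized disk kernel, computed via zero-padded FFTs."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x * x + y * y <= radius * radius).astype(float)
    disk /= disk.sum()
    # Pad both arrays to the full linear-convolution size to avoid
    # circular wrap-around, then crop back to the input grid.
    shape = (field.shape[0] + 2 * radius, field.shape[1] + 2 * radius)
    prod = np.fft.rfft2(field, shape) * np.fft.rfft2(disk, shape)
    full = np.fft.irfft2(prod, shape)
    return full[radius:radius + field.shape[0], radius:radius + field.shape[1]]

phi = np.ones((32, 32))          # toy uniform "effective potential" field
avg = local_average(phi, radius=3)
```

On a uniform field the interior of the averaged result reproduces the constant value, while edges taper due to the zero padding.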
NASA Astrophysics Data System (ADS)
Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud
2014-09-01
To speed-up the absorbed dose (AD) computation while accounting for tissue heterogeneities, a Collapsed Cone (CC) superposition algorithm was developed and validated for 90Y. The superposition was implemented with an Energy Deposition Kernel scaled with the radiological distance, along with CC acceleration. The validation relative to Monte Carlo simulations was performed on 6 phantoms involving soft tissue, lung and bone, a radioembolisation treatment and a simulated bone metastasis treatment. As figures of merit, the relative AD difference (ΔAD) in low gradient regions (LGR), the distance to agreement (DTA) in high gradient regions and the γ(1%,1 mm) criterion were used for the phantoms. Mean organ doses and γ(3%,3 mm) were used for the patient data. For the semi-infinite sources, ΔAD in LGR was below 1%. DTA was below 0.6 mm. All profiles verified the γ(1%,1 mm) criterion. For both clinical cases, mean doses differed by less than 1% for the considered organs and all profiles verified the γ(3%,3 mm) criterion. The calculation time was below 4 min on a single processor for the CC superposition and 40 h on a 40-node cluster for MCNP (10^8 histories). Our results show that the CC superposition is a very promising alternative to MC for 90Y dosimetry, while significantly reducing computation time. PMID:25097006
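A heavily simplified 1D sketch of the kernel-scaling idea (not the authors' collapsed-cone implementation, which traces cone axes in 3D): energy released in each voxel is superposed with a deposition kernel evaluated at the radiological distance, i.e. the density-weighted path length. The monoexponential kernel and grid values are toy assumptions.

```python
import numpy as np

def radiological_distance(rho, i, j, dx):
    """Density-weighted path length between voxels i and j (1D)."""
    lo, hi = sorted((i, j))
    return rho[lo:hi].sum() * dx

def superpose(terma, rho, kernel, dx):
    """Point-kernel superposition: energy released in each voxel is
    spread with a deposition kernel evaluated at the radiological
    distance, approximating heterogeneity (density) scaling."""
    n = len(terma)
    dose = np.zeros(n)
    for j in range(n):                      # source voxel
        for i in range(n):                  # deposition voxel
            d = radiological_distance(rho, i, j, dx)
            dose[i] += terma[j] * rho[j] * kernel(d) * dx
    return dose

kernel = lambda d: np.exp(-d)               # toy monoexponential kernel
rho = np.ones(50)                           # uniform unit density
terma = np.zeros(50); terma[25] = 1.0       # point source of released energy
dose = superpose(terma, rho, kernel, dx=0.1)
```

Lowering the density in a region stretches the kernel in physical space, which is the heterogeneity correction the radiological scaling provides.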
Wu, Vincent W.C.; Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y.
2013-07-01
Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) by the Eclipse treatment planning system and multigrid superposition (MGS) by the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using their respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air), were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than those of AAA (3.7%±2.5) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of the MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
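The MAPE figure of merit used in this comparison is straightforward to reproduce; a small sketch with hypothetical point doses (the function name and toy values are assumptions, not the study's data):

```python
import numpy as np

def mape(dose_algo, dose_ref):
    """Mean absolute percentage error of algorithm doses against a
    reference (e.g. Monte Carlo), in percent."""
    a = np.asarray(dose_algo, dtype=float)
    r = np.asarray(dose_ref, dtype=float)
    return float(np.mean(np.abs((a - r) / r)) * 100.0)

# Hypothetical reference-point doses (Gy): algorithm vs Monte Carlo.
deviation = mape([2.02, 1.98], [2.00, 2.00])   # two points, each 1% off
```

Here `deviation` evaluates to 1.0 (percent), matching the hand calculation.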
NASA Astrophysics Data System (ADS)
Tsakstara, V.; Kosmas, T. S.
2011-12-01
Convoluted differential and total cross sections of inelastic ν scattering on 128,130Te isotopes are computed from the original cross sections calculated previously using the quasiparticle random-phase approximation. We adopt various spectral distributions for the neutrino energy spectra, such as the common two-parameter Fermi-Dirac and power-law distributions, appropriate to explore nuclear detector responses to supernova neutrino spectra. We also concentrate on the use of low-energy β-beam neutrinos, originating from boosted β⁻-radioactive 6He ions, to decompose original supernova (anti)neutrino spectra that are subsequently employed to simulate total cross sections of the reactions 130Te(ν̃,ν̃′)130Te*. The selected nuclei, 128,130Te, are components of the multipurpose CUORE and COBRA rare-event detectors. Our present investigation may provide useful information about the efficiency of the Te detector medium of the above experiments for their potential use in supernova neutrino searches.
Romańczyk, Piotr P; Rotko, Grzegorz; Kurek, Stefan S
2016-08-10
Formal potentials of the first reduction leading to dechlorination in dimethylformamide were obtained from convolution analysis of voltammetric data and confirmed by quantum chemical calculations for a series of polychlorinated benzenes: hexachlorobenzene (-2.02 V vs. Fc(+)/Fc), pentachloroanisole (-2.14 V), and 2,4-dichlorophenoxy- and 2,4,5-trichlorophenoxyacetic acids (-2.35 V and -2.34 V, respectively). The key parameters required to calculate the reduction potential, electron affinity and/or C-Cl bond dissociation energy, were computed at both DFT-D and CCSD(T)-F12 levels. Comparison of the obtained gas-phase energies and redox potentials with experiment enabled us to verify the relative energetics and the performance of various implicit solvent models. Good agreement with the experiment was achieved for redox potentials computed at the DFT-D level, but only for the stepwise mechanism owing to the error compensation. For the concerted electron transfer/C-Cl bond cleavage process, the application of a high-level coupled-cluster method is required. Quantum chemical calculations have also demonstrated the significant role of the π*ring and σ*C-Cl orbital mixing. It brings about the stabilisation of the non-planar, C2v-symmetric C6Cl6˙(-) radical anion, explains the experimentally observed low energy barrier and the transfer coefficient close to 0.5 for C6Cl5OCH3 in an electron transfer process followed by immediate C-Cl bond cleavage in solution, and an increase in the probability of dechlorination of di- and trichlorophenoxyacetic acids due to substantial population of the vibrationally excited states corresponding to the out-of-plane C-Cl bending at ambient temperatures. PMID:27477334
NASA Astrophysics Data System (ADS)
Copeland, Kyle
2015-07-01
The superposition approximation was commonly employed in atmospheric nuclear transport modeling until recent years and is incorporated into flight dose calculation codes such as CARI-6 and EPCARD. The useful altitude range for this approximation is investigated using Monte Carlo transport techniques. CARI-7A simulates atmospheric radiation transport of elements H-Fe using a database of precalculated galactic cosmic radiation showers calculated with MCNPX 2.7.0 and is employed here to investigate the influence of the superposition approximation on effective dose rates, relative to full nuclear transport of galactic cosmic ray primary ions. Superposition is found to produce results less than 10% different from nuclear transport at current commercial and business aviation altitudes while underestimating dose rates at higher altitudes. The underestimate sometimes exceeds 20% at approximately 23 km and exceeds 40% at 50 km. Thus, programs employing this approximation should not be used to estimate doses or dose rates for high-altitude portions of the commercial space and near-space manned flights that are expected to begin soon.
Yeom, Han-Ju; Park, Jae-Hyeung
2016-08-22
We propose a method to obtain a computer-generated hologram that renders the reflectance distributions of individual mesh surfaces of three-dimensional objects. Unlike previous methods, which find a phase distribution inside each mesh, the proposed method performs a convolution of the angular spectrum of the mesh to obtain the desired reflectance distribution. Manipulation in the angular spectrum domain enables its application to fully analytic mesh-based computer-generated holograms, removing the necessity of resampling the spatial frequency grid. It is also computationally inexpensive, as the convolution can be performed efficiently using the Fourier transform. In this paper, we present the principle, error analysis, simulation, and experimental verification results of the proposed method.
NASA Astrophysics Data System (ADS)
Kazakov, Vasily I.; Moskaletz, Dmitry O.; Moskaletz, Oleg D.
2016-04-01
A new, alternative theory of the diffraction grating spectral device is proposed, based on a mathematical analysis of the optical signal transformation from the input aperture of the spectral device to the result of photodetection. Exhaustive characteristics of the diffraction grating spectral device are obtained: its complex and power spread functions, which serve as the kernels of the corresponding integral operator describing the optical signal transformation performed by the device. On the basis of the proposed alternative theory, the possibility of using the diffraction grating spectral device for calculating convolutions and correlations of optical pulse signals is shown.
NASA Astrophysics Data System (ADS)
Naik, Mehul S.
Intensity-modulated radiation therapy (IMRT) is a 3D conformal radiation therapy technique that utilizes either a multileaf intensity-modulating collimator (MIMiC used with the NOMOS Peacock system) or a multileaf collimator (MLC) on a conventional linear accelerator for beam intensity modulation to afford increased conformity in dose distributions. Due to the high-dose gradient regions that are effectively created, particular emphasis should be placed in the accurate determination of pencil beam kernels that are utilized by pencil beam convolution algorithms employed by a number of commercial IMRT treatment planning systems (TPS). These kernels are determined from relatively large field dose profiles that are typically collected using an ion chamber during commissioning of the TPS, while recent studies have demonstrated improvements in dose calculation accuracy when incorporating film data into the commissioning measurements. For this study, it has been proposed that the shape of high-resolution dose kernels can be extracted directly from single pencil beam (beamlet) profile measurements acquired using high-precision dosimetric film in order to accurately compute dose distributions, specifically for small fields and the penumbra regions of the larger fields. The effectiveness of GafChromic EBT film as an appropriate dosimeter to acquire the necessary measurements was evaluated and compared to the conventional silver-halide Kodak EDR2 film. Using the NOMOS Peacock system, similar dose kernels were extracted through deconvolution of the elementary pencil beam profiles using the two different types of films. Independent convolution-based calculations were performed using these kernels, resulting in better agreement with the measured relative dose profiles, as compared to those determined by CORVUS TPS' finite-size pencil beam (FSPB) algorithm. Preliminary evaluation of the proposed method in performing kernel extraction for an MLC-based IMRT system also showed
Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan
2013-09-26
We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.
NASA Astrophysics Data System (ADS)
Gustafsson, Johan; Nilsson, Per; Sjögreen Gleisner, Katarina
2013-03-01
We have previously shown analytically that the biologically effective dose (BED), including effects of repair during irradiation and of incomplete repair between fractions, can be formulated using a convolution between the absorbed dose rate function and the function describing repair. In this work, a discrete formalism is derived along with its implementation via the fast Fourier transform. The implementation takes the intrinsic periodicity of the discrete Fourier transform into consideration, as well as possible inconsistencies that may arise due to discretization and truncation of the functions describing the absorbed dose rate and repair. Numerically and analytically calculated BED values are compared for various situations in external beam radiotherapy, brachytherapy and radionuclide therapy, including the use of different repair models. The numerical method is shown to be accurate and versatile since it can be applied to any kind of absorbed dose rate function and allows for the incorporation of different repair models. Typical accuracies for clinically realistic examples are on the order of 10^-3% to 10^-5%. The method thus has the potential of being a useful tool for the calculation of BED, also in situations with complicated irradiation patterns or repair functions.
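The core numerical step described above, convolving an absorbed dose rate function with a repair function via the FFT while respecting the DFT's intrinsic periodicity, can be sketched as follows. The dose-rate profile and repair constant are toy assumptions; padding to the full linear-convolution length removes the circular wrap-around, which is then verified against a direct convolution.

```python
import numpy as np

dt = 0.01                                   # time step (h)
t = np.arange(0.0, 4.0, dt)                 # 400 samples
dose_rate = np.where(t < 1.0, 2.0, 0.0)     # 2 Gy/h for 1 h, then off (toy)
repair = np.exp(-0.5 * t)                   # exp(-mu*t), toy mu = 0.5 / h

# Pad to the full linear-convolution length so the DFT's intrinsic
# periodicity does not wrap the tail of the repair function around.
n = 2 * len(t) - 1
conv_fft = np.fft.irfft(np.fft.rfft(dose_rate, n) * np.fft.rfft(repair, n), n) * dt

# Direct discrete convolution as a reference.
conv_direct = np.convolve(dose_rate, repair) * dt
```

The FFT route scales as O(N log N) versus O(N²) for the direct sum, which is what makes the approach attractive for long irradiation histories.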
NASA Astrophysics Data System (ADS)
Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.
2016-01-01
The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for computing ion irradiation outcomes, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable for the evaluation of dose, RBE and LET distributions. The weight function for the beamlet superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has successfully been tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmark plans in simple geometries and clinical plans are shown to demonstrate the model's capabilities.
NASA Astrophysics Data System (ADS)
Gustafsson, Johan; Nilsson, Per; Sjögreen Gleisner, Katarina
2013-03-01
This work presents a new mathematical formulation of the biologically effective dose (BED) for radiation therapy where the effects of repair need to be considered. The formulation is based on the observation that the effects of repair, both during protracted irradiation and of incomplete repair between fractions, can be written as a convolution between the absorbed dose rate function and the function describing repair.
Dealiased convolutions for pseudospectral simulations
NASA Astrophysics Data System (ADS)
Roberts, Malcolm; Bowman, John C.
2011-12-01
Efficient algorithms have recently been developed for calculating dealiased linear convolution sums without the expense of conventional zero-padding or phase-shift techniques. For one-dimensional in-place convolutions, the memory requirements are identical with the zero-padding technique, with the important distinction that the additional work memory need not be contiguous with the input data. This decoupling of data and work arrays dramatically reduces the memory and computation time required to evaluate higher-dimensional in-place convolutions. The memory savings is achieved by computing the in-place Fourier transform of the data in blocks, rather than all at once. The technique also allows one to dealias the n-ary convolutions that arise on Fourier transforming cubic and higher powers. Implicitly dealiased convolutions can be built on top of state-of-the-art adaptive fast Fourier transform libraries like FFTW. Vectorized multidimensional implementations for the complex and centered Hermitian (pseudospectral) cases have already been implemented in the open-source software FFTW++. With the advent of this library, writing a high-performance dealiased pseudospectral code for solving nonlinear partial differential equations has now become a relatively straightforward exercise. New theoretical estimates of computational complexity and memory use are provided, including corrected timing results for 3D pruned convolutions and further consideration of higher-order convolutions.
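A toy demonstration of the aliasing these algorithms address, using the conventional zero-padding (2x) scheme that implicit dealiasing improves upon: squaring cos(3x) on an 8-point grid folds the true mode ±6 content onto modes ∓2, whereas padding before squaring and truncating afterwards leaves only the correct in-band (DC) term. Grid size and test signal are illustrative.

```python
import numpy as np

n = 8
x = 2 * np.pi * np.arange(n) / n
u = np.cos(3 * x)

# Aliased pseudospectral square: the true mode +-6 content of u**2
# folds back onto modes -+2 on the coarse grid.
aliased_hat = np.fft.fft(u * u) / n

# Conventional dealiasing by zero padding: transform on a 2x grid,
# square there, then truncate back to the resolved band.
u_hat = np.fft.fft(u) / n
m = 2 * n
pad = np.zeros(m, dtype=complex)
pad[:n // 2] = u_hat[:n // 2]               # non-negative modes 0..3
pad[-n // 2:] = u_hat[-n // 2:]             # negative modes -4..-1
u_fine = np.fft.ifft(pad) * m
sq_hat_fine = np.fft.fft(u_fine * u_fine) / m
dealiased_hat = np.concatenate([sq_hat_fine[:n // 2], sq_hat_fine[-n // 2:]])
```

The implicitly dealiased convolutions of the paper obtain the same alias-free result without ever forming the contiguous padded arrays.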
SU-E-T-08: A Convolution Model for Head Scatter Fluence in the Intensity Modulated Field
Chen, M; Mo, X; Chen, Y; Parnell, D; Key, S; Olivera, G; Galmarini, W; Lu, W
2014-06-01
Purpose: To efficiently calculate the head scatter fluence for an arbitrary intensity-modulated field with any source distribution using the source occlusion model. Method: The source occlusion model with focal and extra-focal radiation (Jaffray et al, 1993) can be used to account for LINAC head scatter. In the model, the fluence map of any field shape at any point can be calculated via integration of the source distribution within the visible range, as confined by each segment, using the detector eye's view. A 2D integration would be required for each segment and each fluence plane point, which is time-consuming, as an intensity-modulated field typically contains tens to hundreds of segments. In this work, we prove that the superposition of the segmental integrations is equivalent to a simple convolution regardless of what the source distribution is. In fact, for each point, the detector eye's view of the field shape can be represented as a function with the origin defined at the point's pinhole reflection through the center of the collimator plane. We were thus able to reduce hundreds of source-plane integrations to one convolution. We calculated the fluence map for various 3D and IMRT beams and various extra-focal source distributions using both the segmental integration approach and the convolution approach and compared the computation time and fluence map results of both approaches. Results: The fluence maps calculated using the convolution approach were the same as those calculated using the segmental approach, except for rounding errors (<0.1%). While it took considerably longer to calculate all segmental integrations, the fluence map calculation using the convolution approach took only ∼1/3 of the time for typical IMRT fields with ∼100 segments. Conclusions: The convolution approach for head scatter fluence calculation is fast and accurate and can be used to enhance the online process.
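The equivalence claimed above, per-point integration over every segment versus a single convolution/correlation, can be checked numerically in a 1D circular toy (np.roll stands in for the pinhole-reflected detector's-eye view; the source distribution and two-segment aperture are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.random(64)                          # arbitrary extra-focal source
aperture = np.zeros(64)
aperture[10:20] = 1.0                       # segment 1
aperture[35:50] = 1.0                       # segment 2

# Reference: per-point integration of the source over the visible
# region (here modeled as the cyclically shifted aperture mask).
fluence_seg = np.array([np.sum(s * np.roll(aperture, k)) for k in range(64)])

# Equivalent single correlation, evaluated with FFTs.
fluence_conv = np.fft.irfft(np.fft.rfft(s) * np.conj(np.fft.rfft(aperture)), 64)
```

The key point mirrors the abstract: the aperture mask enters the correlation once, however many segments it contains, so the cost no longer grows with segment count.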
Monari, Antonio; Bendazzoli, Gian Luigi; Evangelisti, Stefano; Angeli, Celestino; Ben Amor, Nadia; Borini, Stefano; Maynau, Daniel; Rossi, Elda
2007-03-01
The dispersion interactions of the Ne2 dimer were studied using both the long-range perturbative and supramolecular approaches: for the long-range approach, full CI or string-truncated CI methods were used, while for the supramolecular treatments, the energy curves were computed by using configuration interaction with single and double excitation (CISD), coupled cluster with single and double excitation, and coupled-cluster with single and double (and perturbative) triple excitations. From the interatomic potential-energy curves obtained by the supramolecular approach, the C6 and C8 dispersion coefficients were computed via an interpolation scheme, and they were compared with the corresponding values obtained within the long-range perturbative treatment. We found that the lack of size consistency of the CISD approach makes this method completely useless to compute dispersion coefficients even when the effect of the basis-set superposition error on the dimer curves is considered. The largest full-CI space we were able to use contains more than 1 billion symmetry-adapted Slater determinants, and it is, to our knowledge, the largest calculation of second-order properties ever done at the full-CI level so far. Finally, a new data format and libraries (Q5Cost) have been used in order to interface different codes used in the present study.
Ellison, David H.
2014-01-01
The distal convoluted tubule is the nephron segment that lies immediately downstream of the macula densa. Although short in length, the distal convoluted tubule plays a critical role in sodium, potassium, and divalent cation homeostasis. Recent genetic and physiologic studies have greatly expanded our understanding of how the distal convoluted tubule regulates these processes at the molecular level. This article provides an update on the distal convoluted tubule, highlighting concepts and pathophysiology relevant to clinical practice. PMID:24855283
Kruse, Holger; Grimme, Stefan
2012-04-21
A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This geometrical counterpoise (gCP) denoted scheme depends only on the molecular geometry, i.e., no input from the electronic wave-function is required and hence is applicable to molecules with ten thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernadi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method's target are small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30% that proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability for biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme, estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which also a relaxed rotational energy profile is presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. The gCP-corrected B3LYP-D3/6-31G* model
Multipartite entanglement of superpositions
Cavalcanti, D.; Terra Cunha, M. O.; Acin, A.
2007-10-15
The entanglement of superpositions [Linden et al., Phys. Rev. Lett. 97, 100502 (2006)] is generalized to the multipartite scenario: an upper bound to the multipartite entanglement of a superposition is given in terms of the entanglement of the superposed states and the superposition coefficients. This bound is proven to be tight for a class of states composed of an arbitrary number of qubits. We also extend the result to a large family of quantifiers, which includes the negativity, the robustness of entanglement, and the best separable approximation measure.
Some easily analyzable convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R.; Dolinar, S.; Pollara, F.; Vantilborg, H.
1989-01-01
Convolutional codes have played and will play a key role in the downlink telemetry systems on many NASA deep-space probes, including Voyager, Magellan, and Galileo. One of the chief difficulties associated with the use of convolutional codes, however, is the notorious difficulty of analyzing them. Given a convolutional code as specified, say, by its generator polynomials, it is no easy matter to say how well that code will perform on a given noisy channel. The usual first step in such an analysis is to compute the code's free distance; this can be done with an algorithm whose complexity is exponential in the code's constraint length. The second step is often to calculate the transfer function in one, two, or three variables, or at least a few terms in its power series expansion. This step is quite hard, and for many codes of relatively short constraint lengths, it can be intractable. However, a large class of convolutional codes was discovered for which the free distance can be computed by inspection, and for which there is a closed-form expression for the three-variable transfer function. Although for large constraint lengths, these codes have relatively low rates, they are nevertheless interesting and potentially useful. Furthermore, the ideas developed here to analyze these specialized codes may well extend to a much larger class.
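For short constraint lengths the exponential-time free-distance computation mentioned above is easy to sketch; here a rate-1/2, constraint-length-3 encoder with the well-known (7,5) octal generators (an illustrative code choice, not one from the article) and an exhaustive search over terminated inputs:

```python
from itertools import product

def encode(bits, gens=(0b111, 0b101), k=3):
    """Rate-1/2 feedforward convolutional encoder (octal generators
    7,5; constraint length 3), terminated with k-1 zeros."""
    bits = list(bits) + [0] * (k - 1)
    out, reg = [], 0
    for b in bits:
        reg = ((reg << 1) | b) & ((1 << k) - 1)
        for g in gens:
            out.append(bin(reg & g).count("1") % 2)   # parity of tapped bits
    return out

def free_distance(max_len=8):
    """Minimum Hamming weight over all nonzero terminated inputs up to
    max_len; exponential in max_len, fine for short constraint lengths."""
    best = None
    for n in range(1, max_len + 1):
        for u in product([0, 1], repeat=n):
            if any(u):
                w = sum(encode(u))
                best = w if best is None else min(best, w)
    return best
```

For these generators the search returns 5, the (7,5) code's well-known free distance; the doubly exponential blow-up with constraint length is exactly the difficulty the article describes.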
Network Class Superposition Analyses
Pearson, Carl A. B.; Zeng, Chen; Simha, Rahul
2013-01-01
Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈10^30 for the yeast cell cycle process [1]), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141
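The construction of the superposition matrix can be sketched on a deliberately tiny class (all four 2-node networks in which each node copies or negates the other: an illustrative class, not the Strong Inhibition rule used by the authors). Because each member's transition matrix has a 1 on the diagonal exactly at its point attractors, the trace of the superposition gives the class-average number of point attractors:

```python
import numpy as np
from itertools import product

def transition_matrix(fs, n=2):
    """Deterministic transition matrix of a Boolean network given one
    update function per node."""
    size = 2 ** n
    m = np.zeros((size, size))
    for s in range(size):
        bits = [(s >> i) & 1 for i in range(n)]
        t = sum(fs[i](bits) << i for i in range(n))
        m[s, t] = 1.0
    return m

copy = lambda j: (lambda bits: bits[j])
neg = lambda j: (lambda bits: 1 - bits[j])

# Class: node 0 reads node 1, node 1 reads node 0, each via copy or neg.
nets = [[g(1), h(0)] for g, h in product([copy, neg], repeat=2)]
T = sum(transition_matrix(fs) for fs in nets) / len(nets)

avg_point_attractors = np.trace(T)          # class-average fixed points
```

In this toy class two of the four members have two fixed points each and the other two have none, so the class average is 1; T remains row-stochastic by construction.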
Takeda, Atsuya; Sanuki, Naoko; Kunieda, Etsuo; Ohashi, Toshio; Oku, Yohei; Takeda, Toshiaki; Shigematsu, Naoyuki; Kubo, Atsushi
2009-02-01
Purpose: To retrospectively analyze the clinical outcomes of stereotactic body radiotherapy (SBRT) for patients with Stage 1A and 1B non-small-cell lung cancer. Methods and Materials: We reviewed the records of patients with non-small-cell lung cancer treated with curative intent between Dec 2001 and May 2007. All patients had histopathologically or cytologically confirmed disease, increased levels of tumor markers, and/or positive findings on fluorodeoxyglucose positron emission tomography. Staging studies identified their disease as Stage 1A or 1B. Performance status was 2 or less according to World Health Organization guidelines in all cases. The prescribed dose of 50 Gy total in five fractions, calculated by using a superposition algorithm, was defined for the periphery of the planning target volume. Results: One hundred twenty-one patients underwent SBRT during the study period, and 63 were eligible for this analysis. Thirty-eight patients had Stage 1A (T1N0M0) and 25 had Stage 1B (T2N0M0). Forty-nine patients were not appropriate candidates for surgery because of chronic pulmonary disease. Median follow-up of these 49 patients was 31 months (range, 10-72 months). The 3-year local control, disease-free, and overall survival rates in patients with Stages 1A and 1B were 93% and 96% (p = 0.86), 76% and 77% (p = 0.83), and 90% and 63% (p = 0.09), respectively. No acute toxicity was observed. Grade 2 or higher radiation pneumonitis was experienced by 3 patients, and 1 of them had fatal bacterial pneumonia. Conclusions: SBRT delivering a total of 50 Gy in five fractions to the periphery of the planning target volume, calculated by using a superposition algorithm, is feasible. High local control rates were achieved for both T1 and T2 tumors.
Tupitsyn, I.I.
1988-03-01
The ionization potentials of the halogen group have been calculated. The calculations were carried out using the relativistic Hartree-Fock method taking into account correlation effects. Comparison of theoretical results with experimental data for the elements F, Cl, Br, and I allows an estimation of the accuracy and reliability of the method. The theoretical values of the ionization potential of astatine obtained here may be of definite interest for the chemistry of astatine.
Asymmetric quantum convolutional codes
NASA Astrophysics Data System (ADS)
La Guardia, Giuliano G.
2016-01-01
In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.
Superposition Enhanced Nested Sampling
NASA Astrophysics Data System (ADS)
Martiniani, Stefano; Stevenson, Jacob D.; Wales, David J.; Frenkel, Daan
2014-07-01
The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.
Emül, Y.; Erbahar, D.; Açıkgöz, M.
2015-08-14
Analyses of the local crystal and electronic structure in the vicinity of Fe³⁺ centers in perovskite KMgF₃ crystal have been carried out in a comprehensive manner. A combination of density functional theory (DFT) and a semi-empirical superposition model (SPM) is used for a complete analysis of all Fe³⁺ centers in this study for the first time. Some quantitative information has been derived from the DFT calculations on both the electronic structure and the local geometry around Fe³⁺ centers. All of the trigonal (K-vacancy case, K-Li substitution case, and normal trigonal Fe³⁺ center case), FeF₅O cluster, and tetragonal (Mg-vacancy and Mg-Li substitution cases) centers have been taken into account based on previously suggested experimental and theoretical inferences. Combining the experimental data with the results of both DFT and SPM calculations enables us to identify the most probable structural model for Fe³⁺ centers in KMgF₃.
Understanding deep convolutional networks.
Mallat, Stéphane
2016-04-13
Deep convolutional networks provide state-of-the-art classification and regression results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and nonlinearities. A mathematical framework is introduced to analyse their properties. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries, and sparse separations. Applications are discussed. PMID:26953183
Superposition properties of interacting ion channels.
Keleshian, A M; Yeo, G F; Edeson, R O; Madsen, B W
1994-01-01
Quantitative analysis of patch clamp data is widely based on stochastic models of single-channel kinetics. Membrane patches often contain more than one active channel of a given type, and it is usually assumed that these behave independently in order to interpret the record and infer individual channel properties. However, recent studies suggest there are significant channel interactions in some systems. We examine a model of dependence in a system of two identical channels, each modeled by a continuous-time Markov chain in which specified transition rates are dependent on the conductance state of the other channel, changing instantaneously when the other channel opens or closes. Each channel then has, e.g., a closed time density that is conditional on the other channel being open or closed, these being identical under independence. We relate the two densities by a convolution function that embodies information about, and serves to quantify, dependence in the closed class. Distributions of observable (superposition) sojourn times are given in terms of these conditional densities. The behavior of two-channel systems based on two- and three-state Markov models is examined by simulation. Optimized fitting of simulated data using reasonable parameter values and sample sizes indicates that both positive and negative cooperativity can be distinguished from independence. PMID:7524711
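For the independent baseline against which such dependence is tested, the superposition of identical two-state channels has a simple closed form: the number-open process is a birth-death chain whose stationary law is binomial. A small sketch, with rates chosen arbitrarily for illustration:

```python
from math import comb

def superposition_stationary(n, alpha, beta):
    # n independent two-state channels with opening rate alpha and closing
    # rate beta: the number-open process k has birth rate (n-k)*alpha and
    # death rate k*beta; solve by detailed balance.
    pi = [1.0]
    for k in range(n):
        pi.append(pi[-1] * (n - k) * alpha / ((k + 1) * beta))
    z = sum(pi)
    return [p / z for p in pi]

dist = superposition_stationary(2, alpha=1.0, beta=2.0)

# Independence predicts a binomial distribution with p_open = alpha/(alpha+beta).
p = 1.0 / 3.0
binom = [comb(2, k) * p**k * (1 - p)**(2 - k) for k in range(3)]
```

Deviations of observed occupancy (or of conditional sojourn-time densities) from this binomial baseline are exactly the kind of signature the paper's convolution function is built to quantify.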
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
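The core of the SKS family is a superposition whose kernel is allowed to vary with local object thickness; a 1-D pure-Python sketch (the kernel values and thickness classes below are invented for illustration, not taken from the paper):

```python
def sks_scatter(primary, thickness, kernels):
    # Nonstationary scatter kernel superposition: each projection sample
    # spreads according to a kernel selected by its local thickness class.
    n = len(primary)
    scatter = [0.0] * n
    for y in range(n):
        k = kernels[thickness[y]]        # kernel adapts to local thickness
        r = len(k) // 2
        for dx in range(-r, r + 1):
            x = y + dx
            if 0 <= x < n:
                scatter[x] += primary[y] * k[dx + r]
    return scatter
```

Because the kernel depends on the source pixel, this sum is not a single shift-invariant convolution; the paper's fASKS variant recovers Fourier-space efficiency by approximating it as a weighted combination of stationary convolutions.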
New optimal quantum convolutional codes
NASA Astrophysics Data System (ADS)
Zhu, Shixin; Wang, Liqi; Kai, Xiaoshan
2015-04-01
One of the greatest challenges in proving the feasibility of quantum computers is protecting the quantum nature of information. Quantum convolutional codes are aimed at protecting a stream of quantum information in long-distance communication, and they are the correct generalization to the quantum domain of their classical analogs. In this paper, we construct some classes of quantum convolutional codes by employing classical constacyclic codes. These codes are optimal in the sense that they attain the Singleton bound for pure convolutional stabilizer codes.
Entanglement-assisted quantum convolutional coding
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
McCormick, James A; Ellison, David H
2015-01-01
The distal convoluted tubule (DCT) is a short nephron segment, interposed between the macula densa and collecting duct. Even though it is short, it plays a key role in regulating extracellular fluid volume and electrolyte homeostasis. DCT cells are rich in mitochondria, and possess the highest density of Na+/K+-ATPase along the nephron, where it is expressed on the highly amplified basolateral membranes. DCT cells are largely water impermeable, and reabsorb sodium and chloride across the apical membrane via electroneutral pathways. Prominent among these is the thiazide-sensitive sodium chloride cotransporter, the target of widely used diuretic drugs. These cells also play a key role in magnesium reabsorption, which occurs predominantly via a transient receptor potential channel (TRPM6). Human genetic diseases in which DCT function is perturbed have provided critical insights into the physiological role of the DCT, and how transport is regulated. These include Familial Hyperkalemic Hypertension, the salt-wasting diseases Gitelman syndrome and EAST syndrome, and hereditary hypomagnesemias. The DCT is also established as an important target for the hormones angiotensin II and aldosterone; it also appears to respond to sympathetic-nerve stimulation and changes in plasma potassium. Here, we discuss what is currently known about DCT physiology. Early studies that determined transport rates of ions by the DCT are described, as are the channels and transporters expressed along the DCT, identified with the advent of molecular cloning. Regulation of expression and activity of these channels and transporters is also described; particular emphasis is placed on the contribution of genetic forms of DCT dysregulation to our understanding.
Linear superposition in nonlinear equations.
Khare, Avinash; Sukhatme, Uday
2002-06-17
Several nonlinear systems such as the Korteweg-de Vries (KdV) and modified KdV equations and λφ⁴ theory possess periodic traveling wave solutions involving Jacobi elliptic functions. We show that suitable linear combinations of these known periodic solutions yield many additional solutions with different periods and velocities. This linear superposition procedure works by virtue of some remarkable new identities involving elliptic functions. PMID:12059300
A quantum algorithm for Viterbi decoding of classical convolutional codes
NASA Astrophysics Data System (ADS)
Grice, Jon R.; Meyer, David A.
2015-07-01
We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance, codes with large constraint lengths and short decode frames. Other applications of the classical Viterbi algorithm with large state spaces (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.
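The classical Viterbi decoder that the QVA is benchmarked against can be sketched as follows for a rate-1/2 code; the (7,5) generators are a standard textbook choice, not necessarily the codes considered in the paper:

```python
def conv_encode(bits, gens=(0b111, 0b101), K=3):
    # Shift-register encoder; generator taps read MSB-first over the
    # [newest, ..., oldest] register bits.
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") % 2 for g in gens]
        state = reg >> 1
    return out

def viterbi_decode(rx, nbits, gens=(0b111, 0b101), K=3):
    # Hard-decision Viterbi: keep, per trellis state, the path of minimum
    # Hamming distance to the received bits.
    m, n = K - 1, len(gens)
    nstates = 1 << m
    INF = float("inf")
    metric = [0.0] + [INF] * (nstates - 1)   # start in the all-zero state
    paths = [[] for _ in range(nstates)]
    for t in range(nbits):
        r = rx[n * t:n * (t + 1)]
        new_metric = [INF] * nstates
        new_paths = [None] * nstates
        for s in range(nstates):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << m) | s
                outs = [bin(reg & g).count("1") % 2 for g in gens]
                branch = sum(o != x for o, x in zip(outs, r))
                nxt = reg >> 1
                if metric[s] + branch < new_metric[nxt]:
                    new_metric[nxt] = metric[s] + branch
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(nstates), key=lambda s: metric[s])
    return paths[best]
```

The per-step work is (number of states) x (fanout), which is the quantity the quantum speedup discussion above turns on.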
Student ability to distinguish between superposition states and mixed states in quantum mechanics
NASA Astrophysics Data System (ADS)
Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.
2015-12-01
Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the experimental implications of a superposition state. In particular, they fail to recognize how a superposition state and a mixed state (sometimes called a "lack of knowledge" state) can produce different experimental results. We present data that suggest that superposition in quantum mechanics is a difficult concept for students enrolled in sophomore-, junior-, and graduate-level quantum mechanics courses. We illustrate how an interactive lecture tutorial can improve student understanding of quantum mechanical superposition. A longitudinal study suggests that the impact persists after an additional quarter of quantum mechanics instruction that does not specifically address these ideas.
Nonbinary Quantum Convolutional Codes Derived from Negacyclic Codes
NASA Astrophysics Data System (ADS)
Chen, Jianzhang; Li, Jianping; Yang, Fan; Huang, Yuanyuan
2015-01-01
In this paper, some families of nonbinary quantum convolutional codes are constructed by using negacyclic codes. These nonbinary quantum convolutional codes are different from quantum convolutional codes in the literature. Moreover, we construct a family of optimal quantum convolutional codes.
QCDNUM: Fast QCD evolution and convolution
NASA Astrophysics Data System (ADS)
Botje, M.
2011-02-01
The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Unpolarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in unpolarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme.
Program summary
Program title: QCDNUM
Version: 17.00
Catalogue identifier: AEHV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public Licence
No. of lines in distributed program, including test data, etc.: 45 736
No. of bytes in distributed program, including test data, etc.: 911 569
Distribution format: tar.gz
Programming language: Fortran-77
Computer: All
Operating system: All
RAM: Typically 3 Mbytes
Classification: 11.5
Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
Two dimensional convolute integers for machine vision and image recognition
NASA Technical Reports Server (NTRS)
Edwards, Thomas R.
1988-01-01
Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two dimensional data sets. The operators are regression generated, integer valued, zero phase shifting, convoluting, frequency sensitive, two dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). These operators exhibit frequency-sensitive, scale-invariant feature-selection properties. Tasks such as boundary/edge enhancement and noise or small-size pixel disturbance removal can readily be accomplished. For feature selection, tight band-pass operators are essential. Results from test cases are given.
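A minimal integer-arithmetic 2-D convolution of the replacement-point-value kind described above; the binomial low-pass and Laplacian high-pass kernels below are common illustrative choices, not the regression-generated operators of the paper:

```python
def convolve2d_valid(img, kernel, scale=1):
    # "Valid" 2-D convolution: output shrinks by kernel size - 1 in each
    # dimension; integer arithmetic throughout (note // floors negatives).
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            acc = 0
            for u in range(kh):
                for v in range(kw):
                    acc += kernel[u][v] * img[i + u][j + v]
            out[i][j] = acc // scale
    return out

# Integer low-pass (binomial, gain 16) and high-pass (Laplacian) kernels.
LOW = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
LAP = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
```

The symmetric kernels are zero-phase-shifting in the sense the abstract uses: they do not displace image features.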
Linear superposition solutions to nonlinear wave equations
NASA Astrophysics Data System (ADS)
Liu, Yu
2012-11-01
The solutions to a linear wave equation can satisfy the principle of superposition, i.e., the linear superposition of two or more known solutions is still a solution of the linear wave equation. We show in this article that many nonlinear wave equations possess exact traveling wave solutions involving hyperbolic, trigonometric, and exponential functions, and that suitable linear combinations of these known solutions can also constitute linear superposition solutions to some nonlinear wave equations with special structural characteristics. The linear superposition solutions to the generalized KdV equation K(2,2,1), the Oliver water wave equation, and the K(n, n) equation are given. The structural characteristic of the nonlinear wave equations having linear superposition solutions is analyzed, and the reason why solutions with the forms of hyperbolic, trigonometric, and exponential functions can form the linear superposition solutions is also discussed.
Creating a Superposition of Unknown Quantum States
NASA Astrophysics Data System (ADS)
Oszmaniec, Michał; Grudka, Andrzej; Horodecki, Michał; Wójcik, Antoni
2016-03-01
The superposition principle is one of the landmarks of quantum mechanics. The importance of quantum superpositions provokes questions about the limitations that quantum mechanics itself imposes on the possibility of their generation. In this work, we systematically study the problem of the creation of superpositions of unknown quantum states. First, we prove a no-go theorem that forbids the existence of a universal probabilistic quantum protocol producing a superposition of two unknown quantum states. Second, we provide an explicit probabilistic protocol generating a superposition of two unknown states, each having a fixed overlap with the known referential pure state. The protocol can be applied to generate coherent superposition of results of independent runs of subroutines in a quantum computer. Moreover, in the context of quantum optics it can be used to efficiently generate highly nonclassical states or non-Gaussian states.
Time-Strain Superposition in Polymer Glasses
NASA Astrophysics Data System (ADS)
O'Connell, Paul A.; McKenna, Gregory B.
1997-03-01
Time-strain superposition is often used in constitutive modeling to describe the nonlinear viscoelastic response of solid-like polymers. While it is true that time-strain superposition does not always work, a more fundamental question arises when it appears to work: is the master curve obtained by time-strain superposition the same as that obtained by time-temperature superposition? Here we present work from torsional measurements on polycarbonate in the temperature range from 30 to 130 °C. We find that at each temperature time-strain superposition can be performed, but that the strain reductions do not give the same master curves as does the temperature reduction. Such behavior suggests that time-strain superposition cannot be used to represent polymeric material behavior and that its utility for estimating long time performance is very limited.
Mesoscopic Superposition States in Relativistic Landau Levels
Bermudez, A.; Martin-Delgado, M. A.; Solano, E.
2007-09-21
We show that a linear superposition of mesoscopic states in relativistic Landau levels can be built when an external magnetic field couples to a relativistic spin-1/2 charged particle. Under suitable initial conditions, the associated Dirac equation unitarily produces superpositions of coherent states involving the particle's orbital quanta in a well-defined mesoscopic regime. We demonstrate that these mesoscopic superpositions have a purely relativistic origin and disappear in the nonrelativistic limit.
The M&M Superposition Principle.
ERIC Educational Resources Information Center
Miller, John B.
2000-01-01
Describes a physical system for demonstrating operators, eigenvalues, and superposition of states for a set of unusual wave functions. Uses candy to provide students with a visual and concrete picture of a superposition of states rather than an abstract plot of several overlaid mathematical states. (WRM)
Approximating large convolutions in digital images.
Mount, D M; Kanungo, T; Netanyahu, N S; Piatko, C; Silverman, R; Wu, A Y
2001-01-01
Computing discrete two-dimensional (2-D) convolutions is an important problem in image processing. In mathematical morphology, an important variant is that of computing binary convolutions, where the kernel of the convolution is a 0-1 valued function. This operation can be quite costly, especially when large kernels are involved. We present an algorithm for computing convolutions of this form, where the kernel of the binary convolution is derived from a convex polygon. Because the kernel is a geometric object, we allow the algorithm some flexibility in how it elects to digitize the convex kernel at each placement, as long as the digitization satisfies certain reasonable requirements. We say that such a convolution is valid. Given this flexibility we show that it is possible to compute binary convolutions more efficiently than would normally be possible for large kernels. Our main result is an algorithm which, given an m x n image and a k-sided convex polygonal kernel K, computes a valid convolution in O(kmn) time. Unlike standard algorithms for computing correlations and convolutions, the running time is independent of the area or perimeter of K, and our techniques do not rely on computing fast Fourier transforms. Our algorithm is based on a novel use of Bresenham's (1965) line-drawing algorithm and prefix-sums to update the convolution incrementally as the kernel is moved from one position to another across the image. PMID:18255522
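The prefix-sum idea is easiest to see for an axis-aligned rectangular kernel, where each placement's count of covered 1-pixels comes from four table lookups; the general convex-polygon case in the paper additionally walks the kernel boundary with Bresenham's algorithm:

```python
def binary_convolution_counts(img, kh, kw):
    # 2-D prefix sums: P[i][j] = sum of img[0:i][0:j].
    h, w = len(img), len(img[0])
    P = [[0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        for j in range(w):
            P[i + 1][j + 1] = img[i][j] + P[i][j + 1] + P[i + 1][j] - P[i][j]
    # Each kh x kw placement is an O(1) inclusion-exclusion of four entries,
    # so total cost is O(h * w) regardless of the kernel's area.
    return [[P[i + kh][j + kw] - P[i][j + kw] - P[i + kh][j] + P[i][j]
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]
```

As in the paper's algorithm, the running time is independent of the kernel's area, in contrast to the naive per-placement scan.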
Verifying quantum superpositions at metre scales
NASA Astrophysics Data System (ADS)
Stamper-Kurn, D. M.; Marti, G. E.; Müller, H.
2016-09-01
While the existence of quantum superpositions of massive particles over microscopic separations has been established since the birth of quantum mechanics, the maintenance of superposition states over macroscopic separations is a subject of modern experimental tests. In Ref. [1], T. Kovachy et al. report on applying optical pulses to place a freely falling Bose-Einstein condensate into a superposition of two trajectories that separate by an impressive distance of 54 cm before being redirected toward one another. When the trajectories overlap, a final optical pulse produces interference with high contrast, but with random phase, between the two wave packets. Contrary to claims made in Ref. [1], we argue that the observed interference is consistent with the spatially separated atomic ensembles having been in a quantum superposition state, but does not prove it. Therefore, the persistence of such superposition states remains experimentally unestablished.
Mixed superposition rules and the Riccati hierarchy
NASA Astrophysics Data System (ADS)
Grabowski, Janusz; de Lucas, Javier
Mixed superposition rules, i.e., functions describing the general solution of a system of first-order differential equations in terms of a generic family of particular solutions of first-order systems and some constants, are studied. The main achievement is a generalization of the celebrated Lie-Scheffers Theorem, characterizing systems admitting a mixed superposition rule. This somehow unexpected result says that such systems are exactly Lie systems, i.e., they admit a standard superposition rule. This provides a new and powerful tool for finding Lie systems, which is applied here to studying the Riccati hierarchy and to retrieving some known results in a more efficient and simpler way.
The trellis complexity of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Lin, W.
1995-01-01
It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
Convolution-deconvolution in DIGES
Philippacopoulos, A.J.; Simos, N.
1995-05-01
Convolution and deconvolution operations are an important aspect of SSI analysis since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented into the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying rock is treated by DIGES in a similar fashion to that of available codes, e.g., CARES and SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, the corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra, and (d) cross-spectral densities.
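The transfer-function deconvolution idea can be illustrated with a toy 1-D frequency-domain example. The layer response below is hypothetical (not one of DIGES's closed-form solutions), and the water-level regularization is a common stabilization choice shown here as an assumption:

```python
import numpy as np

# Minimal 1-D frequency-domain deconvolution sketch (illustrative only; the
# transfer function here is a toy response, not DIGES's closed-form layer
# solutions).
def deconvolve(surface_motion, transfer_fn, dt, water_level=1e-3):
    """Recover base motion from a surface record: U_base(f) = U_surf(f) / H(f)."""
    n = len(surface_motion)
    freqs = np.fft.rfftfreq(n, dt)
    U = np.fft.rfft(surface_motion)
    H = transfer_fn(freqs)
    # water-level regularization: clip |H| from below to stabilize the division
    Hs = np.where(np.abs(H) < water_level, water_level, H)
    return np.fft.irfft(U / Hs, n)

# forward check: convolving base motion with H, then deconvolving, restores it
dt = 0.01
t = np.arange(0, 4, dt)
base = np.sin(2 * np.pi * 2.0 * t) * np.exp(-t)          # toy base motion
H = lambda f: 1.0 + 0.5 * np.exp(-2j * np.pi * f * 0.05)  # toy layer response
surf = np.fft.irfft(np.fft.rfft(base) * H(np.fft.rfftfreq(len(base), dt)), len(base))
rec = deconvolve(surf, H, dt)
print(np.max(np.abs(rec - base)) < 1e-8)  # True
```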
Many-Body Basis Set Superposition Effect.
Ouyang, John F; Bettens, Ryan P A
2015-11-10
The basis set superposition effect (BSSE) arises in electronic structure calculations of molecular clusters when questions relating to interactions between monomers within the larger cluster are asked. The binding energy, or total energy, of the cluster may be broken down into many smaller subcluster calculations and the energies of these subsystems linearly combined to, hopefully, produce the desired quantity of interest. Unfortunately, BSSE can plague these smaller fragment calculations. In this work, we carefully examine the major sources of error associated with reproducing the binding energy and total energy of a molecular cluster. In order to do so, we decompose these energies in terms of a many-body expansion (MBE), where a "body" here refers to the monomers that make up the cluster. In our analysis, we found it necessary to introduce something we designate here as a many-ghost many-body expansion (MGMBE). The work presented here produces some surprising results, but perhaps the most significant of all is that BSSE effects up to the order of truncation in a MBE of the total energy cancel exactly. In the case of the binding energy, the only BSSE correction terms remaining arise from the removal of the one-body monomer total energies. Nevertheless, our earlier work indicated that BSSE effects continued to remain in the total energy of the cluster up to very high truncation order in the MBE. We show in this work that the vast majority of these high-order many-body effects arise from BSSE associated with the one-body monomer total energies. Also, we found that, remarkably, the complete basis set limit values for the three-body and four-body interactions differed very little from that at the MP2/aug-cc-pVDZ level for the respective subclusters embedded within a larger cluster. PMID:26574311
Zhang, Xiao-Zheng; Thomas, Jean-Hugh; Bi, Chuan-Xing; Pascal, Jean-Claude
2012-10-01
A time-domain plane wave superposition method is proposed to reconstruct nonstationary sound fields. In this method, the sound field is expressed as a superposition of time convolutions between the estimated time-wavenumber spectrum of the sound pressure on a virtual source plane and the time-domain propagation kernel at each wavenumber. By discretizing the time convolutions directly, the reconstruction can be carried out iteratively in the time domain, thus providing the advantage of continuously reconstructing time-dependent pressure signals. In the reconstruction process, Tikhonov regularization is introduced at each time step to obtain a relevant estimate of the time-wavenumber spectrum on the virtual source plane. Because the double infinite integral of the two-dimensional spatial Fourier transform is discretized directly in the wavenumber domain, the proposed method does not need to perform the two-dimensional spatial fast Fourier transform that is generally used in time-domain holography and real-time near-field acoustic holography; it therefore avoids, in theory, some errors associated with that transform and makes it possible to use an irregular microphone array. The feasibility of the proposed method is demonstrated by numerical simulations and an experiment with two speakers.
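The step-by-step discretized time convolution with Tikhonov damping can be sketched for a single channel. The kernel `g` and the regularization parameter `lam` below are toy values; this illustrates only the iterative idea, not the paper's wavenumber-domain implementation:

```python
import numpy as np

# Sketch: sample-by-sample deconvolution of a causal convolution p = g * s,
# with a Tikhonov-damped solve at each time step (toy kernel, illustrative
# of the iterative time-domain idea only).
def time_domain_deconvolve(p, g, lam=1e-6):
    """Estimate s such that p = g * s, one time step at a time."""
    s = np.zeros_like(p)
    for j in range(len(p)):
        m = min(j, len(g) - 1)
        # contribution of the already-estimated past samples
        past = np.dot(g[1:m + 1], s[j - m:j][::-1]) if m > 0 else 0.0
        # Tikhonov-regularized solve of g[0] * s[j] = p[j] - past
        s[j] = g[0] * (p[j] - past) / (g[0] ** 2 + lam)
    return s

g = np.array([1.0, 0.6, 0.3, 0.1])              # toy propagation kernel
s_true = np.zeros(64)
s_true[5] = 1.0
s_true[20] = -0.5
p = np.convolve(s_true, g)[:64]                 # "measured" signal
s_est = time_domain_deconvolve(p, g)
print(np.max(np.abs(s_est - s_true)) < 1e-4)    # True
```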
The general theory of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Stanley, R. P.
1993-01-01
This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.
Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity
NASA Astrophysics Data System (ADS)
Sales, J. S.; da Silva, L. F.; de Almeida, N. G.
2011-03-01
We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.
Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity
Sales, J. S.; Silva, L. F. da; Almeida, N. G. de
2011-03-15
We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
Search for optimal distance spectrum convolutional codes
NASA Technical Reports Server (NTRS)
Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.
1993-01-01
In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of their deep space satellites and probes (e.g. Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes. Thus, to find good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.
A new pencil beam model for photon dose calculations in heterogeneous media.
Zhang, P; Simon, A; De Crevoisier, R; Haigron, P; Nassef, M H; Li, B; Shu, H
2014-11-01
The pencil beam method is commonly used for dose calculations in intensity-modulated radiation therapy (IMRT). In this study, we have proposed a novel pencil beam model for calculating photon dose distributions in heterogeneous media. To avoid any oblique kernel-related bias and reduce computation time, dose distributions were computed in a spherical coordinate system based on the pencil kernels for different distances from source to surface (DSS). We employed two different dose calculation methods: the superposition method and the fast Fourier transform convolution (FFTC) method. In order to render the superposition method more accurate, we scaled the depth-directed component by moving the position of the entry point and altering the DSS value for a given beamlet. The lateral components were thus directly corrected by the density scaling method along the spherical shell without taking the densities from the previous layers into account. Significant computation time could be saved by performing the FFTC calculations on each spherical shell, disregarding density changes in the lateral direction. The proposed methods were tested on several phantoms, including lung- and bone-type heterogeneities. We compared them with Monte Carlo (MC) simulation for several field sizes with 6 MV photon beams. Our results revealed mean absolute deviations <1% for the proposed superposition method. Compared to the AAA algorithm, this method improved dose calculation accuracy by at least 0.3% in heterogeneous phantoms. The FFTC method was approximately 40 times faster than the superposition method. However, compared with MC, mean absolute deviations were <3% for the FFTC method.
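The FFT-convolution (FFTC) step can be sketched as a zero-padded 2-D FFT convolution of a TERMA-like grid with a lateral kernel on a single layer (planar here rather than spherical); the kernel and grid values are toy numbers, not the paper's beam data:

```python
import numpy as np

# Sketch of FFT-based lateral convolution on one calculation layer:
# dose = TERMA (x) kernel, computed with zero-padded FFTs and cropped
# to the grid size.  Toy Gaussian kernel, not a physical pencil kernel.
def fft_convolve2d(terma, kernel):
    """Linear 2-D convolution via zero-padded FFTs, cropped to 'same' size."""
    s0 = terma.shape[0] + kernel.shape[0] - 1
    s1 = terma.shape[1] + kernel.shape[1] - 1
    F = np.fft.rfft2(terma, (s0, s1)) * np.fft.rfft2(kernel, (s0, s1))
    full = np.fft.irfft2(F, (s0, s1))
    r0, r1 = kernel.shape[0] // 2, kernel.shape[1] // 2
    return full[r0:r0 + terma.shape[0], r1:r1 + terma.shape[1]]

x = np.arange(-3, 4)
kern = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 2.0)  # toy lateral kernel
kern /= kern.sum()                                         # normalize to unit mass
terma = np.zeros((32, 32))
terma[16, 16] = 1.0                                        # point irradiation
dose = fft_convolve2d(terma, kern)
print(abs(dose[16, 16] - kern[3, 3]) < 1e-12)              # True: kernel replicated
```

A point of TERMA reproduces the kernel centered on that voxel, which is a convenient correctness check before moving to real beam data.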
On the Use of Material-Dependent Damping in ANSYS for Mode Superposition Transient Analysis
Nie, J.; Wei, X.
2011-07-17
The mode superposition method is often used for dynamic analysis of complex structures, such as the seismic Category I structures in nuclear power plants, in place of the less efficient full method, which uses the full system matrices for calculation of the transient responses. In such applications, specification of material-dependent damping is usually desirable because complex structures can consist of multiple types of materials that may have different energy dissipation capabilities. A recent review of the ANSYS manual for several releases found that the use of material-dependent damping is not clearly explained for performing a mode superposition transient dynamic analysis. This paper includes several mode superposition transient dynamic analyses using different ways to specify damping in ANSYS, in order to determine how material-dependent damping can be specified conveniently in a mode superposition transient dynamic analysis.
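Independently of any particular ANSYS damping input, the mode superposition method reduces to integrating decoupled modal equations and summing through the mode shapes. A minimal sketch with per-mode damping ratios standing in for material-dependent damping (all numbers are toy values, unrelated to any ANSYS model):

```python
import numpy as np

# Modal superposition sketch: each modal equation
#   q_i'' + 2 zeta_i w_i q_i' + w_i^2 q_i = f_i(t)
# is integrated independently (semi-implicit Euler), then the physical
# response is the mode-shape-weighted sum.  Toy 2-DOF system.
def modal_transient(phi, omegas, zetas, f_modal, dt, nsteps):
    """Integrate each modal SDOF equation, then superpose via mode shapes."""
    nmodes = len(omegas)
    q = np.zeros((nmodes, nsteps))
    for i in range(nmodes):
        w, z = omegas[i], zetas[i]
        qp, qv = 0.0, 0.0                      # modal displacement, velocity
        for n in range(nsteps):
            qa = f_modal[i, n] - 2 * z * w * qv - w ** 2 * qp
            qv += qa * dt
            qp += qv * dt
            q[i, n] = qp
    return phi @ q                             # physical DOF responses

phi = np.array([[1.0, 1.0], [1.0, -1.0]])      # toy mode shapes (columns)
omegas = np.array([2.0, 8.0])                  # modal frequencies (rad/s)
zetas = np.array([0.02, 0.05])                 # per-mode damping ratios
nsteps, dt = 2000, 0.001
f = np.zeros((2, nsteps))
f[0, 0] = 1.0 / dt                             # impulse on mode 1 only
u = modal_transient(phi, omegas, zetas, f, dt, nsteps)
print(u.shape)  # (2, 2000)
```

Different damping ratios per mode are the modal-space analogue of material-dependent damping; how those ratios are derived from material inputs is exactly the ANSYS-specific question the paper investigates.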
Transfer of arbitrary quantum emitter states to near-field photon superpositions in nanocavities.
Thijssen, Arthur C T; Cryan, Martin J; Rarity, John G; Oulton, Ruth
2012-09-24
We present a method to analyze the suitability of particular photonic cavity designs for information exchange between arbitrary superposition states of a quantum emitter and the near-field photonic cavity mode. As an illustrative example, we consider whether quantum dot emitters embedded in "L3" and "H1" photonic crystal cavities are able to transfer a spin superposition state to a confined photonic superposition state for use in quantum information transfer. Using an established dyadic Green's function (DGF) analysis, we describe methods to calculate coupling to arbitrary quantum emitter positions and orientations using the modified local density of states (LDOS) calculated using numerical finite-difference time-domain (FDTD) simulations. We find that while superposition states are not supported in L3 cavities, the double degeneracy of the H1 cavities supports superposition states of the two orthogonal modes that may be described as states on a Poincaré-like sphere. Methods are developed to comprehensively analyze the confined superposition state generated from an arbitrary emitter position and emitter dipole orientation.
a Logical Account of Quantum Superpositions
NASA Astrophysics Data System (ADS)
Krause, Décio Arenhart, Jonas R. Becker
In this paper we consider the phenomenon of superpositions in quantum mechanics and suggest a way to deal with the idea in a logical setting from a syntactical point of view, that is, as subsumed in the language of the formalism, and not semantically. We restrict the discussion to the propositional level only. Then, after presenting the motivations and a possible world semantics, the formalism is outlined and we also consider within this scheme the claim that superpositions may involve contradictions, as in the case of the Schrödinger's cat, which (it is usually said) is both alive and dead. We argue that this claim is a misreading of the quantum case. Finally, we sketch a new form of quantum logic that involves three kinds of negations and present the relationships among them. The paper is a first approach to the subject, introducing some main guidelines to be developed by a `syntactical' logical approach to quantum superpositions.
An approximate CPHD filter for superpositional sensors
NASA Astrophysics Data System (ADS)
Mahler, Ronald; El-Fallah, Adel
2012-06-01
Most multitarget tracking algorithms, such as JPDA, MHT, and the PHD and CPHD filters, presume the following measurement model: (a) targets are point targets, (b) every target generates at most a single measurement, and (c) any measurement is generated by at most a single target. However, the most familiar sensors, such as surveillance and imaging radars, violate assumption (c). This is because they are actually superpositional-that is, any measurement is a sum of signals generated by all of the targets in the scene. At this conference in 2009, the first author derived exact formulas for PHD and CPHD filters that presume general superpositional measurement models. Unfortunately, these formulas are computationally intractable. In this paper, we modify and generalize a Gaussian approximation technique due to Thouin, Nannuru, and Coates to derive a computationally tractable superpositional-CPHD filter. Implementation requires sequential Monte Carlo (particle filter) techniques.
Large energy superpositions via Rydberg dressing
NASA Astrophysics Data System (ADS)
Khazali, Mohammadsadegh; Lau, Hon Wai; Humeniuk, Adam; Simon, Christoph
2016-08-01
We propose to create superposition states of over 100 strontium atoms in a ground state or metastable optical clock state using the Kerr-type interaction due to Rydberg state dressing in an optical lattice. The two components of the superposition can differ by an order of 300 eV in energy, allowing tests of energy decoherence models with greatly improved sensitivity. We take into account the effects of higher-order nonlinearities, spatial inhomogeneity of the interaction, decay from the Rydberg state, collective many-body decoherence, atomic motion, molecular formation, and diminishing Rydberg level separation for increasing principal number.
The evolution and development of neural superposition.
Agi, Egemen; Langen, Marion; Altschuler, Steven J; Wu, Lani F; Zimmermann, Timo; Hiesinger, Peter Robin
2014-01-01
Visual systems have a rich history as model systems for the discovery and understanding of basic principles underlying neuronal connectivity. The compound eyes of insects consist of up to thousands of small unit eyes that are connected by photoreceptor axons to set up a visual map in the brain. The photoreceptor axon terminals thereby represent neighboring points seen in the environment in neighboring synaptic units in the brain. Neural superposition is a special case of such a wiring principle, where photoreceptors from different unit eyes that receive the same input converge upon the same synaptic units in the brain. This wiring principle is remarkable, because each photoreceptor in a single unit eye receives different input and each individual axon, among thousands of others in the brain, must be sorted together with those few axons that have the same input. Key aspects of neural superposition have been described as early as 1907. Since then, neuroscientists, evolutionary and developmental biologists have been fascinated by how such a complicated wiring principle could evolve, how it is genetically encoded, and how it is developmentally realized. In this review article, we will discuss current ideas about the evolutionary origin and developmental program of neural superposition. Our goal is to identify in what way the special case of neural superposition can help us answer more general questions about the evolution and development of genetically "hard-wired" synaptic connectivity in the brain.
Comments on episodic superposition of memory States.
Lambert-Mogiliansky, Ariane
2014-01-01
This article develops a commentary to Charles Brainerd, Zheng Wang and Valerie F. Reyna's article entitled "Superposition of episodic memories: Overdistribution and quantum models" published in a special number of topiCS 2013 devoted to quantum modelling in cognitive sciences. PMID:24259305
The Evolution and Development of Neural Superposition
Agi, Egemen; Langen, Marion; Altschuler, Steven J.; Wu, Lani F.; Zimmermann, Timo
2014-01-01
Visual systems have a rich history as model systems for the discovery and understanding of basic principles underlying neuronal connectivity. The compound eyes of insects consist of up to thousands of small unit eyes that are connected by photoreceptor axons to set up a visual map in the brain. The photoreceptor axon terminals thereby represent neighboring points seen in the environment in neighboring synaptic units in the brain. Neural superposition is a special case of such a wiring principle, where photoreceptors from different unit eyes that receive the same input converge upon the same synaptic units in the brain. This wiring principle is remarkable, because each photoreceptor in a single unit eye receives different input and each individual axon, among thousands of others in the brain, must be sorted together with those few axons that have the same input. Key aspects of neural superposition have been described as early as 1907. Since then, neuroscientists, evolutionary and developmental biologists have been fascinated by how such a complicated wiring principle could evolve, how it is genetically encoded, and how it is developmentally realized. In this review article, we will discuss current ideas about the evolutionary origin and developmental program of neural superposition. Our goal is to identify in what way the special case of neural superposition can help us answer more general questions about the evolution and development of genetically “hard-wired” synaptic connectivity in the brain. PMID:24912630
The principle of superposition in human prehension.
Zatsiorsky, Vladimir M; Latash, Mark L; Gao, Fan; Shim, Jae Kun
2004-03-01
The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: "Grasp the object stronger/weaker to prevent slipping" and "Maintain the rotational equilibrium of the object". The effects of the two commands are summed up.
The principle of superposition in human prehension
Zatsiorsky, Vladimir M.; Latash, Mark L.; Gao, Fan; Shim, Jae Kun
2010-01-01
The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: “Grasp the object stronger/weaker to prevent slipping” and “Maintain the rotational equilibrium of the object”. The effects of the two commands are summed up. PMID:20186284
Macroscopic Quantum Superposition in Cavity Optomechanics
NASA Astrophysics Data System (ADS)
Liao, Jie-Qiao; Tian, Lin
Quantum superposition in mechanical systems is not only key evidence of macroscopic quantum coherence, but can also be utilized in modern quantum technology. Here we propose an efficient approach for creating macroscopically distinct mechanical superposition states in a two-mode optomechanical system. Photon hopping between the two cavity modes is modulated sinusoidally. The modulated photon tunneling enables an ultrastrong radiation-pressure force acting on the mechanical resonator, and hence significantly increases the mechanical displacement induced by a single photon. We present systematic studies on the generation of the Yurke-Stoler-like states in the presence of system dissipations. The state generation method is general and it can be implemented with either optomechanical or electromechanical systems. The authors are supported by the National Science Foundation under Award No. NSF-DMR-0956064 and the DARPA ORCHID program through AFOSR.
Astronomical Image Subtraction by Cross-Convolution
NASA Astrophysics Data System (ADS)
Yuan, Fang; Akerlof, Carl W.
2008-04-01
In recent years, there has been a proliferation of wide-field sky surveys to search for a variety of transient objects. Using relatively short focal lengths, the optics of these systems produce undersampled stellar images often marred by a variety of aberrations. As participants in such activities, we have developed a new algorithm for image subtraction that no longer requires high-quality reference images for comparison. The computational efficiency is comparable with similar procedures currently in use. The general technique is cross-convolution: two convolution kernels are generated to make a test image and a reference image separately transform to match as closely as possible. In analogy to the optimization technique for generating smoothing splines, the inclusion of an rms width penalty term constrains the diffusion of stellar images. In addition, by evaluating the convolution kernels on uniformly spaced subimages across the total area, these routines can accommodate point-spread functions that vary considerably across the focal plane.
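The cross-convolution trick rests on the commutativity of convolution: convolving the test image with the reference PSF and the reference image with the test PSF maps identical scenes to identical images, so they cancel in the difference regardless of seeing. A minimal sketch with toy Gaussian PSFs (real kernels are fitted per subimage, which this sketch omits):

```python
import numpy as np

# Cross-convolution sketch: D = (A * k_B) - (B * k_A).  Because convolution
# commutes, a static scene observed under two different PSFs subtracts to
# zero, leaving only genuine transients.  Toy Gaussian PSFs only.
def convolve_same(img, k):
    from numpy.fft import rfft2, irfft2
    s = (img.shape[0] + k.shape[0] - 1, img.shape[1] + k.shape[1] - 1)
    full = irfft2(rfft2(img, s) * rfft2(k, s), s)
    r0, r1 = k.shape[0] // 2, k.shape[1] // 2
    return full[r0:r0 + img.shape[0], r1:r1 + img.shape[1]]

def gaussian_psf(sigma, half=6):
    x = np.arange(-half, half + 1)
    g = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

scene = np.zeros((64, 64))
scene[20, 30] = 100.0                                  # two toy stars
scene[40, 12] = 50.0
psf_a, psf_b = gaussian_psf(1.0), gaussian_psf(2.0)    # different seeing
img_a = convolve_same(scene, psf_a)                    # test image
img_b = convolve_same(scene, psf_b)                    # reference image
diff = convolve_same(img_a, psf_b) - convolve_same(img_b, psf_a)
print(np.max(np.abs(diff)) < 1e-9)                     # True: static scene cancels
```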
Molecular graph convolutions: moving beyond fingerprints.
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
Molecular graph convolutions: moving beyond fingerprints
NASA Astrophysics Data System (ADS)
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular "graph convolutions", a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph---atoms, bonds, distances, etc.---which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
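The graph-convolution idea described above can be reduced to a single layer that updates each atom's features from its bonded neighbors. The sketch below is a generic normalized-adjacency layer with random weights, not the specific architecture of the paper:

```python
import numpy as np

# Generic graph-convolution layer sketch: atom features are averaged with
# their bonded neighbors' features via a normalized adjacency matrix, then
# linearly projected and passed through a ReLU.  Random toy weights.
rng = np.random.default_rng(0)

def graph_conv_layer(h, adj, W):
    """h' = relu(D^-1 (A + I) h W): mix self + neighbor features, project."""
    a_hat = adj + np.eye(adj.shape[0])        # add self-connections
    deg = a_hat.sum(axis=1, keepdims=True)    # per-node degree for averaging
    return np.maximum((a_hat / deg) @ h @ W, 0.0)

# toy "molecule": 3 atoms in a chain (bonds 0-1 and 1-2), 4 features per atom
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
h = rng.normal(size=(3, 4))                   # initial atom features
W = rng.normal(size=(4, 8))                   # learned projection (random here)
h_out = graph_conv_layer(h, adj, W)
print(h_out.shape)  # (3, 8)
```

Stacking such layers lets information flow along bonds, which is the sense in which the representation exploits the graph structure rather than a fixed fingerprint.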
Optimal Superpositioning of Flexible Molecule Ensembles
Gapsys, Vytautas; de Groot, Bert L.
2013-01-01
Analysis of the internal dynamics of a biological molecule requires the successful removal of overall translation and rotation. Particularly for flexible or intrinsically disordered peptides, this is a challenging task due to the absence of a well-defined reference structure that could be used for superpositioning. In this work, we started the analysis with a widely known formulation of an objective for the problem of superimposing a set of multiple molecules as variance minimization over an ensemble. A negative effect of this superpositioning method is the introduction of ambiguous rotations, where different rotation matrices may be applied to structurally similar molecules. We developed two algorithms to resolve the suboptimal rotations. The first approach minimizes the variance together with the distance of a structure to a preceding molecule in the ensemble. The second algorithm seeks for minimal variance together with the distance to the nearest neighbors of each structure. The newly developed methods were applied to molecular-dynamics trajectories and normal-mode ensembles of the Aβ peptide, RS peptide, and lysozyme. These new (to our knowledge) superpositioning methods combine the benefits of variance and distance between nearest-neighbor(s) minimization, providing a solution for the analysis of intrinsic motions of flexible molecules and resolving ambiguous rotations. PMID:23332072
Toward quantum superposition of living organisms
NASA Astrophysics Data System (ADS)
Romero-Isart, Oriol; Juan, Mathieu L.; Quidant, Romain; Cirac, J. Ignacio
2010-03-01
The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deléglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6; Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schrödinger's cat 'gedanken' paradigm (Schrödinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.
X-ray optics simulation using Gaussian superposition technique
Idir, M.; Cywiak, M.; Morales, A.; Modi, M.H.
2011-09-15
We present an efficient method to perform x-ray optics simulations with highly or partially coherent x-ray sources using a Gaussian superposition technique. In a previous paper, we demonstrated that full characterization of optical systems, diffractive and geometric, is possible by using the Fresnel Gaussian Shape Invariant (FGSI) previously reported in the literature. The complex amplitude distribution in the object plane is represented by a linear superposition of complex Gaussian wavelets and then propagated through the optical system by means of the referred Gaussian invariant. This allows ray tracing through the optical system and at the same time allows calculating with high precision the complex wave-amplitude distribution at any plane of observation. This technique can be applied in a wide spectral range where the Fresnel diffraction integral applies, including visible light, x-rays, acoustic waves, etc. We describe the technique and include some computer simulations as illustrative examples for x-ray optical components. We show also that this method can be used to study partial or total coherence illumination problems.
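The core of the approach, propagating each Gaussian analytically and superposing the results, can be sketched in 1-D and checked against direct angular-spectrum (Fresnel) propagation of the summed field. The wavelength and geometry below are toy visible-light numbers, not an x-ray configuration, and the analytic formula is the standard Gaussian-beam result rather than the FGSI formalism:

```python
import numpy as np

# 1-D sketch: a field written as a sum of Gaussians is propagated by
# advancing each Gaussian analytically, then compared with numerical
# angular-spectrum propagation of the summed input field.
lam, z, w0 = 0.5e-6, 0.05, 50e-6           # wavelength, distance, waist (m)
n, dx = 4096, 1e-6
x = (np.arange(n) - n // 2) * dx
centers = [-150e-6, 0.0, 120e-6]           # toy Gaussian centers

def gauss_prop(x, x0, z):
    """Analytic Fresnel propagation of exp(-(x - x0)^2 / w0^2)."""
    G = 1 + 1j * z * lam / (np.pi * w0 ** 2)
    return np.exp(-(x - x0) ** 2 / (w0 ** 2 * G)) / np.sqrt(G)

# superposition of analytically propagated Gaussians
u_analytic = sum(gauss_prop(x, c, z) for c in centers)

# reference: numerical angular-spectrum propagation of the summed input
u0 = sum(np.exp(-(x - c) ** 2 / w0 ** 2) for c in centers)
f = np.fft.fftfreq(n, dx)
u_num = np.fft.ifft(np.fft.fft(u0) * np.exp(-1j * np.pi * lam * z * f ** 2))
print(np.max(np.abs(u_analytic - u_num)) < 1e-6)  # True
```

The agreement reflects the linearity of the Fresnel integral: propagating the superposition equals superposing the propagated Gaussians, which is what makes the decomposition useful.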
Dubrovsky, V. G.; Topovsky, A. V.
2013-03-15
New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, ..., N, are constructed via the Zakharov and Manakov ∂-dressing method. Simple nonlinear superpositions are represented, up to a constant, by the sums of solutions u^(n) and calculated by ∂-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. It is remarkable that in the zero-energy limit simple nonlinear superpositions convert to linear ones in the form of the sums of special solutions u^(n). It is shown that the sums u = u^(k_1) + ... + u^(k_m), 1 ≤ k_1 < k_2 < ... < k_m ≤ N, of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions also represent new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.
Sequential Syndrome Decoding of Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
The algebraic structure of convolutional codes is reviewed, and sequential syndrome decoding is applied to these codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be used to sequentially find the minimum-weight error sequence.
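The codes discussed above are easiest to picture through a small encoder. Below is a minimal sketch (my own illustration, not from the paper) of a rate-1/2 feedforward convolutional encoder with the standard constraint-length-3 generators G(D) = [1 + D + D², 1 + D²] (octal 7,5); a syndrome former or stack decoder would operate on bit streams produced this way.

```python
def conv_encode(bits, taps=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 feedforward convolutional encoder with the standard
    constraint-length-3 generators G(D) = [1 + D + D^2, 1 + D^2]
    (octal 7,5). Two output bits are emitted per input bit."""
    state = [0, 0]                      # shift register: the last two inputs
    out = []
    for b in bits:
        window = [b] + state            # current bit plus register contents
        for g in taps:
            out.append(sum(w & t for w, t in zip(window, g)) % 2)
        state = [b, state[0]]           # shift the register
    return out

# Impulse response reads off the generator polynomials pairwise.
codeword = conv_encode([1, 0, 0])       # -> [1, 1, 1, 0, 1, 1]
```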
A Double Precision High Speed Convolution Processor
NASA Astrophysics Data System (ADS)
Larochelle, F.; Coté, J. F.; Malowany, A. S.
1989-11-01
There exist several convolution processors on the market that can process images at video rate. However, none of these processors operates in floating point arithmetic. Unfortunately, many image processing algorithms presently under development are inoperable in integer arithmetic, forcing researchers to use regular computers. To solve this problem, we designed a specialized convolution processor that operates in double precision floating point arithmetic with a throughput several thousand times faster than that obtained on a regular computer. Its high performance is attributed to a VLSI double precision convolution systolic cell designed in our laboratories. A 9×9 systolic array carries out, in a pipelined manner, every arithmetic operation. The processor is designed to interface directly with the VME bus. A DMA chip is responsible for bringing the original pixel intensities from the memory of the computer to the systolic array and for returning the convolved pixels back to memory. A special use of 8K RAMs allows an inexpensive and efficient way of delaying the pixel intensities in order to supply the right sequence to the systolic array. On-board circuitry converts pixel values into floating point representation when the image is originally represented with integer values. An additional systolic cell, used as a pipeline adder at the output of the systolic array, offers the possibility of combining images together, which allows a variable convolution window size and color image processing.
Convolutions and Their Applications in Information Science.
ERIC Educational Resources Information Center
Rousseau, Ronald
1998-01-01
Presents definitions of convolutions, mathematical operations between sequences or between functions, and gives examples of their use in information science. In particular they can be used to explain the decline in the use of older literature (obsolescence) or the influence of publication delays on the aging of scientific literature. (Author/LRW)
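The operation defined above can be made concrete with a hypothetical obsolescence example: if a[k] counts papers published k years ago and b[k] is an assumed aging (citation-rate) function, their convolution gives expected citations per year. The data values below are placeholders of my own, purely for illustration.

```python
def convolve(a, b):
    """Discrete convolution of two finite sequences:
    c[k] = sum over i of a[i] * b[k - i]."""
    c = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# Hypothetical data: publications per year convolved with an aging function
# gives the expected citation counts received in each subsequent year.
publications = [10, 12, 15]        # assumed yearly output, oldest first
aging = [0.5, 0.3, 0.1, 0.05]      # assumed citation rate by paper age
citations_by_year = convolve(publications, aging)
```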
Number-Theoretic Functions via Convolution Rings.
ERIC Educational Resources Information Center
Berberian, S. K.
1992-01-01
Demonstrates the number-theoretic property that the divisor-count function of an integer n, Dirichlet-convolved with the number of positive integers k less than or equal to and relatively prime to n (Euler's totient), equals the sum of the divisors of n, using theory developed about multiplicative functions, the units of a convolution ring, and the Möbius function. (MDH)
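The identity in question is naturally stated as a Dirichlet convolution, the multiplication of the convolution ring mentioned above: Σ_{d|n} τ(d)·φ(n/d) = σ(n), where τ counts divisors, φ is Euler's totient, and σ sums divisors. A small numerical check (function names are my own):

```python
import math

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def dirichlet(f, g, n):
    """Dirichlet convolution (f * g)(n) = sum over d | n of f(d) * g(n/d),
    i.e. the ring multiplication of arithmetic functions."""
    return sum(f(d) * g(n // d) for d in divisors(n))

tau = lambda n: len(divisors(n))                  # number of divisors
phi = lambda n: sum(1 for k in range(1, n + 1)    # Euler's totient
                    if math.gcd(k, n) == 1)
sigma = lambda n: sum(divisors(n))                # sum of divisors
```

For example, at n = 12 both sides equal 28.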
Transient Response of Shells of Revolution by Direct Integration and Modal Superposition Methods
NASA Technical Reports Server (NTRS)
Stephens, W. B.; Adelman, H. M.
1974-01-01
The results of an analytical effort to obtain and evaluate transient response data for a cylindrical and a conical shell by use of two different approaches: direct integration and modal superposition are described. The inclusion of nonlinear terms is more important than the inclusion of secondary linear effects (transverse shear deformation and rotary inertia) although there are thin-shell structures where these secondary effects are important. The advantages of the direct integration approach are that geometric nonlinear and secondary effects are easy to include and high-frequency response may be calculated. In comparison to the modal superposition technique the computer storage requirements are smaller. The advantages of the modal superposition approach are that the solution is independent of the previous time history and that once the modal data are obtained, the response for repeated cases may be efficiently computed. Also, any admissible set of initial conditions can be applied.
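The modal superposition approach described above can be sketched on a toy discrete system. The sketch below is my own stand-in (a 4-DOF spring-mass chain, not the paper's shells): the structure is released from rest at its static deflection under a uniform load, analogous to suddenly removing the supports, and the response is the sum of free-vibration modes.

```python
import numpy as np

# Toy stand-in for modal superposition: a 4-DOF fixed-fixed spring-mass chain
# with unit masses, released from its static deflection ("self weight").
n = 4
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # stiffness matrix
w2, Phi = np.linalg.eigh(K)          # modes: K phi = w^2 phi (mass matrix = I)
w = np.sqrt(w2)

u0 = np.linalg.solve(K, np.ones(n))  # static deflection under uniform load
q0 = Phi.T @ u0                      # project onto modal coordinates

def response(t):
    """Displacement at time t: released from rest, mode k evolves as
    q0_k * cos(w_k t); summing the modes is the modal superposition."""
    return Phi @ (q0 * np.cos(w * t))
```

At t = 0 the superposition reproduces the initial static deflection exactly, which is a quick correctness check for any modal-superposition implementation.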
Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution
Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K.
1999-01-01
The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
On Kolmogorov's superpositions and Boolean functions
Beiu, V.
1998-12-31
The paper overviews results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on an explicit numerical (i.e., constructive) algorithm for Kolmogorov's superpositions, it is shown that for obtaining minimum-size neural networks implementing any Boolean function, the activation function of the neurons is the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size in the worst case, it follows that size-optimal solutions for implementing arbitrary Boolean functions require analog circuitry. Conclusions and several comments on the required precision end the paper.
Maximum predictive power and the superposition principle
NASA Technical Reports Server (NTRS)
Summhammer, Johann
1994-01-01
In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept the accuracy of a prediction must only depend on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that for a probability which is a function of two other probabilities maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.
Design of artificial spherical superposition compound eye
NASA Astrophysics Data System (ADS)
Cao, Zhaolou; Zhai, Chunjie; Wang, Keyi
2015-12-01
In this research, design of artificial spherical superposition compound eye is presented. The imaging system consists of three layers of lens arrays. In each channel, two lenses are designed to control the angular magnification and a field lens is added to improve the image quality and extend the field of view. Aspherical surfaces are introduced to improve the image quality. Ray tracing results demonstrate that the light from the same object point is focused at the same imaging point through different channels. Therefore the system has much higher energy efficiency than conventional spherical apposition compound eye.
Atom Microscopy via Dual Resonant Superposition
NASA Astrophysics Data System (ADS)
Abdul Jabar, M. S.; Bakht, Amin Bacha; Jalaluddin, M.; Iftikhar, Ahmad
2015-12-01
An M-type Rb87 atomic system is proposed for one-dimensional atom microscopy under the condition of electromagnetically induced transparency. Super-localization of the atom in the absorption spectrum, together with its delocalization in the dispersion spectrum, is observed due to the dual superposition effect of the resonant fields. The observed minimum-uncertainty peaks will find important applications in laser cooling, creating focused atom beams, atom nanolithography, and measurement of the center-of-mass wave function of moving atoms.
Profile of CT scan output dose in axial and helical modes using convolution
NASA Astrophysics Data System (ADS)
Anam, C.; Haryanto, F.; Widita, R.; Arif, I.; Dougherty, G.
2016-03-01
The profile of the CT scan output dose is crucial for establishing the patient dose profile. The purpose of this study is to investigate the profile of the CT scan output dose in both axial and helical modes using convolution. A single scan output dose profile (SSDP) in the center of a head phantom was measured using a solid-state detector. The multiple scan output dose profile (MSDP) in the axial mode was calculated by convolving the SSDP with a delta function, whereas for the helical mode the MSDP was calculated by convolving the SSDP with a rectangular function. MSDPs were calculated for a range of scan numbers (5, 10, 15, 20, and 25). The multiple scan average dose (MSAD) for differing numbers of scans was compared to the value of the CT dose index (CTDI). Finally, the edge values of the MSDP for every scan number were compared to the corresponding MSAD values. MSDPs were successfully generated by convolution between the SSDP and the appropriate function. We found that CTDI accurately estimates MSAD only when the number of scans was more than 10. We also found that the edge values of the profiles were 42% to 93% lower than the corresponding MSADs.
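The two convolutions described above can be sketched numerically. Everything below is an assumed illustration, not the paper's data: a Gaussian stand-in for the SSDP on a 1 mm grid, and placeholder values for the pitch and number of scans.

```python
import numpy as np

# Assumed Gaussian stand-in for the measured SSDP, on a 1 mm grid.
z = np.arange(-150, 151)                       # position along z (mm)
ssdp = np.exp(-0.5 * (z / 25.0) ** 2)          # single scan dose profile

n_scans, pitch = 15, 10                        # placeholder scan count, table increment (mm)
center = len(z) // 2

# Axial mode: SSDP convolved with a train of delta functions at the scan positions.
deltas = np.zeros(len(z))
deltas[center + pitch * (np.arange(n_scans) - n_scans // 2)] = 1.0
msdp_axial = np.convolve(ssdp, deltas, mode="same")

# Helical mode: SSDP convolved with a rectangular function over the scanned range.
rect = np.zeros(len(z))
half = pitch * n_scans // 2
rect[center - half:center + half] = 1.0 / pitch  # normalized: one scan per pitch
msdp_helical = np.convolve(ssdp, rect, mode="same")
```

The superposed profiles exceed the single-scan peak in the scanned region, which is the overlap effect the MSAD/CTDI comparison quantifies.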
A Mathematical Motivation for Complex-Valued Convolutional Networks.
Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur
2016-05-01
A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
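The three-operation composition described above is short enough to write out directly. The sketch below is my own minimal 1D version: convolution with a complex filter, entrywise absolute value, then local averaging; with a windowed complex exponential as the filter, the stage behaves like a windowed spectral estimate at that frequency, as the text notes. Filter and signal parameters are illustrative assumptions.

```python
import numpy as np

def complex_convnet_layer(x, filt, pool=4):
    """One stage of the composition: (1) convolution with a complex-valued
    filter, (2) entrywise absolute value, (3) local averaging
    (non-overlapping mean pooling)."""
    y = np.abs(np.convolve(x, filt, mode="valid"))
    n = len(y) // pool * pool
    return y[:n].reshape(-1, pool).mean(axis=1)

# A windowed complex exponential filter: the stage then reports signal
# energy near the filter frequency (values here are illustrative).
t = np.arange(16)
filt = np.hanning(16) * np.exp(2j * np.pi * 0.25 * t)
matched = complex_convnet_layer(np.cos(2 * np.pi * 0.25 * np.arange(256)), filt)
off_band = complex_convnet_layer(np.cos(2 * np.pi * 0.05 * np.arange(256)), filt)
```

A signal at the filter frequency produces a much larger response than an off-band signal, which is the windowed-spectrum behavior the correspondence with wavelets rests on.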
Deep Learning with Hierarchical Convolutional Factor Analysis
Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence
2013-01-01
Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342
A convolutional neural network neutrino event classifier
Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.
2016-09-01
Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.
A Construction of MDS Quantum Convolutional Codes
NASA Astrophysics Data System (ADS)
Zhang, Guanghui; Chen, Bocong; Li, Liangchen
2015-09-01
In this paper, two new families of MDS quantum convolutional codes are constructed. The first can be regarded as a generalization of [36, Theorem 6.5], in the sense that we do not assume that q ≡ 1 (mod 4). More specifically, we obtain two classes of MDS quantum convolutional codes with parameters: (i) [(q² + 1, q² − 4i + 3, 1; 2, 2i + 2)]_q, where q ≥ 5 is an odd prime power and 2 ≤ i ≤ (q − 1)/2; (ii) , where q is an odd prime power of the form q = 10m + 3 or 10m + 7 (m ≥ 2), and 2 ≤ i ≤ 2m − 1.
Quantum convolutional codes derived from constacyclic codes
NASA Astrophysics Data System (ADS)
Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng
2014-12-01
In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.
Convolutional Neural Network Based dem Super Resolution
NASA Astrophysics Data System (ADS)
Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang
2016-06-01
DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of learning examples, and a nonlocal algorithm was introduced to carry it out; many experiments showed that the strategy is feasible. In that publication, the learning examples were defined as parts of the original DEM together with their corresponding high-resolution measurements, since this avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this strategy, the learning examples should be diverse and easy to obtain, yet this may cause problems of incompatibility and lack of robustness. To overcome this, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low-resolution DEM and the output is expected to be its high-resolution counterpart. A three-layer model is adopted: the first layer detects features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. The network is trained on a set of example DEMs by minimizing the error between the output and the expected high-resolution DEM. In practical applications, a test DEM is input to the convolutional neural network and a super-resolution DEM is obtained. Many experiments show that the CNN-based method obtains better reconstructions than many classic interpolation methods.
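The three-layer structure described above can be sketched as a plain forward pass. The code below is an assumed simplification of my own (untrained placeholder weights, single-channel 'same' convolutions, ReLU between stages), not the authors' network.

```python
import numpy as np

def conv_same(x, k):
    """'Same'-size 2D convolution with edge padding (odd kernel assumed)."""
    kh, kw = k.shape
    p = np.pad(x, ((kh // 2,) * 2, (kw // 2,) * 2), mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def dem_sr_forward(dem, feats, maps, recon):
    """Forward pass of the three-layer model: feature detection, feature
    compression, reconstruction. Weights here are untrained placeholders;
    a real system would fit them by minimizing the reconstruction error
    against high-resolution example DEMs."""
    h1 = [np.maximum(conv_same(dem, w), 0) for w in feats]           # layer 1
    h2 = [np.maximum(sum(conv_same(h, w) for h, w in zip(h1, ws)), 0)
          for ws in maps]                                            # layer 2
    return sum(conv_same(h, w) for h, w in zip(h2, recon))           # layer 3
```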
Authentication Protocol using Quantum Superposition States
Kanamori, Yoshito; Yoo, Seong-Moo; Gregory, Don A.; Sheldon, Frederick T
2009-01-01
When it became known that quantum computers could break the RSA (named for its creators: Rivest, Shamir, and Adleman) encryption algorithm in polynomial time, quantum cryptography began to be actively studied. Other classical cryptographic algorithms are only secure when malicious users do not have sufficient computational power to break security within a practical amount of time. Recently, many quantum authentication protocols sharing quantum entangled particles between communicators have been proposed, providing unconditional security. An issue caused by sharing quantum entangled particles is that it may not be simple to apply these protocols to authenticate a specific user in a group of many users. An authentication protocol using quantum superposition states instead of quantum entangled particles is proposed. The random number shared between a sender and a receiver can be used for classical encryption after the authentication has succeeded. The proposed protocol can be implemented with the current technologies introduced in this paper.
Optically multiplexed imaging with superposition space tracking.
Uttam, Shikhar; Goodman, Nathan A; Neifeld, Mark A; Kim, Changsoon; John, Renu; Kim, Jungsang; Brady, David
2009-02-01
We describe a novel method to track targets in a large field of view. This method simultaneously images multiple, encoded sub-fields of view onto a common focal plane. Sub-field encoding enables target tracking by creating a unique connection between target characteristics in superposition space and the target's true position in real space. This is accomplished without reconstructing a conventional image of the large field of view. Potential encoding schemes include spatial shift, rotation, and magnification. We discuss each of these encoding schemes, but the main emphasis of the paper and all examples are based on one-dimensional spatial shift encoding. System performance is evaluated in terms of two criteria: average decoding time and probability of decoding error. We study these performance criteria as a function of resolution in the encoding scheme and signal-to-noise ratio. Finally, we include simulation and experimental results demonstrating our novel tracking method. PMID:19189000
On the superposition principle in interference experiments.
Sinha, Aninda; H Vijay, Aravind; Sinha, Urbasi
2015-01-01
The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations as well as the non-relativistic Schrödinger equation. PMID:25973948
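The Sorkin parameter mentioned above has a compact definition for three paths: ε = P_ABC − (P_AB + P_AC + P_BC) + (P_A + P_B + P_C), where each P is the detection probability with that subset of paths open. Under exact (naive) superposition, probabilities are squared moduli of summed amplitudes and ε vanishes identically; the paper's analytic formula quantifies the deviation from this ideal. A small sketch of the Born-rule identity (function names are mine):

```python
from itertools import combinations

def born_prob(amps):
    """Detection probability for a set of open paths under the Born rule:
    the squared modulus of the summed complex amplitudes."""
    return abs(sum(amps)) ** 2

def sorkin_parameter(a, b, c):
    """Three-path Sorkin parameter: P_ABC - (P_AB + P_AC + P_BC)
    + (P_A + P_B + P_C). Identically zero for Born-rule probabilities,
    i.e. no genuine three-path interference."""
    amps = (a, b, c)
    eps = born_prob(amps)
    for pair in combinations(amps, 2):
        eps -= born_prob(pair)
    for single in amps:
        eps += born_prob((single,))
    return eps
```

Expanding the squared moduli shows every cross term cancels pairwise, which is why any nonzero measured ε signals a correction to the naive application of the principle.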
Multipartite cellular automata and the superposition principle
NASA Astrophysics Data System (ADS)
Elze, Hans-Thomas
2016-05-01
Cellular automata (CA) can show well known features of quantum mechanics (QM), such as a linear updating rule that resembles a discretized form of the Schrödinger equation together with its conservation laws. Surprisingly, a whole class of “natural” Hamiltonian CA, which are based entirely on integer-valued variables and couplings and derived from an action principle, can be mapped reversibly to continuum models with the help of sampling theory. This results in “deformed” quantum mechanical models with a finite discreteness scale l, which for l→0 reproduce the familiar continuum limit. Presently, we show, in particular, how such automata can form “multipartite” systems consistently with the tensor product structures of non-relativistic many-body QM, while maintaining the linearity of dynamics. Consequently, the superposition principle is fully operative already on the level of these primordial discrete deterministic automata, including the essential quantum effects of interference and entanglement.
Superposition and alignment of labeled point clouds.
Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke
2011-01-01
Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional Euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled point clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.
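The optimal superposition step underlying the alignment can be sketched with the classical least-squares (Kabsch/SVD) procedure. This is a generic sketch under two simplifying assumptions of mine: correspondences between rows are already known, and the class labels of the labeled point clouds are ignored.

```python
import numpy as np

def kabsch_superpose(P, Q):
    """Least-squares rigid superposition of point cloud P onto Q (Kabsch/SVD),
    assuming a known one-to-one correspondence between rows. Labels, as used
    in the paper's labeled point clouds, are ignored in this sketch."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)   # center both clouds
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)               # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T           # optimal proper rotation
    return Pc @ R.T + Q.mean(axis=0)                  # P expressed in Q's frame
```

When Q is an exact rigid transform of P, the superposed cloud recovers Q to machine precision; in the paper's setting this superposition then constrains which constituents may be matched in the alignment.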
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
NASA Astrophysics Data System (ADS)
Mochida, Y.; Ilanko, S.
2010-05-01
This paper shows that the transient response of a plate undergoing flexural vibration can be calculated accurately and efficiently using the natural frequencies and modes obtained from the superposition method. The response of a completely free plate is used to demonstrate this. The case considered is one where all supports of a simply supported thin rectangular plate under self weight are suddenly removed. The resulting motion consists of a combination of the natural modes of a completely free plate. The modal superposition method is used for determining the transient response, and the natural frequencies and mode shapes of the plates are obtained by Gorman's superposition method. These are compared with corresponding results based on modes from the Rayleigh-Ritz method with ordinary and degenerated free-free beam functions. There is excellent agreement between the results from both approaches, but the superposition method has shown faster convergence, and the results may serve as benchmarks for the transient response of completely free plates.
Decoherence of quantum superpositions through coupling to engineered reservoirs
Myatt; King; Turchette; Sackett; Kielpinski; Itano; Monroe; Wineland
2000-01-20
The theory of quantum mechanics applies to closed systems. In such ideal situations, a single atom can, for example, exist simultaneously in a superposition of two different spatial locations. In contrast, real systems always interact with their environment, with the consequence that macroscopic quantum superpositions (as illustrated by the "Schrödinger's cat" thought-experiment) are not observed. Moreover, macroscopic superpositions decay so quickly that even the dynamics of decoherence cannot be observed. However, mesoscopic systems offer the possibility of observing the decoherence of such quantum superpositions. Here we present measurements of the decoherence of superposed motional states of a single trapped atom. Decoherence is induced by coupling the atom to engineered reservoirs, in which the coupling and state of the environment are controllable. We perform three experiments, finding that the decoherence rate scales with the square of a quantity describing the amplitude of the superposition state.
Dose discrepancies in the buildup region and their impact on dose calculations for IMRT fields
Hsu, Shu-Hui; Moran, Jean M.; Chen Yu; Kulasekere, Ravi; Roberson, Peter L.
2010-05-15
Purpose: Dose accuracy in the buildup region for radiotherapy treatment planning suffers from challenges in both measurement and calculation. This study investigates the dosimetry in the buildup region at normal and oblique incidences for open and IMRT fields and assesses the quality of the treatment planning calculations. Methods: This study was divided into three parts. First, percent depth doses and profiles (for 5×5, 10×10, 20×20, and 30×30 cm² field sizes at 0°, 45°, and 70° incidences) were measured in the buildup region in Solid Water using an Attix parallel-plate chamber and Kodak XV film, respectively. Second, the parameters in the empirical contamination (EC) term of the convolution/superposition (CVSP) calculation algorithm were fitted based on open-field measurements. Finally, seven segmental head-and-neck IMRT fields were measured on a flat phantom geometry and compared to calculations using γ and dose-gradient compensation (C) indices to evaluate the impact of residual discrepancies and to assess the adequacy of the contamination term for IMRT fields. Results: Local deviations between measurements and calculations for open fields were within 1% and 4% in the buildup region for normal and oblique incidences, respectively. The C index with 5%/1 mm criteria for IMRT fields ranged from 89% to 99% and from 96% to 98% at 2 mm and 10 cm depths, respectively. The quality of agreement in the buildup region for open and IMRT fields is comparable to that in nonbuildup regions. Conclusions: The added EC term in CVSP was determined to be adequate for both open and IMRT fields. Due to the dependence of calculation accuracy on (1) EC modeling, (2) internal convolution and density grid sizes, (3) implementation details in the algorithm, and (4) the accuracy of measurements used for treatment planning system commissioning, the authors recommend an evaluation of the accuracy of near-surface dose calculations as a part of treatment planning.
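The γ-type comparison used above combines a dose-difference criterion with a distance-to-agreement criterion. The sketch below is a simplified 1D stand-in of my own (global dose normalization, brute-force minimization), not the paper's γ or C implementation; points with γ ≤ 1 pass the stated criteria.

```python
import numpy as np

def gamma_index(ref, evl, dx, dose_tol=0.05, dist_tol=1.0):
    """Simplified 1D gamma evaluation: for each reference point, the minimum
    combined dose-difference / distance-to-agreement metric over the
    evaluated profile. Dose tolerance is a fraction of the reference maximum
    (global normalization); dx and dist_tol are in mm."""
    x = np.arange(len(ref)) * dx
    gamma = np.empty(len(ref))
    for i in range(len(ref)):
        dose_term = (evl - ref[i]) / (dose_tol * ref.max())
        dist_term = (x - x[i]) / dist_tol
        gamma[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gamma
```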
Applications of convolution voltammetry in electroanalytical chemistry.
Bentley, Cameron L; Bond, Alan M; Hollenkamp, Anthony F; Mahon, Peter J; Zhang, Jie
2014-02-18
The robustness of convolution voltammetry for determining accurate values of the diffusivity (D), bulk concentration (C_b), and stoichiometric number of electrons (n) has been demonstrated by applying the technique to a series of electrode reactions in molecular solvents and room temperature ionic liquids (RTILs). In acetonitrile, the relatively minor contribution of nonfaradaic current facilitates analysis with macrodisk electrodes, thus moderate scan rates can be used without the need to perform background subtraction to quantify the diffusivity of iodide [D = 1.75 (±0.02) × 10⁻⁵ cm² s⁻¹] in this solvent. In the RTIL 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide, background subtraction is necessary at a macrodisk electrode but can be avoided at a microdisk electrode, thereby simplifying the analytical procedure and allowing the diffusivity of iodide [D = 2.70 (±0.03) × 10⁻⁷ cm² s⁻¹] to be quantified. Use of a convolutive procedure which simultaneously allows D and nC_b values to be determined is also demonstrated. Three conditions under which a technique of this kind may be applied are explored and are related to electroactive species which display slow dissolution kinetics, undergo a single multielectron transfer step, or contain multiple noninteracting redox centers, using ferrocene in an RTIL, 1,4-dinitro-2,3,5,6-tetramethylbenzene, and an alkynylruthenium trimer, respectively, as examples. The results highlight the advantages of convolution voltammetry over steady-state techniques such as rotating disk electrode voltammetry and microdisk electrode voltammetry, as it is not restricted by the mode of diffusion (planar or radial), hence removing limitations on solvent viscosity, electrode geometry, and voltammetric scan rate.
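The convolution at the heart of this technique is the semi-integral of the current, M(t) = (1/√π) ∫₀ᵗ I(u)/√(t − u) du. A minimal numerical sketch of my own (uniform midpoint grid, not the authors' software) follows; for a diffusion-limited Cottrellian current I(t) ∝ t^(−1/2), the semi-integral is constant (√π for a unit prefactor), and it is this plateau that yields D and nC_b.

```python
import numpy as np

def semi_integral(current, dt):
    """Semi-integration of a sampled current transient by discrete
    convolution: M(t) = (1 / sqrt(pi)) * integral_0^t I(u) / sqrt(t - u) du,
    approximated on a uniform midpoint grid (samples taken at (k + 1/2) dt
    to avoid the integrable singularities at both endpoints)."""
    t_mid = (np.arange(len(current)) + 0.5) * dt
    kernel = 1.0 / np.sqrt(np.pi * t_mid)
    return np.convolve(current, kernel)[:len(current)] * dt
```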
Bacterial colony counting by Convolutional Neural Networks.
Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto
2015-01-01
Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, yet fundamental task in microbiology. Computer-vision-based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNN) for counting the number of colonies contained in confluent agglomerates, which achieved an overall accuracy of 92.8% on a large, challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches, making it a promising approach for many related applications.
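The workhorse of a CNN such as the one described is the discrete 2D convolution (in practice, cross-correlation) of an image with a learned kernel, followed by a nonlinearity. A self-contained sketch of that single building block, not the authors' network:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a CNN layer."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Nonlinearity applied after each convolution in a typical CNN."""
    return np.maximum(x, 0.0)

# A 4x4 image of ones filtered by a 2x2 kernel of ones: every window sums to 4
feature_map = relu(conv2d(np.ones((4, 4)), np.ones((2, 2))))
```

In a real network many such kernels are learned per layer and the feature maps are stacked, pooled, and fed to further layers.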
Convolution neural networks for ship type recognition
NASA Astrophysics Data System (ADS)
Rainey, Katie; Reeder, John D.; Corelli, Alexander G.
2016-05-01
Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.
Description of a quantum convolutional code.
Ollivier, Harold; Tillich, Jean-Pierre
2003-10-24
We describe a quantum error correction scheme aimed at protecting a flow of quantum information over long distance communication. It is largely inspired by the theory of classical convolutional codes which are used in similar circumstances in classical communication. The particular example shown here uses the stabilizer formalism. We provide an explicit encoding circuit and its associated error estimation algorithm. The latter gives the most likely error over any memoryless quantum channel, with a complexity growing only linearly with the number of encoded qubits.
Convolution formulations for non-negative intensity.
Williams, Earl G
2013-08-01
Previously unknown spatial convolution formulas for a variant of the active normal intensity in planar coordinates have been derived that use measured pressure or normal velocity near-field holograms to construct a positive-only (outward) intensity distribution in the plane, quantifying the areas of the vibrating structure that produce radiation to the far-field. This is an extension of the outgoing-only (unipolar) intensity technique recently developed for arbitrary geometries by Steffen Marburg. The method is applied independently to pressure and velocity data measured in a plane close to the surface of a point-driven, unbaffled rectangular plate in the laboratory. It is demonstrated that the sound producing regions of the structure are clearly revealed using the derived formulas and that the spatial resolution is limited to a half-wavelength. A second set of formulas called the hybrid-intensity formulas are also derived which yield a bipolar intensity using a different spatial convolution operator, again using either the measured pressure or velocity. It is demonstrated from the experiment results that the velocity formula yields the classical active intensity and the pressure formula an interesting hybrid intensity that may be useful for source localization. Computations are fast and carried out in real space without Fourier transforms into wavenumber space. PMID:23927105
Superposition rules for higher order systems and their applications
NASA Astrophysics Data System (ADS)
Cariñena, J. F.; Grabowski, J.; de Lucas, J.
2012-05-01
Superposition rules form a class of functions that describe general solutions of systems of first-order ordinary differential equations in terms of generic families of particular solutions and certain constants. In this work, we extend this notion and other related ones to systems of higher order differential equations and analyse their properties. Several results concerning the existence of various types of superposition rules for higher order systems are proved and illustrated with examples extracted from the physics and mathematics literature. In particular, two new superposition rules for the second- and third-order Kummer-Schwarz equations are derived.
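For a homogeneous linear first-order system, the superposition rule is the familiar one: any constant-coefficient linear combination of particular solutions is again a solution. A quick numerical check with explicit Euler (which is itself linear in the state, so the rule holds to machine precision step by step):

```python
import numpy as np

def euler_orbit(A, x0, dt, steps):
    """Integrate dx/dt = A @ x with explicit Euler; returns the full trajectory."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (A @ xs[-1]))
    return np.array(xs)

A = np.array([[0.0, 1.0], [-1.0, 0.0]])          # harmonic-oscillator system
x1 = euler_orbit(A, [1.0, 0.0], 1e-3, 1000)      # two particular solutions
x2 = euler_orbit(A, [0.0, 1.0], 1e-3, 1000)
c1, c2 = 2.5, -0.7                               # arbitrary constants
combo = euler_orbit(A, [c1, c2], 1e-3, 1000)     # IC = c1*x1(0) + c2*x2(0)
```

The trajectory from the combined initial condition coincides with the same combination of the two particular trajectories. The paper's contribution concerns the much less obvious analogues of this property for nonlinear and higher-order systems.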
Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Hunter, Craig A.
1999-01-01
An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature, and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios (NPRs) and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring depends on convolution location, Mach number, boattail angle, and NPR. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.
Predicting Flow-Induced Vibrations In A Convoluted Hose
NASA Technical Reports Server (NTRS)
Harvey, Stuart A.
1994-01-01
Composite model constructed from two less accurate models. Predicts approximate frequencies and modes of vibrations induced by flows of various fluids in convoluted hose. Based partly on spring-and-lumped-mass representation of dynamics, involving the springiness and mass of the hose convolutions and the density of the fluid in the hose.
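A single-degree-of-freedom version of such a spring-and-lumped-mass model reduces to the textbook relation f = (1/2π)√(k/m), with the lumped mass combining the convolution wall and the fluid it contains. A sketch under that assumption; the split into wall and fluid mass is illustrative, not the report's actual model:

```python
import math

def natural_frequency_hz(k, m_wall, m_fluid):
    """Undamped natural frequency of one convolution modeled as a spring of
    stiffness k carrying the lumped mass of the hose wall plus the fluid."""
    return math.sqrt(k / (m_wall + m_fluid)) / (2.0 * math.pi)

# Example: stiffness chosen so the combined 1.0 kg mass resonates at 1 Hz
f = natural_frequency_hz(4.0 * math.pi ** 2, 0.6, 0.4)
```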
New quantum MDS-convolutional codes derived from constacyclic codes
NASA Astrophysics Data System (ADS)
Li, Fengwei; Yue, Qin
2015-12-01
In this paper, we utilize a family of Hermitian dual-containing constacyclic codes to construct classical and quantum MDS convolutional codes. Our classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound.
Nonclassical properties and quantum resources of hierarchical photonic superposition states
Volkoff, T. J.
2015-11-15
We motivate and introduce a class of “hierarchical” quantum superposition states of N coupled quantum oscillators. Unlike other well-known multimode photonic Schrödinger-cat states such as entangled coherent states, the hierarchical superposition states are characterized as two-branch superpositions of tensor products of single-mode Schrödinger-cat states. In addition to analyzing the photon statistics and quasiprobability distributions of prominent examples of these nonclassical states, we consider their usefulness for high-precision quantum metrology of nonlinear optical Hamiltonians and quantify their mode entanglement. We propose two methods for generating hierarchical superpositions in N = 2 coupled microwave cavities, exploiting currently existing quantum optical technology for generating entanglement between spatially separated electromagnetic field modes.
Has Macroscopic Superposition in Superconducting Qubits Really Been Demonstrated?
NASA Astrophysics Data System (ADS)
Kadin, Alan M.; Kaplan, Steven B.
Quantum computing depends on many qubits coupled via quantum entanglement, where each qubit must be a simultaneous superposition of two quantum states of different energies, rather than one state or the other as in classical bits. It is widely believed that observations of energy quantization and Rabi oscillations in macroscopic superconducting circuits prove that these are proper qubits with quantum superposition. But is this really the only interpretation? We propose a novel paradigm for macroscopic quantum systems, in which energies are quantized (with photon-mediated transitions), but the quantized states are realistic objects without superposition. For example, a circuit could make a transition from one quantized value of flux to another, but would never have both at the same time. We further suggest a superconducting circuit that can put this proposal to a test. Without quantum superposition, most of the potential benefit of quantum computing would be lost.
NASA Technical Reports Server (NTRS)
Platnick, S.
1999-01-01
Photon transport in a multiple scattering medium is critically dependent on scattering statistics, in particular the average number of scatterings. A superposition technique is derived to accurately determine the average number of scatterings encountered by reflected and transmitted photons within arbitrary layers in plane-parallel, vertically inhomogeneous clouds. As expected, the resulting scattering number profiles are highly dependent on cloud particle absorption and solar/viewing geometry. The technique uses efficient adding and doubling radiative transfer procedures, avoiding traditional time-intensive Monte Carlo methods. Derived superposition formulae are applied to a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Cloud remote sensing techniques that use solar reflectance or transmittance measurements generally assume a homogeneous plane-parallel cloud structure. The scales over which this assumption is relevant, in both the vertical and horizontal, can be obtained from the superposition calculations. Though the emphasis is on photon transport in clouds, the derived technique is applicable to any scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers in the atmosphere.
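The adding procedure referenced above has a simple scalar caricature: combining two layers' reflectances and transmittances sums the geometric series of interreflections between them. A sketch under strong simplifying assumptions (monochromatic, non-absorbing, angularly averaged quantities), not the paper's full adding/doubling code:

```python
def add_layers(R1, T1, R2, T2):
    """Scalar 'adding' equations for two plane-parallel layers. The
    1/(1 - R1*R2) factor sums all orders of interreflection between them."""
    denom = 1.0 - R1 * R2
    R = R1 + T1 * R2 * T1 / denom
    T = T1 * T2 / denom
    return R, T

# For conservative (non-absorbing) layers with R_i + T_i = 1,
# the combined layer also satisfies R + T = 1
R, T = add_layers(0.3, 0.7, 0.5, 0.5)
```

Doubling applies the same equations to two identical layers repeatedly, building optically thick clouds in logarithmically few steps, which is the efficiency the abstract contrasts with Monte Carlo.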
NASA Astrophysics Data System (ADS)
Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav
2016-09-01
In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.
Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav
2016-09-01
In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.
Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav
2016-09-01
In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability. PMID:27636460
Superposition of quantum and classical rotational motions in Sc2C2@C84 fullerite
NASA Astrophysics Data System (ADS)
Michel, K. H.; Verberck, B.; Hulman, M.; Kuzmany, H.; Krause, M.
2007-02-01
The superposition of the quantum rotational motion (tunneling) of the encapsulated Sc2C2 complex with the classical rotational motion of the surrounding C84 molecule in a powder crystal of Sc2C2@C84 fullerite is investigated by theory. Since the quantum rotor is dragged along by the C84 molecule, any detection method which couples to the quantum rotor (in this case, the C2 bond of the Sc2C2 complex) also probes the thermally excited classical motion (uniaxial rotational diffusion and stochastic meroaxial jumps) of the surrounding fullerene. The dynamic rotation-rotation response functions in frequency space are obtained as convolutions of quantum and classical dynamic correlation functions. The corresponding Raman scattering laws are derived, and the overall shape of the spectra and the width of the resonance lines are studied as functions of temperature. The results of the theory are confronted with experimental low-frequency Raman spectra on powder crystals of Sc2C2@C84 [M. Krause et al., Phys. Rev. Lett. 93, 137403 (2004)]. The agreement of theory with experiment is very satisfactory in a broad temperature range.
Convolutional code performance in planetary entry channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.
1974-01-01
The planetary entry channel is modeled for communication purposes, representing turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat the time-correlated fading of the channel; only modest amounts of interleaving are required to approach the performance of the memoryless channel; additional propagation results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.
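The Viterbi maximum-likelihood decoding mentioned above can be illustrated with the classic rate-1/2, constraint-length-3 code (generators 7 and 5 octal) and hard decisions. This is a generic textbook sketch, not the study's coded BPSK system:

```python
def encode(bits):
    """Rate-1/2, K=3 convolutional encoder, generator polynomials (7, 5) octal.
    Two zero tail bits flush the encoder back to the all-zero state."""
    s1 = s0 = 0                         # s1 = u[k-1], s0 = u[k-2]
    out = []
    for u in list(bits) + [0, 0]:
        out += [u ^ s1 ^ s0, u ^ s0]    # generator taps 111 and 101
        s0, s1 = s1, u
    return out

def viterbi_decode(rx, nbits):
    """Hard-decision Viterbi decoding: survivor path of minimum Hamming distance."""
    INF = float('inf')
    metric = [0.0, INF, INF, INF]       # encoder starts in state 0
    paths = [[], [], [], []]
    for k in range(0, len(rx), 2):
        r1, r2 = rx[k], rx[k + 1]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            s1, s0 = s >> 1, s & 1
            for u in (0, 1):
                o1, o2 = u ^ s1 ^ s0, u ^ s0   # branch output bits
                ns = (u << 1) | s1             # next state after shifting in u
                m = metric[s] + (o1 != r1) + (o2 != r2)
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])
    return paths[best][:nbits]          # drop the two tail bits
```

This code has free distance 5, so any single channel bit error (and most double errors) is corrected, which is the behavior interleaving is meant to preserve on a fading channel by breaking up error bursts.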
Image statistics decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Pitt, G. H., III; Swanson, L.; Yuen, J. H.
1987-01-01
It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.
Tissue heterogeneity in IMRT dose calculation for lung cancer.
Pasciuti, Katia; Iaccarino, Giuseppe; Strigari, Lidia; Malatesta, Tiziana; Benassi, Marcello; Di Nallo, Anna Maria; Mirri, Alessandra; Pinzi, Valentina; Landoni, Valeria
2011-01-01
The aim of this study was to evaluate the differences in accuracy of dose calculation between 3 commonly used algorithms, the Pencil Beam algorithm (PB), the Anisotropic Analytical Algorithm (AAA), and the Collapsed Cone Convolution Superposition (CCCS) for intensity-modulated radiation therapy (IMRT). The 2D dose distributions obtained with the 3 algorithms were compared on each CT slice pixel by pixel, using the MATLAB code (The MathWorks, Natick, MA), and the agreement was assessed with the γ function. The effect of the differences on dose-volume histograms (DVHs), tumor control, and normal tissue complication probability (TCP and NTCP) were also evaluated, and its significance was quantified by using a nonparametric test. In general PB generates regions of over-dosage both in the lung and in the tumor area. These differences are not always evident in the DVHs of the lung, although the Wilcoxon test indicated significant differences in 2 of 4 patients. Disagreement in the lung region was also found when the γ analysis was performed. The effect on TCP is less important than for NTCP because of the slope of the curve at the level of the dose of interest. The effect of dose calculation inaccuracy is patient-dependent and strongly related to beam geometry and to the localization of the tumor. When multiple intensity-modulated beams are used, the effect of the presence of the heterogeneity on dose distribution may not always be easily predictable. PMID:20970989
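The γ comparison used in the study combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; points with γ ≤ 1 pass. A simplified 1D global-γ sketch with brute-force search over the evaluated distribution (illustrative only, not the authors' MATLAB implementation):

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dd=0.03, dta=3.0):
    """Simplified 1D global gamma index (default 3%/3 mm).
    dd: dose criterion as a fraction of the max reference dose;
    dta: distance criterion in the same units as x (e.g. mm)."""
    d_norm = dd * dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2          # distance term vs. every point
        diff2 = ((dose_eval - di) / d_norm) ** 2  # dose-difference term
        gam[i] = np.sqrt(np.min(dist2 + diff2))
    return gam

x = np.arange(0.0, 11.0)                       # positions, e.g. mm
ref = np.ones(11)                              # flat reference profile
identical = gamma_1d(x, ref, ref)              # perfect agreement: gamma = 0
hot = gamma_1d(x, ref, 1.1 * ref)              # +10% everywhere fails 3%/3 mm
```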
Quantum superposition at the half-metre scale.
Kovachy, T; Asenbaum, P; Overstreet, C; Donnelly, C A; Dickerson, S M; Sugarbaker, A; Hogan, J M; Kasevich, M A
2015-12-24
The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger's cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov-Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity. PMID:26701053
NASA Astrophysics Data System (ADS)
Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.
2016-02-01
In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm⁻¹) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.
relline: Relativistic line profiles calculation
NASA Astrophysics Data System (ADS)
Dauser, Thomas
2015-05-01
relline calculates relativistic line profiles; it is compatible with the common X-ray data analysis software XSPEC (ascl:9910.005) and ISIS (ascl:1302.002). The two basic forms are an additive line model (RELLINE) and a convolution model to calculate relativistic smearing (RELCONV).
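A convolution model like RELCONV acts on a binned spectrum by smearing it with a broadening kernel while conserving flux. A minimal numpy sketch of that operation, using a Gaussian stand-in for the relativistic kernel (not the relline kernel itself):

```python
import numpy as np

def smear(spectrum, kernel):
    """Flux-conserving convolution smearing of a binned spectrum."""
    k = kernel / kernel.sum()                 # unit-area kernel preserves flux
    return np.convolve(spectrum, k, mode='same')

grid = np.arange(-10, 11)
gauss = np.exp(-0.5 * (grid / 3.0) ** 2)      # stand-in broadening kernel
line = np.zeros(101)
line[50] = 1.0                                # narrow emission line
broadened = smear(line, gauss)                # broadened profile, same total flux
```

In XSPEC/ISIS terms, this is what writing `relconv*model` does: the additive component's spectrum is passed through the convolution component bin by bin.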
Observing a coherent superposition of an atom and a molecule
Dowling, Mark R.; Bartlett, Stephen D.; Rudolph, Terry; Spekkens, Robert W.
2006-11-15
We demonstrate that it is possible, in principle, to perform a Ramsey-type interference experiment to exhibit a coherent superposition of a single atom and a diatomic molecule. This gedanken experiment, based on the techniques of Aharonov and Susskind [Phys. Rev. 155, 1428 (1967)], explicitly violates the commonly accepted superselection rule that forbids coherent superpositions of eigenstates of differing atom number. A Bose-Einstein condensate plays the role of a reference frame that allows for coherent operations analogous to Ramsey pulses. We also investigate an analogous gedanken experiment to exhibit a coherent superposition of a single boson and a fermion, violating the commonly accepted superselection rule forbidding coherent superpositions of states of differing particle statistics. In this case, the reference frame is realized by a multimode state of many fermions. This latter case reproduces all of the relevant features of Ramsey interferometry, including Ramsey fringes over many repetitions of the experiment. However, the apparent inability of this proposed experiment to produce well-defined relative phases between two distinct systems each described by a coherent superposition of a boson and a fermion demonstrates that there are additional, outstanding requirements to fully 'lift' the univalence superselection rule.
Metaheuristic Algorithms for Convolution Neural Network.
Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni
2016-01-01
A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738
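Of the three metaheuristics compared, simulated annealing is the simplest to sketch: accept worse candidates with probability exp(-Δ/T) and cool T geometrically. A generic scalar version with a toy objective standing in for a CNN validation loss (not the paper's implementation or parameters):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=2000, seed=0):
    """Minimize f by random perturbation with temperature-gated acceptance."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)    # propose a nearby candidate
        fc = f(cand)
        # always accept improvements; accept worse moves with prob exp(-delta/T)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                           # geometric cooling schedule
    return best, fbest

best, fbest = simulated_annealing(lambda x: (x - 3.0) ** 2, 0.0)
```

In the paper's setting, x would be a vector of CNN weights or hyperparameters and f the classification error; the acceptance rule is the same.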
Robust mesoscopic superposition of strongly correlated ultracold atoms
Hallwood, David W.; Ernst, Thomas; Brand, Joachim
2010-12-15
We propose a scheme to create coherent superpositions of annular flow of strongly interacting bosonic atoms in a one-dimensional ring trap. The nonrotating ground state is coupled to a vortex state with mesoscopic angular momentum by means of a narrow potential barrier and an applied phase that originates from either rotation or a synthetic magnetic field. We show that superposition states in the Tonks-Girardeau regime are robust against single-particle loss due to the effects of strong correlations. The coupling between the mesoscopically distinct states scales much more favorably with particle number than in schemes relying on weak interactions, thus making particle numbers of hundreds or thousands feasible. Coherent oscillations induced by time variation of parameters may serve as a 'smoking gun' signature for detecting superposition states.
Dissipative Optomechanical Preparation of Macroscopic Quantum Superposition States.
Abdi, M; Degenfeld-Schonburg, P; Sameti, M; Navarrete-Benlloch, C; Hartmann, M J
2016-06-10
The transition from quantum to classical physics remains an intensely debated question even though it has been investigated for more than a century. Further clarifications could be obtained by preparing macroscopic objects in spatial quantum superpositions and proposals for generating such states for nanomechanical devices either in a transient or a probabilistic fashion have been put forward. Here, we introduce a method to deterministically obtain spatial superpositions of arbitrary lifetime via dissipative state preparation. In our approach, we engineer a double-well potential for the motion of the mechanical element and drive it towards the ground state, which shows the desired spatial superposition, via optomechanical sideband cooling. We propose a specific implementation based on a superconducting circuit coupled to the mechanical motion of a lithium-decorated monolayer graphene sheet, introduce a method to verify the mechanical state by coupling it to a superconducting qubit, and discuss its prospects for testing collapse models for the quantum to classical transition. PMID:27341233
Experimental creation of superposition of unknown photonic quantum states
NASA Astrophysics Data System (ADS)
Hu, Xiao-Min; Hu, Meng-Jun; Chen, Jiang-Shan; Liu, Bi-Heng; Huang, Yun-Feng; Li, Chuan-Feng; Guo, Guang-Can; Zhang, Yong-Sheng
2016-09-01
As one of the most intriguing intrinsic properties of the quantum world, quantum superposition provokes great interest in its generation. Though a universal quantum machine that creates the superposition of two arbitrary unknown states has been shown to be physically impossible, a probabilistic protocol exists provided that the two input states have nonzero overlaps with a referential state. Here we report a probabilistic quantum machine realizing the superposition of two arbitrary unknown photonic qubits as long as they have nonzero overlaps with the horizontal polarization state |H⟩. A total of 11 different qubit pairs were chosen to test this protocol, and we obtain an average fidelity as high as 0.99, which shows the excellent reliability of our realization. This realization may have significant applications in quantum information and quantum computation, e.g., generating nonclassical states and realizing information compression in quantum computation.
NASA Astrophysics Data System (ADS)
Xiong, Jun; Liu, J. G.; Cao, Li
2015-12-01
This paper presents hardware-efficient designs for implementing the one-dimensional (1D) discrete Fourier transform (DFT). Once the DFT is formulated in cyclic convolution form, the improved first-order moments-based cyclic convolution structure can be used as the basic computing unit for the DFT computation; it contains only a control module, a barrel shifter, and (N-1)/2 accumulation units. After decomposing and reordering the twiddle factors, all that remains is to shift the input data sequence and accumulate the results under the control of the statistics gathered on the twiddle factors. The whole calculation process contains only shift operations and additions, with no need for multipliers or large memory. Compared with the previous first-order moments-based structure for the DFT, the proposed designs have the advantages of lower hardware consumption, lower power consumption, and the flexibility to achieve better performance in certain cases. A series of experiments has proven the high performance of the proposed designs in terms of the area-time product and power consumption. Similarly efficient designs can be obtained for other computations, such as the DCT/IDCT, DST/IDST, digital filters, and correlation, by transforming them into first-order moments-based cyclic convolution form.
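The abstract's starting point, that a DFT can be recast as a cyclic convolution, can be illustrated with Rader's classical identity for prime lengths. This is a generic numpy sketch of that formulation, not the paper's first-order moments architecture, which replaces the multiplications below with shifts and accumulations.

```python
import numpy as np

def dft_prime_rader(x):
    """DFT of prime length N computed via one cyclic convolution of length N-1
    (Rader's identity): for k = g^{-m}, X[k] = x[0] + (a * b)[m], where
    a[q] = x[g^q] and b[r] = exp(-2j*pi*g^{-r}/N)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # find a generator g of the multiplicative group mod N (N assumed prime)
    def order(c):
        v, k = c % N, 1
        while v != 1:
            v = v * c % N
            k += 1
        return k
    g = next(c for c in range(2, N) if order(c) == N - 1)
    a = np.array([x[pow(g, q, N)] for q in range(N - 1)])   # permuted input
    b = np.exp(-2j * np.pi *
               np.array([pow(g, -r, N) for r in range(N - 1)]) / N)
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))       # cyclic convolution
    X = np.empty(N, dtype=complex)
    X[0] = x.sum()
    for m in range(N - 1):
        X[pow(g, -m, N)] = x[0] + conv[m]
    return X
```

The cyclic convolution is evaluated here with FFTs for brevity; any cyclic-convolution unit (such as the moments-based one in the paper) could take its place.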
Vehicle detection based on visual saliency and deep sparse convolution hierarchical model
NASA Astrophysics Data System (ADS)
Cai, Yingfeng; Wang, Hai; Chen, Xiaobo; Gao, Li; Chen, Long
2016-07-01
Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted classifier training for vehicle candidate verification. These methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, which outperforms existing state-of-the-art algorithms. More importantly, the deep sparse convolution network generates highly discriminative multi-scale features, which have broad application prospects for target recognition in the field of intelligent vehicles.
Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.
Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian
2016-10-01
In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches. PMID:26660697
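The closed-form inference the abstract mentions comes from the CRF energy being quadratic in the depths. Below is a toy sketch under that assumption: unary terms pull each depth toward an observation, pairwise terms smooth neighbours, and the MAP estimate is one linear solve. The 4-pixel chain and hand-set weights are illustrative; in the paper both unary and pairwise potentials are learned by a CNN.

```python
import numpy as np

def crf_map_depth(z, weights, lam=1.0):
    """Closed-form MAP of a Gaussian CRF with energy
    sum_p (y_p - z_p)^2 + lam * sum_{(p,q)} w_pq (y_p - y_q)^2.
    The minimizer solves (I + lam*L) y = z, L the weighted graph Laplacian."""
    n = len(z)
    L = np.zeros((n, n))
    for (p, q), w in weights.items():
        L[p, p] += w; L[q, q] += w
        L[p, q] -= w; L[q, p] -= w
    return np.linalg.solve(np.eye(n) + lam * L, z)

# 4-pixel chain: the noisy middle depth (3.0) is smoothed toward its neighbours
z = np.array([1.0, 3.0, 1.0, 1.0])
y = crf_map_depth(z, {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}, lam=2.0)
```

Because the system matrix is symmetric positive definite, the solve always succeeds and the result stays within the range of the observations, which is why no iterative inference is needed at test time.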
Seeing lens imaging as a superposition of multiple views
NASA Astrophysics Data System (ADS)
Grusche, Sascha
2016-01-01
In the conventional approach to lens imaging, rays are used to map object points to image points. However, many students want to think of the image as a whole. To answer this need, Kepler’s ray drawing is reinterpreted in terms of shifted camera obscura images. These images are uncovered by covering the lens with pinholes. Thus, lens imaging is seen as a superposition of sharp images from different viewpoints, so-called elemental images. This superposition is simulated with projectors, and with transparencies. Lens ray diagrams are constructed based on elemental images; the conventional construction method is included as a special case.
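The superposition-of-elemental-images picture can be sketched numerically: each pinhole contributes a shifted copy of the object, and the lens image is their average. The 1-D intensity arrays and shift values below are illustrative assumptions (zero shifts model the in-focus plane, where all elemental images coincide and the image is sharp).

```python
def superpose(obj, shifts):
    """Average of elemental images: each pinhole produces a copy of `obj`
    shifted by its entry in `shifts`; the lens image is their superposition."""
    lo = min(shifts)
    img = [0.0] * (len(obj) + max(shifts) - lo)
    for s in shifts:
        for i, v in enumerate(obj):
            img[i + s - lo] += v / len(shifts)   # each pinhole contributes equally
    return img

obj = [0, 0, 1, 0, 0]                  # a single bright object point
in_focus = superpose(obj, [0, 0, 0])   # coincident elemental images: sharp
defocused = superpose(obj, [-1, 0, 1]) # shifted elemental images: blur
```

In focus, the averaged copies reproduce the object; out of focus, the point spreads over as many pixels as there are distinct shifts, which is exactly the classroom demonstration with covered-lens pinholes.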
Entanglement and discord of the superposition of Greenberger-Horne-Zeilinger states
Parashar, Preeti; Rana, Swapan
2011-03-15
We calculate the analytic expression for geometric measure of entanglement for arbitrary superposition of two N-qubit canonical orthonormal Greenberger-Horne-Zeilinger (GHZ) states and the same for two W states. In the course of characterizing all kinds of nonclassical correlations, an explicit formula for quantum discord (via relative entropy) for the former class of states has been presented. Contrary to the GHZ state, the closest separable state to the W state is not classical. Therefore, in this case, the discord is different from the relative entropy of entanglement. We conjecture that the discord for the N-qubit W state is log₂N.
Convolutional Sparse Coding for Trajectory Reconstruction.
Zhu, Yingying; Lucey, Simon
2015-03-01
Trajectory-basis Non-Rigid Structure from Motion (NRSfM) refers to the process of reconstructing the 3D trajectory of each point of a non-rigid object from just their 2D projected trajectories. Reconstruction relies on two factors: (i) the condition of the composed camera and trajectory basis matrix, and (ii) whether the trajectory basis has enough degrees of freedom to model the 3D point trajectory. These two factors are inherently conflicting. Employing a trajectory basis with small capacity has the positive characteristic of reducing the likelihood of an ill-conditioned system (when composed with the camera) during reconstruction. However, this has the negative characteristic of increasing the likelihood that the basis will not be able to fully model the object's "true" 3D point trajectories. In this paper we draw upon a well-known result centering around the Restricted Isometry Property (RIP) condition for sparse signal reconstruction. RIP allows us to relax the requirement that the full trajectory basis composed with the camera matrix must be well conditioned. Further, we propose a strategy for learning an over-complete basis using convolutional sparse coding from naturally occurring point trajectory corpora to increase the likelihood that the RIP condition holds for a broad class of point trajectories and camera motions. Finally, we propose an l1-inspired objective for trajectory reconstruction that is able to "adaptively" select the smallest sub-matrix from an over-complete trajectory basis that balances (i) and (ii). We present more practical 3D reconstruction results compared to the current state of the art in trajectory-basis NRSfM.
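The l1-style reconstruction objective can be illustrated with a generic iterative soft-thresholding (ISTA) solver on a synthetic over-complete basis. The basis, dimensions, sparsity, and regularization weight below are illustrative assumptions, not the learned convolutional trajectory basis or the paper's exact objective.

```python
import numpy as np

def ista(A, b, lam=0.1, step=None, iters=500):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by iterative
    soft-thresholding (ISTA), a standard l1 sparse-coding solver."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - b)                  # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return x

# Over-complete basis (20 atoms for 10-dim observations); recover a 2-sparse code
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 20))
x_true = np.zeros(20); x_true[[3, 11]] = [1.5, -2.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.05)
```

The point of the sketch is the RIP intuition from the abstract: although A has more columns than rows, a sufficiently sparse code is still recoverable from the l1-regularized objective.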
Noise-enhanced convolutional neural networks.
Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart
2016-06-01
Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives.
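A minimal sketch of the mechanism, noise injected into the output neurons during training, using softmax regression on toy blobs. Note that the paper's NEM positivity condition for selecting *beneficial* noise is not implemented here, so this shows only where the noise enters, not the speed-up itself; all data and constants are illustrative assumptions.

```python
import numpy as np

def train_softmax_with_output_noise(X, y, classes, lr=0.5, sigma=0.1,
                                    epochs=200, seed=0):
    """Softmax training where zero-mean Gaussian noise is added to the output
    activations at each step (a stand-in for output-layer noise injection)."""
    rng = np.random.default_rng(seed)
    W = np.zeros((X.shape[1], classes))
    Y = np.eye(classes)[y]
    for _ in range(epochs):
        logits = X @ W + sigma * rng.standard_normal((len(X), classes))
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (p - Y) / len(X)     # noisy cross-entropy gradient
    return W

# Two well-separated 2-D blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (30, 2)), rng.normal(2, 0.5, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
W = train_softmax_with_output_noise(X, y, classes=2)
acc = ((X @ W).argmax(axis=1) == y).mean()
```

In the paper, the sign and placement of the noise relative to a separating hyperplane in noise space decide whether training speeds up or slows down; here the noise is unconditioned, so it merely perturbs an otherwise convergent run.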
Colonoscopic polyp detection using convolutional neural networks
NASA Astrophysics Data System (ADS)
Park, Sun Young; Sargent, Dusty
2016-03-01
Computer aided diagnosis (CAD) systems for medical image analysis rely on accurate and efficient feature extraction methods. Regardless of which type of classifier is used, the results will be limited if the input features are not diagnostically relevant and do not properly discriminate between the different classes of images. Thus, a large amount of research has been dedicated to creating feature sets that capture the salient features that physicians are able to observe in the images. Successful feature extraction reduces the semantic gap between the physician's interpretation and the computer representation of images, and helps to reduce the variability in diagnosis between physicians. Due to the complexity of many medical image classification tasks, feature extraction for each problem often requires domainspecific knowledge and a carefully constructed feature set for the specific type of images being classified. In this paper, we describe a method for automatic diagnostic feature extraction from colonoscopy images that may have general application and require a lower level of domain-specific knowledge. The work in this paper expands on our previous CAD algorithm for detecting polyps in colonoscopy video. In that work, we applied an eigenimage model to extract features representing polyps, normal tissue, diverticula, etc. from colonoscopy videos taken from various viewing angles and imaging conditions. Classification was performed using a conditional random field (CRF) model that accounted for the spatial and temporal adjacency relationships present in colonoscopy video. In this paper, we replace the eigenimage feature descriptor with features extracted from a convolutional neural network (CNN) trained to recognize the same image types in colonoscopy video. The CNN-derived features show greater invariance to viewing angles and image quality factors when compared to the eigenimage model. The CNN features are used as input to the CRF classifier as before. We report
Generation of macroscopic superposition states with small nonlinearity
Jeong, H.; Ralph, T.C.; Kim, M. S.; Ham, B.S.
2004-12-01
We suggest a scheme to generate a macroscopic superposition state ('Schroedinger cat state') of a free-propagating optical field using a beam splitter, homodyne measurement, and a very small Kerr nonlinear effect. Our scheme makes it possible to reduce considerably the required nonlinear effect to generate an optical cat state using simple and efficient optical elements.
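A small check of the target state's bookkeeping: building an even cat state |α⟩ + |-α⟩ in a truncated Fock basis (real α assumed) and confirming its norm against the analytic value sqrt(2(1 + e^{-2|α|²})). This verifies the state itself, not the beam-splitter/homodyne/Kerr generation scheme from the paper.

```python
import math

def coherent_coeffs(alpha, nmax=40):
    """Fock-basis coefficients of a coherent state |alpha>, real alpha."""
    return [math.exp(-abs(alpha) ** 2 / 2) * alpha ** n /
            math.sqrt(math.factorial(n)) for n in range(nmax)]

def even_cat_norm(alpha, nmax=40):
    """Norm of the unnormalized even cat state |alpha> + |-alpha>;
    analytically sqrt(2 * (1 + exp(-2*|alpha|^2)))."""
    c = [a + b for a, b in zip(coherent_coeffs(alpha, nmax),
                               coherent_coeffs(-alpha, nmax))]
    return math.sqrt(sum(x ** 2 for x in c))   # odd-n coefficients cancel
```

The non-unit norm reflects the overlap ⟨α|-α⟩ = e^{-2|α|²}, which is exactly why small Kerr nonlinearities suffice only for modest α and why schemes like this one aim to relax the nonlinearity requirement.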
Carrasco, P.; Jornet, N.; Duch, M. A.; Panettieri, V.; Weber, L.; Eudaldo, T.; Ginjaume, M.; Ribas, M.
2007-08-15
To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10x10, 5x5, and 2x2 cm²) were studied in two phantom configurations with a bone-equivalent material; these two configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2x2 cm² field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV, where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity by 3%-5%. The rebuildup at the bone-water interface and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values
ASIC-based architecture for the real-time computation of 2D convolution with large kernel size
NASA Astrophysics Data System (ADS)
Shao, Rui; Zhong, Sheng; Yan, Luxin
2015-12-01
Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-large kernels. To improve the efficiency of on-chip storage resources and to reduce the required off-chip bandwidth, a data-reuse cache is constructed: multi-block SPRAM buffers image rows across blocks, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation. On this basis, a new ASIC data-scheduling scheme and overall architecture are designed. Experimental results show that the architecture can perform real-time convolution with kernels up to 40x32, improves the utilization of on-chip memory bandwidth and on-chip memory resources, satisfies the conditions to maximize data throughput, and reduces the need for off-chip memory bandwidth.
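The row-reuse pattern that SPRAM line buffers implement can be sketched in software: stream the image one row at a time and keep only as many rows "on-chip" as the kernel is tall. A minimal Python model of that dataflow (kernel applied without flipping, as is common in hardware pipelines; the toy image and kernel are illustrative):

```python
from collections import deque

def conv2d_line_buffered(image, kernel):
    """'Valid' 2-D convolution over a row stream. Only kh rows are resident
    at any time (the line buffer), mimicking the on-chip data-reuse cache."""
    kh, kw = len(kernel), len(kernel[0])
    line_buffer = deque(maxlen=kh)        # oldest row is evicted automatically
    out = []
    for row in image:                     # rows arrive one by one
        line_buffer.append(row)
        if len(line_buffer) == kh:        # a full kernel-height window is cached
            out.append([
                sum(line_buffer[i][c + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
                for c in range(len(row) - kw + 1)
            ])
    return out

img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
k = [[1, 0], [0, -1]]                     # 2x2 kernel
res = conv2d_line_buffered(img, k)
```

Each input row is read from "off-chip" exactly once but participates in up to kh output rows, which is the bandwidth saving the architecture exploits.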
NASA Astrophysics Data System (ADS)
Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.
2016-03-01
Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms that are currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC), and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity-modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP, and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.
Dose-calculation algorithms in the context of inhomogeneity corrections for high energy photon beams
Papanikolaou, Niko; Stathakis, Sotirios
2009-10-15
Radiation therapy has witnessed a plethora of innovations and developments in the past 15 years. Since the introduction of computed tomography for treatment planning there has been a steady introduction of new methods to refine treatment delivery. Imaging continues to be an integral part of the planning, but also the delivery, of modern radiotherapy. However, all the efforts of image guided radiotherapy, intensity-modulated planning and delivery, adaptive radiotherapy, and everything else that we pride ourselves in having in the armamentarium can fall short, unless there is an accurate dose-calculation algorithm. The agreement between the calculated and delivered doses is of great significance in radiation therapy since the accuracy of the absorbed dose as prescribed determines the clinical outcome. Dose-calculation algorithms have evolved greatly over the years in an effort to be more inclusive of the effects that govern the true radiation transport through the human body. In this Vision 20/20 paper, we look back to see how it all started and where things are now in terms of dose algorithms for photon beams and the inclusion of tissue heterogeneities. Convolution-superposition algorithms have dominated the treatment planning industry for the past few years. Monte Carlo techniques have an inherent accuracy that is superior to any other algorithm and as such will continue to be the gold standard, along with measurements, and maybe one day will be the algorithm of choice for all particle treatment planning in radiation therapy.
Modelling ocean carbon cycle with a nonlinear convolution model
NASA Astrophysics Data System (ADS)
Kheshgi, Haroon S.; White, Benjamin S.
1996-02-01
A nonlinear convolution integral is developed to model the response of the ocean carbon sink to changes in the atmospheric concentration of CO2. This model can accurately represent the atmospheric response of complex ocean carbon cycle models in which the nonlinear behavior stems from the nonlinear dependence of CO2 solubility in seawater on CO2 partial pressure, which is often represented by the buffer factor. The kernel of the nonlinear convolution model can be constructed from a response of such a complex model to an arbitrary change in CO2 emissions, along with the functional dependence of the buffer factor. Once the convolution kernel has been constructed, either analytically or from a model experiment, the convolution representation can be used to estimate responses of the ocean carbon sink to other changes in the atmospheric concentration of CO2. Thus the method can be used, e.g., to explore alternative emissions scenarios for assessments of climate change. A derivation for the nonlinear convolution integral model is given, and the model is used to reproduce the response of two carbon cycle models: a one-dimensional diffusive ocean model, and a three-dimensional ocean-general-circulation tracer model.
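An explicit discretization of such a nonlinear convolution integral can be sketched as follows. The kernel `K` and nonlinearity `f` below are illustrative stand-ins, not the paper's calibrated ocean response or buffer factor; with `f = 1` the scheme reduces to an ordinary linear convolution, which makes it easy to check.

```python
def nonlinear_convolution(e, K, f, dt=1.0):
    """Explicit discretization of c(t) = sum_{s<=t} K(t-s) f(c(s-1)) e(s) dt:
    the kernel response to each past forcing e(s) is modulated by the
    nonlinearity f evaluated at the state just before that forcing."""
    c = []
    for t in range(len(e)):
        c.append(sum(K[t - s] * f(c[s - 1] if s > 0 else 0.0) * e[s] * dt
                     for s in range(t + 1)))
    return c
```

With a saturating `f` (mimicking the decreasing solubility of CO2 at higher partial pressure), the response to sustained emissions falls below the linear prediction, which is the qualitative behavior the convolution model is built to capture.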
Evaluation of convolutional neural networks for visual recognition.
Nebauer, C
1998-01-01
Convolutional neural networks provide an efficient method to constrain the complexity of feedforward neural networks by weight sharing and restriction to local connections. This network topology has been applied in particular to image classification when sophisticated preprocessing is to be avoided and raw images are to be classified directly. In this paper two variations of convolutional networks, the neocognitron and a modification of the neocognitron, are compared with classifiers based on fully connected feedforward layers (i.e., multilayer perceptron, nearest-neighbor classifier, auto-encoding network) with respect to their visual recognition performance. Besides the original neocognitron, a modification is proposed which combines perceptron-type neurons with the localized network structure of the neocognitron. Instead of training convolutional networks by time-consuming error backpropagation, in this work a modular procedure is applied whereby layers are trained sequentially from the input to the output layer in order to recognize features of increasing complexity. For a quantitative experimental comparison with standard classifiers two very different recognition tasks have been chosen: handwritten digit recognition and face recognition. In the first example, on handwritten digit recognition, the generalization of convolutional networks is compared to that of fully connected networks; in several experiments the influence of variations in position, size, and orientation of digits is determined and the relation between training sample size and validation error is observed. In the second example, recognition of human faces is investigated under constrained and variable conditions with respect to face orientation and illumination, and the limitations of convolutional networks are discussed.
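The weight sharing and local connectivity the abstract describes can be shown with a single 1-D convolutional feature map: the same small weight vector is applied at every position, so the parameter count is independent of the input size. The edge-detector weights below are an illustrative choice, not taken from the paper.

```python
import numpy as np

def conv1d_layer(x, w, b=0.0):
    """One 1-D convolutional feature map: the SAME weights w (local, shared)
    are applied at every position of the input."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) + b for i in range(len(x) - k + 1)])

x = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0])   # a "bar" in a 1-D image
edge = np.array([1.0, -1.0])                   # 2 shared parameters total
fmap = conv1d_layer(x, edge)                   # responds at the bar's edges
```

A fully connected layer mapping this 6-pixel input to the same 5 outputs would need 30 weights; the shared kernel needs 2, which is the complexity constraint the comparison in the paper is about.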
NASA Astrophysics Data System (ADS)
Mercan, Kadir; Demir, Çiǧdem; Civalek, Ömer
2016-01-01
In the present manuscript, the free vibration response of circular cylindrical shells made of functionally graded material (FGM) is investigated. The method of discrete singular convolution (DSC) is used for the numerical solution of the governing equation of motion of the FGM cylindrical shell. The constitutive relations are based on Love's first-approximation shell theory. The material properties are graded in the thickness direction according to a volume-fraction power law. Frequency values are calculated for different boundary conditions and for various material and geometric parameters. In general, close agreement between the obtained results and those of other researchers has been found.
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
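For orientation, a sketch of the objects involved: a rate-1/2 systematic convolutional encoder and the syndrome obtained by re-encoding the received systematic bits. The generator G(D) = [1, 1 + D + D²] is an illustrative code, not the rate-3/4 Wyner-Ash code treated in the paper, and the error-trellis search over the syndrome is not shown, only the syndrome computation that decoding starts from.

```python
def encode_systematic(bits):
    """Rate-1/2 systematic convolutional encoder, G(D) = [1, 1 + D + D^2]:
    each output pair is (data bit, parity over a 2-bit memory)."""
    s1 = s2 = 0
    out = []
    for x in bits:
        p = x ^ s1 ^ s2          # parity from current bit and two past bits
        out.append((x, p))
        s1, s2 = x, s1           # shift the encoder memory
    return out

def syndrome(received):
    """Re-encode the received systematic bits and XOR with the received
    parity: an all-zero syndrome means no detectable error."""
    re_encoded = encode_systematic([x for x, _ in received])
    return [p ^ q for (_, p), (_, q) in zip(received, re_encoded)]
```

Syndrome decoding works on this (typically sparse) syndrome sequence rather than on the full received sequence, which is the source of the state-count savings (7 versus 64 states) reported in the abstract.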
Glaucoma detection based on deep convolutional neural network.
Xiangyu Chen; Yanwu Xu; Damon Wing Kee Wong; Tien Yin Wong; Jiang Liu
2015-08-01
Glaucoma is a chronic and irreversible eye disease, which leads to deterioration in vision and quality of life. In this paper, we develop a deep learning (DL) architecture with convolutional neural network for automated glaucoma diagnosis. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images to discriminate between glaucoma and non-glaucoma patterns for diagnostic decisions. The proposed DL architecture contains six learned layers: four convolutional layers and two fully-connected layers. Dropout and data augmentation strategies are adopted to further boost the performance of glaucoma diagnosis. Extensive experiments are performed on the ORIGA and SCES datasets. The results show area under curve (AUC) of the receiver operating characteristic curve in glaucoma detection at 0.831 and 0.887 in the two databases, much better than state-of-the-art algorithms. The method could be used for glaucoma detection. PMID:26736362
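The AUC figures quoted (0.831 and 0.887) are areas under the ROC curve, which equal the probability that a randomly drawn glaucoma case scores above a randomly drawn non-glaucoma case. A minimal sketch of that rank statistic, with made-up scores rather than the paper's data:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve as the Mann-Whitney rank statistic:
    fraction of (positive, negative) score pairs ranked correctly,
    with ties counting one half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))
```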
Modeling the reversible decoherence of mesoscopic superpositions in dissipative environments
NASA Astrophysics Data System (ADS)
Mokarzel, S. G.; Salgueiro, A. N.; Nemes, M. C.
2002-04-01
A model is presented to describe the recently proposed experiment [J. Raimond, M. Brune, and S. Haroche, Phys. Rev. Lett. 79, 1964 (1997)] in which a mesoscopic superposition of radiation states is prepared in a high-Q cavity that is coupled to a similar resonator. The dynamical coherence loss of such a state in the absence of dissipation is reversible and can be observed in principle. We show how this picture is modified due to the presence of the environmental couplings. Analytical expressions for the experimental conditional probabilities and the linear entropy are given. We conclude that the phenomenon can still be observed provided the ratio between the damping constant and the intercavity coupling does not exceed a few percent. This observation is favored for superpositions of states with a large overlap.
Tailoring quantum superpositions with linearly polarized amplitude-modulated light
Pustelny, S.; Koczwara, M.; Cincio, L.; Gawlik, W.
2011-04-15
Amplitude-modulated nonlinear magneto-optical rotation is a powerful technique that offers a possibility of controllable generation of given quantum states. In this paper, we demonstrate creation and detection of specific ground-state magnetic-sublevel superpositions in ⁸⁷Rb. By appropriate tuning of the modulation frequency and magnetic-field induction, the efficiency of a given coherence generation is controlled. The processes are analyzed versus different experimental parameters.
Harmonic superposition method for grand-canonical ensembles
NASA Astrophysics Data System (ADS)
Calvo, F.; Wales, D. J.
2015-03-01
The harmonic superposition method provides a unified framework to the equilibrium and relaxation kinetics on complex potential energy landscapes. Here we extend it to grand-canonical statistical ensembles governed by chemical potentials or chemical potential differences, by sampling energy minima corresponding to the various relevant sizes or compositions. The method is applied and validated against conventional Monte Carlo simulations for the problems of chemical equilibrium in nanoalloys and hydrogen absorption in bulk and nanoscale palladium.
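In the canonical form of the harmonic superposition approach, each sampled minimum contributes a Boltzmann weight divided by the product of its normal-mode frequencies, and occupation probabilities follow by normalization. The sketch below shows that canonical core in reduced units (symbols are mine; the paper's grand-canonical extension adds a chemical-potential term per size or composition):

```python
import math

def hsa_weights(energies, log_freq_sums, T, kB=1.0):
    """Classical harmonic superposition occupation probabilities.
    energies[a]: energy of minimum a; log_freq_sums[a]: sum_i ln(omega_ai).
    Each minimum contributes Z_a proportional to exp(-E_a/kBT) / prod_i omega_ai;
    log-sum-exp keeps the normalization numerically stable."""
    logz = [-e / (kB * T) - s for e, s in zip(energies, log_freq_sums)]
    m = max(logz)
    w = [math.exp(l - m) for l in logz]
    tot = sum(w)
    return [x / tot for x in w]
```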
Quantum Superposition, Collapse, and the Default Specification Principle
NASA Astrophysics Data System (ADS)
Nikkhah Shirazi, Armin
2014-03-01
Quantum Superposition and collapse lie at the heart of the difficulty in understanding what quantum mechanics is exactly telling us about reality. We present here a principle which permits one to formulate a simple and general mathematical model that abstracts these features out of quantum theory. A precise formulation of this principle in terms of a set-theoretic axiom added to standard set theory may directly connect the foundations of physics to the foundations of mathematics.
Macroscopic superposition of ultracold atoms with orbital degrees of freedom
Garcia-March, M. A.; Carr, L. D.; Dounas-Frazer, D. R.
2011-04-15
We introduce higher dimensions into the problem of Bose-Einstein condensates in a double-well potential, taking into account orbital angular momentum. We completely characterize the eigenstates of this system, delineating new regimes via both analytical high-order perturbation theory and numerical exact diagonalization. Among these regimes are mixed Josephson- and Fock-like behavior, crossings in both excited and ground states, and shadows of macroscopic superposition states.
Sensing Super-position: Visual Instrument Sensor Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2006-01-01
The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution by providing an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an
Measurement-Induced Macroscopic Superposition States in Cavity Optomechanics
NASA Astrophysics Data System (ADS)
Hoff, Ulrich B.; Kollath-Bönig, Johann; Neergaard-Nielsen, Jonas S.; Andersen, Ulrik L.
2016-09-01
A novel protocol for generating quantum superpositions of macroscopically distinct states of a bulk mechanical oscillator is proposed, compatible with existing optomechanical devices operating in the bad-cavity limit. By combining a pulsed optomechanical quantum nondemolition (QND) interaction with nonclassical optical resources and measurement-induced feedback, the need for strong single-photon coupling is avoided. We outline a three-pulse sequence of QND interactions encompassing squeezing-enhanced cooling by measurement, state preparation, and tomography.
NASA Astrophysics Data System (ADS)
Cho, Woong; Suh, Tae-Suk; Park, Jeong-Hoon; Xing, Lei; Lee, Jeong-Woo
2012-12-01
A collapsed cone convolution algorithm was applied to a treatment planning system for the calculation of dose distributions. The distribution of beam fluences was determined using a three-source model by considering the source strengths of the primary beam, the beam scattered from the primary collimators, and an extra beam scattered from extra structures in the gantry head of the radiotherapy treatment machine. The distribution of the total energy released per unit mass (TERMA) was calculated from the distribution of the fluence by considering several physical effects such as the emission of poly-energetic photon spectra, the attenuation of the beam fluence in a medium, the horn effect, the beam-softening effect, and beam transmission through collimators or multi-leaf collimators. The distribution of the doses was calculated by using the convolution of the distribution of the TERMA and the poly-energetic kernel. The distribution of the kernel was approximated to several tens of collapsed cone lines to express the energies transferred by the electrons that originated from the interactions between the photons and the medium. The implemented algorithm was validated by comparing the calculated percentage depth doses (PDDs) and dose profiles with the measured PDDs and relevant profiles. In addition, the dose distribution for an irregular-shaped radiation field was verified by comparing the calculated doses with the measured doses obtained via EDR2 film dosimetry and with the calculated doses obtained using a different treatment planning system based on the pencil beam algorithm (Eclipse, Varian, Palo Alto, USA). The majority of the calculated doses for the PDDs, the profiles, and the irregular-shaped field showed good agreement with the measured doses to within a 2% dose difference, except in the build-up regions. The implemented algorithm was proven to be efficient and accurate for clinical purposes in radiation therapy, and it was found to be easily implementable in
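The TERMA stage of such an engine is, in outline, an attenuation raytrace: the incident fluence decays through each voxel and deposits (μ/ρ)·Ψ there. A deliberately monoenergetic sketch with hypothetical coefficients (the abstract's implementation additionally handles poly-energetic spectra, the horn effect, beam softening, and collimator transmission):

```python
import math

def terma_along_ray(fluence0, mu_over_rho, mu, voxel_cm, densities):
    """Monoenergetic TERMA along one ray. The energy fluence is attenuated
    through each voxel by exp(-mu * rho * dx); TERMA in a voxel is
    (mu/rho) times the mid-voxel fluence."""
    terma = []
    fluence = fluence0
    for rho in densities:
        path = mu * rho * voxel_cm
        # mid-voxel fluence approximates the average over the voxel
        mid = fluence * math.exp(-0.5 * path)
        terma.append(mu_over_rho * mid)
        fluence *= math.exp(-path)
    return terma
```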
Single-Atom Gating of Quantum State Superpositions
Moon, Christopher
2010-04-28
The ultimate miniaturization of electronic devices will likely require local and coherent control of single electronic wavefunctions. Wavefunctions exist within both physical real space and an abstract state space with a simple geometric interpretation: this state space - or Hilbert space - is spanned by mutually orthogonal state vectors corresponding to the quantized degrees of freedom of the real-space system. Measurement of superpositions is akin to accessing the direction of a vector in Hilbert space, determining an angle of rotation equivalent to quantum phase. Here we show that an individual atom inside a designed quantum corral1 can control this angle, producing arbitrary coherent superpositions of spatial quantum states. Using scanning tunnelling microscopy and nanostructures assembled atom-by-atom we demonstrate how single spins and quantum mirages can be harnessed to image the superposition of two electronic states. We also present a straightforward method to determine the atom path enacting phase rotations between any desired state vectors. A single atom thus becomes a real-space handle for an abstract Hilbert space, providing a simple technique for coherent quantum state manipulation at the spatial limit of condensed matter.
A high-order fast method for computing convolution integral with smooth kernel
Qiang, Ji
2009-09-28
In this paper we report on a high-order fast method to numerically calculate convolution integral with smooth non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for discrete summation. The method can have an arbitrarily high-order accuracy in principle depending on the number of points used in the integral approximation and a computational cost of O(N log(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h^4), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.
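The idea can be sketched in a few lines: scale the integrand samples by composite-Simpson weights, then form the discrete convolution sum for all grid points at once with zero-padded FFTs. This is an illustrative reimplementation of the scheme, not the authors' code:

```python
import numpy as np

def fast_conv_simpson(f, g_samples, h):
    """Approximate u_i = integral of f(y) g(x_i - y) dy on a uniform grid.
    f: n samples of f (n odd, for the composite Simpson rule).
    g_samples: 2n-1 samples of g at offsets (k - (n-1))*h, k = 0..2n-2."""
    n = len(f)
    assert n % 2 == 1, "composite Simpson rule needs an odd point count"
    w = np.ones(n)
    w[1:-1:2] = 4.0   # odd interior points
    w[2:-1:2] = 2.0   # even interior points
    w *= h / 3.0
    # zero-padded FFTs give the full linear convolution in O(N log N)
    L = n + len(g_samples) - 1
    full = np.fft.irfft(np.fft.rfft(w * f, L) * np.fft.rfft(g_samples, L), L)
    return full[n - 1:2 * n - 1]   # entries aligned with x_0 .. x_{n-1}
```

For smooth integrands this inherits the O(h^4) accuracy of Simpson's rule while keeping the O(N log N) cost of FFT summation.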
Die and telescoping punch form convolutions in thin diaphragm
NASA Technical Reports Server (NTRS)
1965-01-01
Die and punch set forms convolutions in thin dished metal diaphragm without stretching the metal too thin at sharp curvatures. The die corresponds to the metal shape to be formed, and the punch consists of elements that progressively slide against one another under the restraint of a compressed-air cushion to mate with the die.
Convolutional virtual electric field for image segmentation using active contours.
Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden
2014-01-01
Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from heavy computation load. The virtual electric field (VEF) model, which can be implemented in real time using fast Fourier transform (FFT), has been proposed later as a remedy for the GVF model. In this work, we present an extension of the VEF model, which is referred to as CONvolutional Virtual Electric Field, CONVEF for short. This proposed CONVEF model takes the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as enlarged capture range, u-shape concavity convergence, subject contour convergence and initialization insensitivity, but also some other interesting properties such as G-shape concavity convergence, neighboring objects separation, and noise suppression and simultaneously weak edge preserving. Meanwhile, the CONVEF model can also be implemented in real-time by using FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images. PMID:25360586
Real-time rendering of optical effects using spatial convolution
NASA Astrophysics Data System (ADS)
Rokita, Przemyslaw
1998-03-01
Simulation of special effects such as: defocus effect, depth-of-field effect, raindrops or water film falling on the windshield, may be very useful in visual simulators and in all computer graphics applications that need realistic images of outdoor scenery. Those effects are especially important in rendering poor visibility conditions in flight and driving simulators, but can also be applied, for example, in composing computer graphics and video sequences, i.e. in Augmented Reality systems. This paper proposes a new approach to the rendering of those optical effects by iterative adaptive filtering using spatial convolution. The advantage of this solution is that the adaptive convolution can be done in real time by existing hardware. Optical effects mentioned above can be introduced into the image computed using a conventional camera model by applying to the intensity of each pixel the convolution filter having an appropriate point spread function. The algorithms described in this paper can be easily implemented in the visualization pipeline--the final effect may be obtained by iterative filtering using a single hardware convolution filter or with a pipeline composed of identical 3 x 3 filters placed as the stages of this pipeline. Another advantage of the proposed solution is that an extension based on the proposed algorithm can be added to existing rendering systems as a final stage of the visualization pipeline.
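A single stage of such a pipeline amounts to one small convolution applied repeatedly; each pass widens the effective point spread function. A sketch with a normalized 3x3 binomial kernel (the kernel choice and pass count are illustrative, not the paper's filters):

```python
import numpy as np

def blur_pass(img, k=None):
    """One 3x3 convolution pass with zero-padded borders. The default
    kernel is a normalized binomial (Gaussian-like) filter."""
    if k is None:
        k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            # accumulate the shifted, kernel-weighted copies of the image
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def defocus(img, passes=3):
    """Repeated passes emulate a wider point spread function,
    as in an iterative pipeline of identical 3x3 stages."""
    for _ in range(passes):
        img = blur_pass(img)
    return img
```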
Face recognition: a convolutional neural-network approach.
Lawrence, S; Giles, C L; Tsoi, A C; Back, A D
1997-01-01
We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
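The SOM stage quantizes local image samples so that nearby inputs map to nearby nodes. A minimal 1-D sketch of such a map (the paper uses a 2-D map over image patches; the initialization, learning schedule, and neighborhood here are simplified assumptions):

```python
import numpy as np

def som_quantize(samples, weights, epochs=30, lr0=0.5):
    """Minimal 1-D self-organizing map. For each sample the nearest node
    wins, and the winner plus its immediate neighbors move toward the
    sample; the learning rate decays linearly over the epochs."""
    w = np.array(weights, float)
    n = len(w)
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)
        for x in samples:
            win = int(np.argmin(((w - x) ** 2).sum(axis=1)))
            for j in (win - 1, win, win + 1):
                if 0 <= j < n:
                    w[j] += lr * (x - w[j])
    return w
```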
Reply to 'Comment on 'Quantum convolutional error-correcting codes''
Chau, H.F.
2005-08-15
In their Comment, de Almeida and Palazzo [Phys. Rev. A 72, 026301 (2005)] discovered an error in my earlier paper concerning the construction of quantum convolutional codes [Phys. Rev. A 58, 905 (1998)]. This error can be repaired by modifying the method of code construction.
Maximum-likelihood estimation of circle parameters via convolution.
Zelniker, Emanuel E; Clarkson, I Vaughan L
2006-04-01
The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
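The DKE mentioned above is the closed-form baseline: it linearizes the circle equation so that center and radius follow from ordinary least squares. A sketch (NumPy, illustrative variable names):

```python
import numpy as np

def kasa_fit(x, y):
    """Delogne-Kasa circle fit. A circle satisfies
    x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2),
    which is linear in (2*cx, 2*cy, c3); solve it by least squares."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    a1, a2, a3 = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = a1 / 2.0, a2 / 2.0
    r = np.sqrt(a3 + cx ** 2 + cy ** 2)
    return cx, cy, r
```

The MLE refines such an estimate; the paper's contribution is interpreting both estimators as convolutions of an idealized image with suitable kernels.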
A nonlinear convolution model for the evasion of CO2 injected into the deep ocean
NASA Astrophysics Data System (ADS)
Kheshgi, Haroon S.; Archer, David E.
2004-02-01
Deep ocean storage of CO2 captured from, for example, flue gases is being considered as a potential response option to global warming concerns. For storage to be effective, CO2 injected into the deep ocean must remain sequestered from the atmosphere for a long time. However, a fraction of CO2 injected into the deep ocean is expected to eventually evade into the atmosphere. This fraction is expected to depend on the time since injection, the location of injection, and the future atmospheric concentration of CO2. We approximate the evasion of injected CO2 at specific locations using a nonlinear convolution model including explicitly the nonlinear response of CO2 solubility to future CO2 concentration and alkalinity and Green's functions for the transport of CO2 from injection locations to the ocean surface as well as alkalinity response to seafloor CaCO3 dissolution. Green's functions are calculated from the results of a three-dimensional model for the ocean carbon cycle for impulses of CO2 either released to the atmosphere or injected at locations deep in the Pacific and Atlantic oceans. CO2 transport in the three-dimensional (3-D) model is governed by offline tracer transport in the ocean interior, exchange of CO2 with the atmosphere, and dissolution of ocean sediments. The convolution model is found to accurately approximate results of the 3-D model in test cases including both deep-ocean injection and sediment dissolution. The convolution model allows comparison of the CO2 evasion delay achieved by deep ocean injection with notional scenarios for CO2 stabilization and the time extent of the fossil fuel era.
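Stripped of the carbonate-chemistry nonlinearity, the Green's-function core of such a model is a discrete convolution of the injection history with an impulse-response curve. A sketch with hypothetical response values:

```python
def evaded_fraction(injection, response):
    """Convolve an injection history with a surface-evasion impulse
    response: response[k] is the fraction of an injected pulse reaching
    the atmosphere k steps later. This is the linear core only; the
    paper's model adds nonlinear CO2 solubility and alkalinity effects."""
    n = len(injection)
    return [sum(injection[j] * response[t - j] for j in range(t + 1))
            for t in range(n)]
```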
De-convoluting mixed crude oil in Prudhoe Bay Field, North Slope, Alaska
Peters, K.E.; Scott, Ramos L.; Zumberge, J.E.; Valin, Z.C.; Bird, K.J.
2008-01-01
Seventy-four crude oil samples from the Barrow arch on the North Slope of Alaska were studied to assess the relative volumetric contributions from different source rocks to the giant Prudhoe Bay Field. We applied alternating least squares to concentration data (ALS-C) for 46 biomarkers in the range C19-C35 to de-convolute mixtures of oil generated from carbonate rich Triassic Shublik Formation and clay rich Jurassic Kingak Shale and Cretaceous Hue Shale-gamma ray zone (Hue-GRZ) source rocks. ALS-C results for 23 oil samples from the prolific Ivishak Formation reservoir of the Prudhoe Bay Field indicate approximately equal contributions from Shublik Formation and Hue-GRZ source rocks (37% each), less from the Kingak Shale (26%), and little or no contribution from other source rocks. These results differ from published interpretations that most oil in the Prudhoe Bay Field originated from the Shublik Formation source rock. With few exceptions, the relative contribution of oil from the Shublik Formation decreases, while that from the Hue-GRZ increases in reservoirs along the Barrow arch from Point Barrow in the northwest to Point Thomson in the southeast (~250 miles or 400 km). The Shublik contribution also decreases to a lesser degree between fault blocks within the Ivishak pool from west to east across the Prudhoe Bay Field. ALS-C provides a robust means to calculate the relative amounts of two or more oil types in a mixture. Furthermore, ALS-C does not require that pure end member oils be identified prior to analysis or that laboratory mixtures of these oils be prepared to evaluate mixing. ALS-C of biomarkers reliably de-convolutes mixtures because the concentrations of compounds in mixtures vary as linear functions of the amount of each oil type. ALS of biomarker ratios (ALS-R) cannot be used to de-convolute mixtures because compound ratios vary as nonlinear functions of the amount of each oil type.
NASA Astrophysics Data System (ADS)
An, Nguyen Ba
2009-04-01
Three novel probabilistic yet conclusive schemes are proposed to teleport a general two-mode coherent-state superposition via attenuated quantum channels with ideal and/or threshold detectors. The calculated total success probability is highest (lowest) when only ideal (threshold) detectors are used.
Kan, Monica W. K.; Yu, Peter K. N.; Leung, Lucullus H. T.
2013-01-01
Deterministic linear Boltzmann transport equation (D-LBTE) solvers have recently been developed, and one of the latest available software codes, Acuros XB, has been implemented in a commercial treatment planning system for radiotherapy photon beam dose calculation. One of the major limitations of most commercially available model-based algorithms for photon dose calculation is the ability to account for the effect of electron transport. This induces some errors in patient dose calculations, especially near heterogeneous interfaces between low and high density media such as tissue/lung interfaces. D-LBTE solvers have a high potential of producing accurate dose distributions in and near heterogeneous media in the human body. Extensive previous investigations have proved that D-LBTE solvers were able to produce dose calculation accuracy comparable to Monte Carlo methods at speeds suitable for clinical use. The current paper reviews the dosimetric evaluations of D-LBTE solvers for external beam photon radiotherapy. It summarizes and discusses dosimetric validations for D-LBTE solvers in both homogeneous and heterogeneous media under different circumstances and also the clinical impact on various diseases due to the conversion of dose calculation from a conventional convolution/superposition algorithm to a recently released D-LBTE solver. PMID:24066294
Dose discrepancies in the buildup region and their impact on dose calculations for IMRT fields
Hsu, Shu-Hui; Moran, Jean M.; Chen, Yu; Kulasekere, Ravi; Roberson, Peter L.
2010-01-01
Purpose: Dose accuracy in the buildup region for radiotherapy treatment planning suffers from challenges in both measurement and calculation. This study investigates the dosimetry in the buildup region at normal and oblique incidences for open and IMRT fields and assesses the quality of the treatment planning calculations. Methods: This study was divided into three parts. First, percent depth doses and profiles (for 5×5, 10×10, 20×20, and 30×30 cm² field sizes at 0°, 45°, and 70° incidences) were measured in the buildup region in Solid Water using an Attix parallel plate chamber and Kodak XV film, respectively. Second, the parameters in the empirical contamination (EC) term of the convolution/superposition (CVSP) calculation algorithm were fitted based on open field measurements. Finally, seven segmental head-and-neck IMRT fields were measured on a flat phantom geometry and compared to calculations using γ and dose-gradient compensation (C) indices to evaluate the impact of residual discrepancies and to assess the adequacy of the contamination term for IMRT fields. Results: Local deviations between measurements and calculations for open fields were within 1% and 4% in the buildup region for normal and oblique incidences, respectively. The C index with 5%/1 mm criteria for IMRT fields ranged from 89% to 99% and from 96% to 98% at 2 mm and 10 cm depths, respectively. The quality of agreement in the buildup region for open and IMRT fields is comparable to that in nonbuildup regions. Conclusions: The added EC term in CVSP was determined to be adequate for both open and IMRT fields. Due to the dependence of calculation accuracy on (1) EC modeling, (2) internal convolution and density grid sizes, (3) implementation details in the algorithm, and (4) the accuracy of measurements used for treatment planning system commissioning, the authors recommend an evaluation of the accuracy of near-surface dose calculations as a part of treatment planning commissioning.
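The γ index used in such comparisons combines a dose-difference (DD) tolerance with a distance-to-agreement (DTA) tolerance; a point passes when γ ≤ 1. A minimal 1-D sketch using an exhaustive search over reference points (clinical tools interpolate and work in 2-D or 3-D):

```python
import math

def gamma_1d(ref, evald, dx, dd, dta):
    """1-D gamma index. For each evaluated point, minimize over all
    reference points the combined metric
    sqrt((spatial offset / DTA)^2 + (dose difference / DD)^2)."""
    out = []
    for i, de in enumerate(evald):
        g = min(math.hypot((i - j) * dx / dta, (de - dr) / dd)
                for j, dr in enumerate(ref))
        out.append(g)
    return out
```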
Entanglement of mixed macroscopic superpositions: An entangling-power study
Paternostro, M.; Kim, M. S.; Jeong, H.
2006-01-15
We investigate entanglement properties of a recently introduced class of macroscopic quantum superpositions in two-mode mixed states. One of the tools we use in order to infer the entanglement in this non-Gaussian class of states is the power to entangle a qubit system. Our study reveals features which are hidden in a standard approach to entanglement investigation based on the uncertainty principle of the quadrature variables. We briefly describe the experimental setup corresponding to our theoretical scenario and a suitable modification of the protocol which makes our proposal realizable within the current experimental capabilities.
Accelerated Superposition State Molecular Dynamics for Condensed Phase Systems.
Ceotto, Michele; Ayton, Gary S; Voth, Gregory A
2008-04-01
An extension of superposition state molecular dynamics (SSMD) [Venkatnathan and Voth J. Chem. Theory Comput. 2005, 1, 36] is presented with the goal to accelerate timescales and enable the study of "long-time" phenomena for condensed phase systems. It does not require any a priori knowledge about final and transition state configurations, or specific topologies. The system is induced to explore new configurations by virtue of a fictitious (free-particle-like) accelerating potential. The acceleration method can be applied to all degrees of freedom in the system and can be applied to condensed phases and fluids. PMID:26620930
Stern, Robin L.; Heaton, Robert; Fraser, Martin W.; and others
2011-01-15
The requirement of an independent verification of the monitor units (MU) or time calculated to deliver the prescribed dose to a patient has been a mainstay of radiation oncology quality assurance. The need for and value of such a verification was obvious when calculations were performed by hand using look-up tables, and the verification was achieved by a second person independently repeating the calculation. However, in a modern clinic using CT/MR/PET simulation, computerized 3D treatment planning, heterogeneity corrections, and complex calculation algorithms such as convolution/superposition and Monte Carlo, the purpose of and methodology for the MU verification have come into question. In addition, since the verification is often performed using a simpler geometrical model and calculation algorithm than the primary calculation, exact or almost exact agreement between the two can no longer be expected. Guidelines are needed to help the physicist set clinically reasonable action levels for agreement. This report addresses the following charges of the task group: (1) To re-evaluate the purpose and methods of the "independent second check" for monitor unit calculations for non-IMRT radiation treatment in light of the complexities of modern-day treatment planning. (2) To present recommendations on how to perform verification of monitor unit calculations in a modern clinic. (3) To provide recommendations on establishing action levels for agreement between primary calculations and verification, and to provide guidance in addressing discrepancies outside the action levels. These recommendations are to be used as guidelines only and shall not be interpreted as requirements.
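Operationally, the verification reduces to a percent-difference test against an action level. A trivial sketch (the 5% default is purely illustrative; the report's guidance is to set action levels per technique, geometry, and algorithm pair):

```python
def mu_check(primary_mu, verification_mu, action_pct=5.0):
    """Compare a verification MU calculation against the primary one.
    Returns the percent difference (relative to the primary) and whether
    it falls within the chosen action level."""
    diff_pct = 100.0 * abs(primary_mu - verification_mu) / primary_mu
    return diff_pct, diff_pct <= action_pct
```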
NASA Astrophysics Data System (ADS)
Matsuo, Miyuki; Yokoyama, Misao; Umemura, Kenji; Gril, Joseph; Yano, Ken'ichiro; Kawai, Shuichi
2010-04-01
This paper deals with the kinetics of the color properties of hinoki (Chamaecyparis obtusa Endl.) wood. Specimens cut from the wood were heated at 90-180°C as an accelerated aging treatment. Because the specimens were completely dried and heated in the presence of oxygen, the effects of thermal oxidation on wood color change could be evaluated. Color properties measured by a spectrophotometer showed similar behavior irrespective of the treatment temperature, each on its own time scale. Kinetic analysis using the time-temperature superposition principle, which uses the whole data set, was successfully applied to the color changes. The calculated values of the apparent activation energy in terms of L*, a*, b*, and ΔE*_ab were 117, 95, 114, and 113 kJ/mol, respectively, similar to literature values obtained for other properties of wood, such as its physical and mechanical properties.
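The time-temperature superposition applied here rests on an Arrhenius shift factor: data taken at temperature T are mapped onto a master curve at a reference temperature by rescaling time. A minimal sketch, assuming the simple Arrhenius form (the function name and the 180°C-to-90°C example are illustrative, not from the paper), using the reported ΔE*_ab activation energy of 113 kJ/mol:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_shift(T, T_ref, Ea):
    """Time-temperature shift factor a_T: multiplying time at T by a_T
    maps it onto the master curve at the reference temperature T_ref."""
    return math.exp((Ea / R) * (1.0 / T_ref - 1.0 / T))

# Illustrative example: color-change data at 180 C (453.15 K) mapped to a
# 90 C (363.15 K) reference, with Ea = 113 kJ/mol (the Delta E*ab value).
Ea = 113e3
a = arrhenius_shift(T=453.15, T_ref=363.15, Ea=Ea)
# a >> 1: a short treatment at 180 C corresponds to a much longer one at 90 C
```

The same shift applied at the reference temperature itself gives a factor of exactly 1, which is a quick sanity check on the sign convention.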
Limitations to the validity of single wake superposition in wind farm yield assessment
NASA Astrophysics Data System (ADS)
Gunn, K.; Stock-Williams, C.; Burke, M.; Willden, R.; Vogel, C.; Hunter, W.; Stallard, T.; Robinson, N.; Schmidt, S. R.
2016-09-01
Commercially available wind yield assessment models rely on superposition of wakes calculated for isolated single turbines. These methods of wake simulation fail to account for emergent flow physics that may affect the behaviour of multiple turbines and their wakes and therefore wind farm yield predictions. In this paper wake-wake interaction is modelled computationally (CFD) and physically (in a hydraulic flume) to investigate physical causes of discrepancies between analytical modelling and simulations or measurements. Three effects, currently neglected in commercial models, are identified as being of importance: 1) when turbines are directly aligned, the combined wake is shortened relative to the single turbine wake; 2) when wakes are adjacent, each will be lengthened due to reduced mixing; and 3) the pressure field of downstream turbines can move and modify wakes flowing close to them.
NASA Astrophysics Data System (ADS)
Galatola, P.
2016-02-01
By means of a perturbative scheme, we determine analytically the capillary energy of a spheroidal colloid floating on a deformed fluid interface in terms of the local curvature tensor of the background deformation. We validate our results, which hold for small ellipticity of the particle and small deformations of the surface, by an exact numerical calculation. As an application of our perturbative approach, we determine the asymptotic interaction, for large separations d, between two different spheroidal particles. The dominant contribution is quadrupolar and proportional to d^-4. It coincides with the known superposition approximation and is zero if one of the two particles is spherical. The next-to-leading approximation, proportional to d^-8, is always attractive and independent of the orientation of the two colloids. It is the dominant contribution to the interaction between a spheroidal and a spherical colloid.
Carrasco, P.; Jornet, N.; Duch, M.A.; Weber, L.; Ginjaume, M.; Eudaldo, T.; Jurado, D.; Ruiz, A.; Ribas, M.
2004-10-01
An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung equivalent heterogeneity, by means of several detectors, and compared against the dose values predicted by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane-parallel and cylindrical ionization chambers, and beam profiles with films. Additionally, Monte Carlo simulations by means of the PENELOPE code were performed. Four different field sizes (10x10, 5x5, 2x2, and 1x1 cm²) and two lung equivalent materials (CIRS, ρ_e^w = 0.195, and St. Bartholomew Hospital, London, ρ_e^w = 0.244-0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system, and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that correlated successfully with the measured values within a 2% average inside all media were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose on the beam axis was found for the EqTAR algorithm inside the CIRS lung equivalent material in a 2x2 cm² 18 MV x-ray beam. In these conditions, the average and maximum differences against the TLD measurements were 32% and 39%, respectively. In the water equivalent part of the phantom, every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces, where differences up to 24% were found for 2x2 cm² 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo
Superposition states for quantum nanoelectronic circuits and their nonclassical properties
NASA Astrophysics Data System (ADS)
Choi, Jeong Ryeol
2016-09-01
Quantum properties of a superposition state for a series RLC nanoelectronic circuit are investigated. Two displaced number states of the same amplitude but with opposite phases are considered as components of the superposition state. We have assumed that the capacitance of the system varies with time and that a time-dependent power source is exerted on the system. The effects of displacement and of a sinusoidal power source on the characteristics of the state are addressed in detail. Depending on the magnitude of the sinusoidal power source, the wave packets that propagate in charge (q) space are more or less distorted. Provided that the displacement is sufficiently large, distinct interference structures appear in the plot of the time behavior of the probability density whenever the two components of the wave packet meet. This is strong evidence of nonclassical properties in the system that cannot be interpreted by classical theory. Nonclassicality of a quantum system is not only a topic of academic interest in itself; its results can also be useful resources for quantum information and computation.
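The interference signature described above can be illustrated with a minimal sketch: an unnormalized superposition of two Gaussian wave packets displaced to ±q0 is compared against the corresponding incoherent statistical mixture. The grid, displacement, and Gaussian form are arbitrary illustrative choices, not the paper's displaced-number-state wavefunctions:

```python
import numpy as np

x = np.linspace(-8, 8, 2001)  # charge-like coordinate grid
q0 = 2.0                      # displacement of each component

# Ground-state-like Gaussians displaced to +q0 and -q0 (unnormalized)
phi_plus = np.exp(-0.5 * (x - q0) ** 2)
phi_minus = np.exp(-0.5 * (x + q0) ** 2)

coherent = np.abs(phi_plus + phi_minus) ** 2              # superposition state
mixture = np.abs(phi_plus) ** 2 + np.abs(phi_minus) ** 2  # statistical mixture

# In the overlap region the two densities differ: the cross term
# 2*phi_plus*phi_minus is the interference structure; far from the
# overlap both expressions agree.
i0 = len(x) // 2  # x = 0, midway between the two packets
```

At the midpoint the coherent density is exactly twice the mixture density for this even superposition, which is the kind of structure that distinguishes a quantum superposition from a classical ensemble.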
Experiments testing macroscopic quantum superpositions must be slow
Mari, Andrea; De Palma, Giacomo; Giovannetti, Vittorio
2016-01-01
We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires accelerating the charge, which can entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides indirect evidence for the existence of gravitational vacuum fluctuations and for the possibility of entangling a particle with quantum gravitational radiation. PMID:26959656
Superposition of Stochastic Processes and the Resulting Particle Distributions
NASA Astrophysics Data System (ADS)
Schwadron, N. A.; Dayeh, M. A.; Desai, M.; Fahr, H.; Jokipii, J. R.; Lee, M. A.
2010-04-01
Many observations of suprathermal and energetic particles in the solar wind and the inner heliosheath show that distribution functions scale approximately with the inverse of particle speed (v) to the fifth power. Although there are exceptions to this behavior, there is a growing need to understand why this type of distribution function appears so frequently. This paper develops the concept that a superposition of exponential and Gaussian distributions with different characteristic speeds and temperatures shows power-law tails. The particular type of distribution function, f ∝ v^-5, appears in a number of different ways: (1) a series of Poisson-like processes where entropy is maximized with the rates of individual processes inversely proportional to the characteristic exponential speed, (2) a series of Gaussian distributions where the entropy is maximized with the rates of individual processes inversely proportional to temperature and the density of individual Gaussian distributions proportional to temperature, and (3) a series of different diffusively accelerated energetic particle spectra with individual spectra derived from observations (1997-2002) of a multiplicity of different shocks. Thus, we develop a proof-of-concept for the superposition of stochastic processes that give rise to power-law distribution functions.
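The superposition idea can be checked numerically. As a sketch (not the paper's maximum-entropy derivation), weight a family of exponential distributions exp(-v/v0) by w(v0) ∝ v0^-6, a choice made here because the resulting superposition integral ∫ w(v0) exp(-v/v0) dv0 is analytically proportional to v^-5; the log-log slope of the summed distribution then sits near -5 over the range well covered by the v0 grid:

```python
import numpy as np

v = np.logspace(0, 2, 200)        # particle speed grid (arbitrary units)
v0 = np.logspace(-0.5, 1.5, 400)  # characteristic speeds of the processes

# Illustrative weighting w(v0) ~ v0^-6 (an assumption for this sketch),
# for which the continuum superposition gives f(v) ~ v^-5 exactly.
w = v0 ** -6.0
dv0 = np.gradient(v0)             # integration measure on the log-spaced grid
f_super = ((w * dv0)[:, None] * np.exp(-v[None, :] / v0[:, None])).sum(axis=0)

# Local log-log slope of the superposed distribution
slope = np.gradient(np.log(f_super), np.log(v))
idx = np.argmin(np.abs(v - 10.0)) # probe in the power-law range
```

No single exponential component has a power-law tail; the v^-5 range is an emergent property of the superposition, which is the paper's proof-of-concept in miniature.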
Evolution of superpositions of quantum states through a level crossing
Torosov, B. T.; Vitanov, N. V.
2011-12-15
The Landau-Zener-Stueckelberg-Majorana (LZSM) model is widely used for estimating transition probabilities in the presence of crossing energy levels in quantum physics. This model, however, makes the unphysical assumption of an infinitely long constant interaction, which introduces a divergent phase in the propagator. This divergence remains hidden when estimating output probabilities for a single input state insofar as the divergent phase cancels out. In this paper we show that, because of this divergent phase, the LZSM model is inadequate to describe the evolution of pure or mixed superposition states across a level crossing. The LZSM model can be used only if the system is initially in a single state or in a completely mixed superposition state. To this end, we show that the more realistic Demkov-Kunike model, which assumes a hyperbolic-tangent level crossing and a hyperbolic-secant interaction envelope, is free of divergences and is a much more adequate tool for describing the evolution through a level crossing for an arbitrary input state. For multiple crossing energies which are reducible to one or more effective two-state systems (e.g., by the Majorana and Morris-Shore decompositions), similar conclusions apply: the LZSM model does not produce definite values of the populations and the coherences, and one should use the Demkov-Kunike model instead.
Time-Temperature Superposition Applied to PBX Mechanical Properties
NASA Astrophysics Data System (ADS)
Thompson, Darla; Deluca, Racci
2011-06-01
The use of plastic-bonded explosives (PBXs) in weapon applications requires a certain level of structural/mechanical integrity. Uniaxial tension and compression experiments characterize the mechanical response of materials over a wide range of temperatures and strain rates, providing the basis for predictive modeling in more complex geometries. After years of data collection on a wide variety of PBX formulations, we have applied time-temperature superposition principles to a mechanical properties database which includes PBX 9501, PBX 9502, PBXN-110, PBXN-9, and HPP (propellant). The results of quasi-static tension and compression, SHPB compression, and cantilever DMA are compared. Time-temperature relationships of maximum stress and corresponding strain values are analyzed in addition to the more conventional analysis of modulus. Our analysis shows adherence to the principles of time-temperature superposition and correlations of mechanical response to the binder glass transition and specimen density. Direct ties relate time-temperature analysis to the underlying basis of existing PBX mechanical models (ViscoSCRAM). Results suggest that, within limits, mechanical response can be predicted at conditions not explicitly measured. LA-UR 11-01096.
Time-temperature superposition applied to PBX mechanical properties
NASA Astrophysics Data System (ADS)
Thompson, Darla; DeLuca, Racci; Wright, Walter J.
2012-03-01
The use of plastic-bonded explosives (PBXs) in weapon applications requires that they possess and maintain a level of structural/mechanical integrity. Uniaxial tension and compression experiments are typically used to characterize the mechanical response of materials over a wide range of temperatures and strain rates, providing the basis for predictive modeling in more complex geometries. After many years of data collection on a variety of PBX formulations, we have here applied the principles of time-temperature superposition to a mechanical properties database which includes PBX 9501, PBX 9502, PBXN-110, PBXN-9, and HPP (propellant). Consistencies are demonstrated between the results of quasi-static tension and compression, dynamic Split-Hopkinson Pressure Bar (SHPB) compression, and cantilever Dynamic Mechanical Analysis (DMA). Time-temperature relationships of maximum stress and corresponding strain values are analyzed, in addition to the more conventional analysis of modulus. The extensive analysis shows adherence to the principles of time-temperature superposition and correlations of mechanical response to binder glass-transition temperature (Tg) and specimen density. Direct ties exist between the time-temperature analysis and the underlying basis of a useful existing PBX mechanical model (ViscoSCRAM). Results give confidence that, with some limitations, mechanical response can be predicted at conditions not explicitly measured.
Modeling scattering from azimuthally symmetric bathymetric features using wavefield superposition.
Fawcett, John A
2007-12-01
In this paper, an approach for modeling the scattering from azimuthally symmetric bathymetric features is described. These features are useful models for small mounds and indentations on the seafloor at high frequencies, and for seamounts, shoals, and basins at low frequencies. A bathymetric feature can be considered as a compact closed region with the same sound speed and density as one of the surrounding media. Using this approach, a number of numerical methods appropriate for a partially buried target or facet problem can be applied. This paper considers the use of wavefield superposition and, because of the azimuthal symmetry, the three-dimensional solution to the scattering problem can be expressed as a Fourier sum of solutions to a set of two-dimensional scattering problems. In the case where the surrounding two half spaces have only a density contrast, a semianalytic coupled mode solution is derived. This provides a benchmark solution to scattering from a class of penetrable hemispherical bosses or indentations. The details and problems of the numerical implementation of the wavefield superposition method are described. Example computations using the method for a simple scattering feature on a seabed are presented for a wide band of frequencies.
Free Nano-Object Ramsey Interferometry for Large Quantum Superpositions
NASA Astrophysics Data System (ADS)
Wan, C.; Scala, M.; Morley, G. W.; Rahman, ATM. A.; Ulbricht, H.; Bateman, J.; Barker, P. F.; Bose, S.; Kim, M. S.
2016-09-01
We propose an interferometric scheme based on an untrapped nano-object subjected to gravity. The motion of the center of mass (c.m.) of the free object is coupled to its internal spin system magnetically, and a free flight scheme is developed based on coherent spin control. The wave packet of the test object, under a spin-dependent force, may then be delocalized to a macroscopic scale. A gravity induced dynamical phase (accrued solely on the spin state, and measured through a Ramsey scheme) is used to reveal the above spatially delocalized superposition of the spin-nano-object composite system that arises during our scheme. We find a remarkable immunity to the motional noise in the c.m. (initially in a thermal state with moderate cooling), and also a dynamical decoupling nature of the scheme itself. Together they secure a high visibility of the resulting Ramsey fringes. The mass independence of our scheme makes it viable for a nano-object selected from an ensemble with a high mass variability. Given these advantages, a quantum superposition with a 100 nm spatial separation for a massive object of 10^9 amu is achievable experimentally, providing a route to test postulated modifications of quantum theory such as continuous spontaneous localization.
Spectral density of generalized Wishart matrices and free multiplicative convolution
NASA Astrophysics Data System (ADS)
Młotkowski, Wojciech; Nowak, Maciej A.; Penson, Karol A.; Życzkowski, Karol
2015-07-01
We investigate the level density for several ensembles of positive random matrices of a Wishart-like structure, W = XX†, where X stands for a non-Hermitian random matrix. In particular, making use of the Cauchy transform, we study the free multiplicative powers of the Marchenko-Pastur (MP) distribution, MP^⊠s, which for an integer s yield Fuss-Catalan distributions corresponding to a product of s independent square random matrices, X = X_1⋯X_s. New formulas for the level densities are derived for s = 3 and s = 1/3. Moreover, the level density corresponding to the generalized Bures distribution, given by the free convolution of arcsine and MP distributions, is obtained. We also explain the reason for such a curious convolution. The technique proposed here allows for the derivation of the level densities for several other cases.
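The moments of the free multiplicative powers MP^⊠s are the Fuss-Catalan numbers, a standard fact that gives a quick consistency check independent of the paper's Cauchy-transform machinery. A minimal sketch (`math.comb` needs Python 3.8+):

```python
from math import comb
from fractions import Fraction

def fuss_catalan(n, s):
    """n-th moment of MP^(boxtimes s), the s-th free multiplicative power of
    the Marchenko-Pastur law: FC(n, s) = C(sn + n, n) / (sn + 1)."""
    return Fraction(comb(s * n + n, n), s * n + 1)

# s = 1 recovers the Catalan numbers, the moments of MP itself;
# s = 2 gives the moments of a product of two square random matrices.
catalan = [fuss_catalan(n, 1) for n in range(5)]   # 1, 1, 2, 5, 14
```

Using `Fraction` keeps the moments exact, so the check does not depend on floating-point rounding.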
A Convolutional Network for Semantic Facade Segmentation and Interpretation
NASA Astrophysics Data System (ADS)
Schmitz, Matthias; Mayer, Helmut
2016-06-01
In this paper we present an approach for semantic interpretation of facade images based on a Convolutional Network. Our network processes the input images in a fully convolutional way and generates pixel-wise predictions. We show that there is no need for large datasets to train the network when transfer learning is employed, i.e., a part of an already existing network is used and fine-tuned, and when the available data are augmented with deformed patches of the images for training. The network is trained end-to-end with patches of the images, and each patch is augmented independently. To undo the downsampling for the classification, we add deconvolutional layers to the network. Outputs of different layers of the network are combined to achieve more precise pixel-wise predictions. We demonstrate the potential of our network based on results for the eTRIMS (Korč and Förstner, 2009) dataset reduced to facades.
UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.
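The steady-flow LIC underlying UFLIC can be sketched in a few lines: for each pixel, trace the streamline in both directions and average a white-noise texture along it. This toy version (the function name, step size, and kernel length are illustrative choices; UFLIC's time-accurate value depositing and feed-forward schemes are not reproduced) produces streaks aligned with the flow:

```python
import numpy as np

def lic(vx, vy, noise, length=10, h=0.5):
    """Basic steady-flow line integral convolution: for each pixel, average
    the noise texture along the local streamline (both directions)."""
    ny, nx = noise.shape
    out = np.zeros_like(noise)
    for j in range(ny):
        for i in range(nx):
            acc, cnt = 0.0, 0
            for sign in (1.0, -1.0):            # integrate both directions
                x, y = float(i), float(j)
                for _ in range(length):
                    u = vx[int(round(y)) % ny, int(round(x)) % nx]
                    v = vy[int(round(y)) % ny, int(round(x)) % nx]
                    n = np.hypot(u, v)
                    if n < 1e-12:
                        break                   # critical point: stop tracing
                    x += sign * h * u / n       # Euler step along the flow
                    y += sign * h * v / n
                    acc += noise[int(round(y)) % ny, int(round(x)) % nx]
                    cnt += 1
            out[j, i] = acc / cnt if cnt else noise[j, i]
    return out

# Uniform horizontal flow on a small periodic grid: the output is the noise
# smeared along rows, so its variance drops relative to the input noise.
rng = np.random.default_rng(0)
noise = rng.random((32, 32))
vx = np.ones((32, 32))
vy = np.zeros((32, 32))
img = lic(vx, vy, noise)
```

The averaging along streamlines is exactly the convolution in "line integral convolution"; UFLIC replaces the static noise input with values deposited forward in time so that the streaks follow the unsteady flow coherently across frames.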
Image Super-Resolution Using Deep Convolutional Networks.
Dong, Chao; Loy, Chen Change; He, Kaiming; Tang, Xiaoou
2016-02-01
We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. PMID:26761735
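The three-stage SRCNN pipeline (patch extraction, non-linear mapping, reconstruction) can be sketched as a plain numpy forward pass. The 9-1-5 filter sizes follow the paper; the filter counts and random weights below are toy values for illustration only, not trained parameters:

```python
import numpy as np

def conv2d(x, w, b):
    """Naive 'valid' convolution: x is (C_in, H, W), w is (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    _, h, wid = x.shape
    out = np.zeros((c_out, h - k + 1, wid - k + 1))
    for o in range(c_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[o]) + b[o]
    return out

def srcnn(y, params):
    """SRCNN forward pass on an already-upscaled low-resolution image y:
    patch extraction -> non-linear mapping -> reconstruction."""
    (w1, b1), (w2, b2), (w3, b3) = params
    f1 = np.maximum(conv2d(y, w1, b1), 0)   # 9x9 patch extraction + ReLU
    f2 = np.maximum(conv2d(f1, w2, b2), 0)  # 1x1 non-linear mapping + ReLU
    return conv2d(f2, w3, b3)               # 5x5 reconstruction (linear)

rng = np.random.default_rng(0)
n1, n2 = 8, 4   # toy filter counts (the paper uses 64 and 32)
params = [(0.1 * rng.standard_normal((n1, 1, 9, 9)), np.zeros(n1)),
          (0.1 * rng.standard_normal((n2, n1, 1, 1)), np.zeros(n2)),
          (0.1 * rng.standard_normal((1, n2, 5, 5)), np.zeros(1))]
out = srcnn(rng.random((1, 33, 33)), params)
```

Because all three convolutions are 'valid', a 33x33 input shrinks by 8, 0, and 4 pixels per side pair, so the output is 21x21; the joint end-to-end optimization of all three layers is what distinguishes SRCNN from sparse-coding pipelines that tune each stage separately.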
Deep learning for steganalysis via convolutional neural networks
NASA Astrophysics Data System (ADS)
Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu
2015-03-01
Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis that learns features automatically via deep learning models. We propose a novel customized Convolutional Neural Network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database.
A new computational decoding complexity measure of convolutional codes
NASA Astrophysics Data System (ADS)
Benchimol, Isaac B.; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.
2014-12-01
This paper presents a computational complexity measure of convolutional codes well suitable for software implementations of the Viterbi algorithm (VA) operating with hard decision. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of implementation of each arithmetic operation is determined in terms of machine cycles taken by its execution using a typical digital signal processor widely used for low-power telecommunications applications.
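For concreteness, hard-decision Viterbi decoding over the conventional trellis can be sketched for the classic rate-1/2, constraint-length-3 code with generators (7, 5) in octal. This standard textbook code is an illustration only; the paper's minimal-trellis machinery is not reproduced. Per trellis step, each of the 4 states examines 2 incoming branches, each costing two Hamming-distance comparisons plus an add and a compare, which is exactly the kind of operation count such a complexity measure tallies:

```python
import itertools

G = (0b111, 0b101)  # generator taps (7, 5) in octal, constraint length 3

def encode(bits):
    """Rate-1/2 convolutional encoder; two zero tail bits flush the state."""
    state, out = 0, []
    for b in list(bits) + [0, 0]:
        reg = (b << 2) | state                      # [current, prev, prev-prev]
        out += [bin(reg & g).count('1') & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    n_states, INF = 4, float('inf')
    metric = [0] + [INF] * (n_states - 1)           # start in the zero state
    paths = [[] for _ in range(n_states)]
    for r0, r1 in zip(received[::2], received[1::2]):
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s, b in itertools.product(range(n_states), (0, 1)):
            if metric[s] == INF:
                continue
            reg = (b << 2) | s
            o = [bin(reg & g).count('1') & 1 for g in G]
            m = metric[s] + (o[0] != r0) + (o[1] != r1)  # Hamming branch metric
            ns = reg >> 1
            if m < new_metric[ns]:                  # keep the survivor only
                new_metric[ns] = m
                new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best][:-2]                         # drop the two tail bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
code = encode(msg)
code[5] ^= 1              # single channel bit error
decoded = viterbi(code)   # recovered exactly: the code's free distance is 5
```

Counting the adds and compares inside the double loop for the conventional trellis, and repeating the count for the minimal trellis module, is the comparison the complexity measure formalizes.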
Convolution using guided acoustooptical interaction in thin-film waveguides
NASA Technical Reports Server (NTRS)
Chang, W. S. C.; Becker, R. A.; Tsai, C. S.; Yao, I. W.
1977-01-01
Interaction of two antiparallel acoustic surface waves (ASW) with an optical guided wave has been investigated theoretically as well as experimentally to obtain the convolution of two ASW signals. The maximum time-bandwidth product that can be achieved by such a convolver is shown to be of the order of 1000 or more. The maximum dynamic range can be as large as 83 dB.
New syndrome decoder for (n, 1) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that differs from, and is simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
Face Detection Using GPU-Based Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Nasse, Fabian; Thurau, Christian; Fink, Gernot A.
In this paper, we consider the problem of face detection under pose variations. Unlike other contributions, this work focuses on an efficient implementation utilizing the computational power of modern graphics cards. The proposed system consists of a parallelized implementation of convolutional neural networks (CNNs), with special emphasis on also parallelizing the detection process. Experimental validation in a smart conference room with 4 active ceiling-mounted cameras shows a dramatic speed gain under real-life conditions.
On the growth and form of cortical convolutions
NASA Astrophysics Data System (ADS)
Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.
2016-06-01
The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. All together, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.
Fine-grained representation learning in convolutional autoencoders
NASA Astrophysics Data System (ADS)
Luo, Chang; Wang, Jie
2016-03-01
Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariances, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in the preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we propose the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law can guide CAEs to extract better fine-grained features and perform better in multiclass classification tasks. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representations in other convolutional neural networks.
Automatic localization of vertebrae based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Shen, Wei; Yang, Feng; Mu, Wei; Yang, Caiyun; Yang, Xin; Tian, Jie
2015-03-01
Localization of the vertebrae is of importance in many medical applications. For example, the vertebrae can serve as landmarks in image registration. They can also provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebrae localization method using convolutional neural networks (CNN). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture. One is used to distinguish the vertebrae from other tissues in the chest, and the other is aimed at detecting the centers of the vertebrae. The architecture contains two convolutional layers, both of which are followed by a max-pooling layer. Then the output feature vector from the max-pooling layer is fed into a multilayer perceptron (MLP) classifier which has one hidden layer. Experiments were performed on ten chest CT images. We used a leave-one-out strategy to train and test the proposed method. Quantitative comparison between the predicted centers and ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.
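The forward pass of the architecture described above (two convolutional layers, each followed by max-pooling, then a one-hidden-layer MLP) can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the patch size, filter counts, and layer widths here are hypothetical.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Single-channel 'valid' 2D convolution (cross-correlation, as is
    conventional in CNNs)."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2(x):
    """Non-overlapping 2x2 max-pooling (odd edges are cropped)."""
    H2, W2 = x.shape[0] // 2, x.shape[1] // 2
    return x[:H2 * 2, :W2 * 2].reshape(H2, 2, W2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
patch = rng.random((16, 16))                  # a candidate vertebra patch
k1, k2 = rng.random((3, 3)), rng.random((3, 3))

# two conv layers, each followed by max-pooling, as in the abstract
f = maxpool2(np.maximum(conv2d_valid(patch, k1), 0))   # 16 -> 14 -> 7
f = maxpool2(np.maximum(conv2d_valid(f, k2), 0))       # 7 -> 5 -> 2
v = f.ravel()                                          # feature vector

# one-hidden-layer MLP classifier head (hypothetical sizes and weights)
W1, b1 = rng.random((8, v.size)), np.zeros(8)
W2, b2 = rng.random((2, 8)), np.zeros(2)
h = np.tanh(W1 @ v + b1)
scores = W2 @ h + b2          # vertebra vs. non-vertebra scores
```

In practice a framework with trained weights would replace these random filters; the sketch only shows how the pooled feature vector reaches the MLP stage.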
SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac
Sugimoto, S; Inoue, T; Kurokawa, C; Usui, K; Sasai, K; Utsunomiya, S; Ebe, K
2014-06-01
Purpose: Dynamic tumor tracking using the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) has been available when respiratory motion is significant. The irradiation accuracy of the dynamic tumor tracking has been reported to be excellent. In addition to the irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because multiple respiratory phases have to be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the degrees of freedom of gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using the linear transformation between coordinate systems. Methods: The linear transformation matrices between the coordinate systems with and without gimbal swings were constructed using the combination of translation and rotation matrices. The coordinate system where the radiation source is at the origin and the beam axis along the z axis was adopted. The transformation can be divided into the translation from the radiation source to the gimbal rotation center, the two rotations around the center relating to the gimbal swings, and the translation from the gimbal center to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can be performed as if there were no gimbal swing. The algorithm was implemented in the treatment planning system, PlanUNC (University of North Carolina, NC). The convolution/superposition algorithm was used. The dose calculations with and without gimbal swings were performed for the 3 × 3 cm² field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam. No significant additional time due to the gimbal swing was observed. Conclusions: The dose calculation algorithm for the finite gimbal swing was implemented. The calculation time was moderate.
NASA Astrophysics Data System (ADS)
Xu, Zhigang
2015-12-01
In this study, a new method of storm surge modeling is proposed. This method is orders of magnitude faster than the traditional method within the linear dynamics framework. The tremendous enhancement of the computational efficiency results from the use of a pre-calculated all-source Green's function (ASGF), which connects a point of interest (POI) to the rest of the world ocean. Once the ASGF has been pre-calculated, it can be repeatedly used to quickly produce a time series of a storm surge at the POI. Using the ASGF, storm surge modeling can be simplified as its convolution with an atmospheric forcing field. If the ASGF is prepared with the global ocean as the model domain, the output of the convolution is free of the effects of artificial open-water boundary conditions. Being the first part of this study, this paper presents mathematical derivations from the linearized and depth-averaged shallow-water equations to the ASGF convolution, establishes various auxiliary concepts that will be useful throughout the study, and interprets the meaning of the ASGF from different perspectives. This paves the way for the ASGF convolution to be further developed as a data-assimilative regression model in part II. Five Appendixes provide additional details about the algorithm and the MATLAB functions.
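The core of the ASGF approach above is that, once the Green's function for the point of interest is pre-calculated, a surge time series is just a discrete convolution with the forcing. A minimal sketch (both the Green's function and the forcing below are made up for illustration; the real ASGF comes from the global shallow-water model):

```python
import numpy as np

# Hypothetical pre-calculated all-source Green's function at the POI:
# g[k] is the surge response at lag k*dt to a unit forcing impulse.
dt = 1.0                                                  # hours
lags = np.arange(200)
g = np.exp(-lags * dt / 12.0) * np.sin(lags * dt / 6.0)   # made-up ASGF

# Atmospheric forcing time series projected onto the POI (also made up)
t = np.arange(500)
forcing = np.exp(-((t - 100) / 20.0) ** 2)                # a passing storm

# Storm surge at the POI = discrete convolution of the ASGF with forcing;
# no open-boundary conditions enter because g was built on the global ocean.
surge = np.convolve(forcing, g)[:t.size] * dt
```

Re-running the last line with a different forcing field is all that is needed to model a different storm, which is where the orders-of-magnitude speedup over time-stepping the full model comes from.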
Abe, Sumiyoshi; Okuyama, Shinji
2012-01-01
The role of the superposition principle is discussed for the quantum-mechanical Carnot engine introduced by Bender, Brody, and Meister [J. Phys. A 33, 4427 (2000)]. It is shown that the efficiency of the engine can be enhanced by the superposition of quantum states. A finite-time process is also discussed and the condition of the maximum power output is presented. Interestingly, the efficiency at the maximum power is lower than that without superposition.
The origin of non-classical effects in a one-dimensional superposition of coherent states
NASA Technical Reports Server (NTRS)
Buzek, V.; Knight, P. L.; Barranco, A. Vidiella
1992-01-01
We investigate the nature of the quantum fluctuations in a light field created by the superposition of coherent fields. We give a physical explanation (in terms of Wigner functions and phase-space interference) why the 1-D superposition of coherent states in the direction of the x-quadrature leads to the squeezing of fluctuations in the y-direction, and show that such a superposition can generate the squeezed vacuum and squeezed coherent states.
Sanchez-Parcerisa, D; Cortés-Giraldo, M A; Dolney, D; Kondrla, M; Fager, M; Carabe, A
2016-02-21
In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm⁻¹) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process. PMID:26840945
The number of terms in the superpositions upper bounds the amount of the coherence change
NASA Astrophysics Data System (ADS)
Liu, Feng; Li, Fei
2016-07-01
For the l1 norm of coherence, what is the relation between the coherence of a state and the individual terms that by superposition yield the state? We find upper bounds on the coherence change before and after the superposition. When every term comes from one Hilbert subspace, the upper bound is the number of terms in the superposition minus one. However, when the terms have support on orthogonal subspaces, the coherence of the superposition cannot exceed the average coherence of all the superposed terms by more than double the above upper bound.
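The quantity discussed above is easy to compute numerically: for a pure state the l1 norm of coherence is the sum of the absolute off-diagonal density-matrix entries. The sketch below checks the "number of terms minus one" bound for a hypothetical two-term superposition in a single subspace (the states chosen are arbitrary examples, not from the paper):

```python
import numpy as np

def l1_coherence(psi):
    """l1 norm of coherence of a pure state in the reference basis:
    sum of |rho_ij| over i != j."""
    psi = psi / np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    # the diagonal of |rho| sums to 1 for a normalized pure state
    return np.abs(rho).sum() - 1.0

# two (non-orthogonal) terms living in the same 3-dimensional space
a = np.array([1.0, 1.0, 0.0])
b = np.array([0.0, 1.0, 1.0])
sup = a + b                      # unnormalized superposition of n = 2 terms

avg = 0.5 * (l1_coherence(a) + l1_coherence(b))
change = abs(l1_coherence(sup) - avg)
assert change <= 2 - 1 + 1e-12   # bound: number of terms minus one
```

Here each branch has coherence 1, the superposition has coherence 5/3, and the change of 2/3 indeed stays below the bound of n − 1 = 1.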
Robustness of superposition states evolving under the influence of a thermal reservoir
Sales, J. S.; Almeida, N. G. de
2011-06-15
We study the evolution of superposition states under the influence of a reservoir at zero and finite temperatures in cavity quantum electrodynamics aiming to know how their purity is lost over time. The superpositions studied here are composed of coherent states, orthogonal coherent states, squeezed coherent states, and orthogonal squeezed coherent states, which we introduce to generalize the orthogonal coherent states. For comparison, we also show how the robustness of the superpositions studied here differs from that of a qubit given by a superposition of zero- and one-photon states.
Macroscopicity of quantum superpositions on a one-parameter unitary path in Hilbert space
NASA Astrophysics Data System (ADS)
Volkoff, T. J.; Whaley, K. B.
2014-12-01
We analyze quantum states formed as superpositions of an initial pure product state and its image under local unitary evolution, using two measurement-based measures of superposition size: one based on the optimal quantum binary distinguishability of the branches of the superposition and another based on the ratio of the maximal quantum Fisher information of the superposition to that of its branches, i.e., the relative metrological usefulness of the superposition. A general formula for the effective sizes of these states according to the branch-distinguishability measure is obtained and applied to superposition states of N quantum harmonic oscillators composed of Gaussian branches. Considering optimal distinguishability of pure states on a time-evolution path leads naturally to a notion of distinguishability time that generalizes the well-known orthogonalization times of Mandelstam and Tamm and Margolus and Levitin. We further show that the distinguishability time provides a compact operational expression for the superposition size measure based on the relative quantum Fisher information. By restricting the maximization procedure in the definition of this measure to an appropriate algebra of observables, we show that the superposition size of, e.g., NOON states and hierarchical cat states, can scale linearly with the number of elementary particles comprising the superposition state, implying precision scaling inversely with the total number of photons when these states are employed as probes in quantum parameter estimation of a 1-local Hamiltonian in this algebra.
Superposition method for analysis of free-edge stresses
NASA Technical Reports Server (NTRS)
Whitcomb, J. D.; Raju, I. S.
1983-01-01
Superposition techniques were used to transform the edge stress problem for composite laminates into a more lucid form. By eliminating loads and stresses not contributing to interlaminar stresses, the essential aspects of the edge stress problem are easily recognized. Transformed problem statements were developed for both mechanical and thermal loads. Also, a technique for approximate analysis using a two dimensional plane strain analysis was developed. Conventional quasi-three dimensional analysis was used to evaluate the accuracy of the transformed problems and the approximate two dimensional analysis. The transformed problems were shown to be exactly equivalent to the original problems. The approximate two dimensional analysis was found to predict the interlaminar normal and shear stresses reasonably well.
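The transformation described above rests on the linearity of the governing equations: the response to combined mechanical and thermal loads is the sum of the responses to each load alone, so loads that do not contribute to interlaminar stresses can be subtracted off. A tiny made-up linear "structure" (not the laminate analysis itself) demonstrates the property:

```python
import numpy as np

# A small hypothetical linear structure: stiffness system K u = f
rng = np.random.default_rng(1)
K = rng.random((5, 5))
K = K @ K.T + 5 * np.eye(5)        # symmetric positive-definite stiffness

f_mech = rng.random(5)             # mechanical load vector
f_thermal = rng.random(5)          # thermal load vector

# Superposition: solving for the combined load equals summing the
# solutions for the individual loads
u_combined = np.linalg.solve(K, f_mech + f_thermal)
u_separate = np.linalg.solve(K, f_mech) + np.linalg.solve(K, f_thermal)
assert np.allclose(u_combined, u_separate)
```

This is what licenses the paper's move of eliminating non-contributing load components before analyzing the edge-stress problem.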
Sensing Super-Position: Human Sensing Beyond the Visual Spectrum
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2007-01-01
The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing in natural human sensing. The current implementation of the device translates the output of visual and other passive or active sensing instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot vehicle system requirements. The system can be further developed into a cheap, portable, low-power device that takes into account the limited capabilities of the human user as well as the typical characteristics of the user's dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
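The "distribute an image in time" mapping described above can be sketched as follows: scan the image columns left to right, and let each row drive a sinusoid whose pitch encodes vertical position and whose amplitude encodes brightness. This is a generic image-to-sound scheme consistent with the abstract, not the authors' exact device; the sample rate, frequency range, and duration below are arbitrary choices.

```python
import numpy as np

def image_to_audio(img, duration=1.0, fs=8000, fmin=200.0, fmax=2000.0):
    """Map a brightness image to an audio signal: columns become time
    slices, rows become sinusoid frequencies, brightness becomes amplitude."""
    n_rows, n_cols = img.shape
    freqs = np.linspace(fmax, fmin, n_rows)        # top row -> highest pitch
    samples_per_col = int(duration * fs / n_cols)
    chunks = []
    t0 = 0
    for c in range(n_cols):
        t = (t0 + np.arange(samples_per_col)) / fs
        tones = img[:, c][:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        chunks.append(tones.sum(axis=0))           # mix the rows together
        t0 += samples_per_col
    return np.concatenate(chunks)

img = np.zeros((8, 10))
img[2, :] = 1.0               # a single bright horizontal line
wav = image_to_audio(img)     # heard as one steady tone sweeping in time
```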
Adiabatic rotation, quantum search, and preparation of superposition states
NASA Astrophysics Data System (ADS)
Siu, M. Stewart
2007-06-01
We introduce the idea of using adiabatic rotation to generate superpositions of a large class of quantum states. For quantum computing this is an interesting alternative to the well-studied “straight line” adiabatic evolution. In ways that complement recent results, we show how to efficiently prepare three types of states: Kitaev’s toric code state, the cluster state of the measurement-based computation model, and the history state used in the adiabatic simulation of a quantum circuit. We also show that the method, when adapted for quantum search, provides quadratic speedup as other optimal methods do with the advantages that the problem Hamiltonian is time independent and that the energy gap above the ground state is strictly nondecreasing with time. Likewise the method can be used for optimization as an alternative to the standard adiabatic algorithm.
Predicting jet radius in electrospinning by superpositioning exponential functions
NASA Astrophysics Data System (ADS)
Widartiningsih, P. M.; Iskandar, F.; Munir, M. M.; Viridi, S.
2016-08-01
This paper presents an analytical study of the correlation between viscosity and fiber diameter in electrospinning. Control over fiber diameter in the electrospinning process is important since it determines the performance of the resulting nanofiber. Theoretically, fiber diameter is determined by surface tension, solution concentration, flow rate, and electric current, but experimentally viscosity has been shown to have a significant influence on fiber diameter. The jet radius equation in the electrospinning process is divided into three regions: near the nozzle, far from the nozzle, and at the jet terminal, with no correlation between these equations. Superposition of an exponential series model combines the equations into one, so that all of the working parameters in electrospinning contribute to the fiber diameter. This method yields a linear relation between solution viscosity and jet radius. However, this method works only for low viscosity.
Convolution-variation separation method for efficient modeling of optical lithography.
Liu, Shiyuan; Zhou, Xinjiang; Lv, Wen; Xu, Shuang; Wei, Haiqing
2013-07-01
We propose a general method called convolution-variation separation (CVS) to enable efficient optical imaging calculations without sacrificing accuracy when simulating images for a wide range of process variations. The CVS method is derived from first principles using a series expansion, which consists of a set of predetermined basis functions weighted by a set of predetermined expansion coefficients. The basis functions are independent of the process variations and thus may be computed and stored in advance, while the expansion coefficients depend only on the process variations. Optical image simulations for defocus and aberration variations with applications in robust inverse lithography technology and lens aberration metrology have demonstrated the main concept of the CVS method.
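The separation described above relies on the linearity of convolution: if the variation-dependent kernel is expanded in a fixed basis, the mask can be convolved with each basis function once and the results recombined for any variation value. The sketch below uses a hypothetical three-term power-series kernel to illustrate the idea (the real CVS expansion and imaging model are more involved):

```python
import numpy as np
from numpy.fft import fft2, ifft2

def conv(img, ker):
    """Circular 2D convolution via the FFT (kernel zero-padded to img size)."""
    return np.real(ifft2(fft2(img) * fft2(ker, img.shape)))

rng = np.random.default_rng(2)
mask = rng.random((32, 32))             # the fixed mask pattern

# Hypothetical variation-dependent kernel expanded in a power basis:
# K(v) = B0 + v*B1 + v^2*B2, with the B_k independent of the variation v
B = [rng.random((5, 5)) for _ in range(3)]

# Pre-compute once: the mask convolved with each basis function
pre = [conv(mask, Bk) for Bk in B]

def image_at(v):
    """Image at process variation v from the precomputed convolutions only."""
    return sum((v ** k) * p for k, p in enumerate(pre))

# Check against a direct convolution with the assembled kernel K(v)
v = 0.3
K_v = B[0] + v * B[1] + v ** 2 * B[2]
assert np.allclose(image_at(v), conv(mask, K_v))
```

Evaluating `image_at` for a new defocus or aberration value costs only a few weighted sums, which is the efficiency gain the CVS method targets.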
Mochizuki, Koji; Takayama, Kozo
2014-01-01
This study reports the results of applying the time-temperature superposition principle (TTSP) to the prediction of color changes in liquid formulations. A sample solution consisting of L-tryptophan and glucose was used as the model liquid formulation for the Maillard reaction. After accelerated aging treatment at elevated temperatures, the Commission Internationale de l'Eclairage (CIE) LAB color parameters (a*, b*, L*, and ΔE*ab) of the sample solution were measured using a spectrophotometer. The TTSP was then applied to a kinetic analysis of the color changes. The calculated values of the apparent activation energy of a*, b*, L*, and ΔE*ab were 105.2, 109.8, 91.6, and 103.7 kJ/mol, respectively. The predicted values of the color parameters at 40°C were calculated using Arrhenius plots for each of the color parameters. A comparison of the relationships between the experimental and predicted values of each color parameter revealed the coefficients of determination for a*, b*, L*, and ΔE*ab to be 0.961, 0.979, 0.960, and 0.979, respectively. All the R² values were sufficiently high, and these results suggested that the prediction was highly reliable. Kinetic analysis using the TTSP was successfully applied to calculating the apparent activation energy and to predicting the color changes at any temperature or duration. PMID:25450630
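The Arrhenius step underlying the analysis above can be reproduced with two rate constants from accelerated aging. The numbers below are hypothetical stand-ins (not the paper's data); they only show how an apparent activation energy and a 40°C extrapolation fall out of the two-point Arrhenius relation:

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ/(mol*K)

# Hypothetical first-order rate constants for one color parameter,
# measured at two elevated (accelerated-aging) temperatures
T1, k1 = 333.15, 0.040   # 60 C
T2, k2 = 353.15, 0.250   # 80 C

# Apparent activation energy from the two-point Arrhenius relation:
# ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2)
Ea = R * np.log(k2 / k1) / (1 / T1 - 1 / T2)

# Extrapolate the rate constant down to the storage temperature of 40 C
T0 = 313.15
k0 = k1 * np.exp(-(Ea / R) * (1 / T0 - 1 / T1))

# TTSP shift factor: aging time at T1 equivalent to unit time at 40 C
a_T = k1 / k0
```

With a full data set one would fit all temperatures on one Arrhenius plot, as the authors do, rather than use just two points.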
Fast space-varying convolution and its application in stray light reduction
NASA Astrophysics Data System (ADS)
Wei, Jianing; Cao, Guangzhi; Bouman, Charles A.; Allebach, Jan P.
2009-02-01
Space-varying convolution often arises in the modeling or restoration of images captured by optical imaging systems. For example, in applications such as microscopy or photography the distortions introduced by lenses typically vary across the field of view, so accurate restoration also requires the use of space-varying convolution. While space-invariant convolution can be efficiently implemented with the Fast Fourier Transform (FFT), space-varying convolution requires direct implementation of the convolution operation, which can be very computationally expensive when the convolution kernel is large. In this paper, we develop a general approach to the efficient implementation of space-varying convolution through the use of matrix source coding techniques. This method can dramatically reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. This approach leads to a tradeoff between the accuracy and speed of the operation that is closely related to the distortion-rate tradeoff that is commonly made in lossy source coding. We apply our method to the problem of stray light reduction for digital photographs, where convolution with a spatially varying stray light point spread function is required. The experimental results show that our algorithm can achieve a dramatic reduction in computation while achieving high accuracy.
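The direct space-varying convolution whose cost motivates the paper's factored approximation looks like this: every output pixel is a weighted sum of its neighborhood under its own kernel. The sketch below uses a toy stray-light-like blur whose width grows across the field of view (the kernel model is a made-up illustration, not the paper's measured point spread function):

```python
import numpy as np

def space_varying_convolve(img, kernel_at, r):
    """Direct space-varying convolution: output pixel (i, j) is a weighted
    sum of its (2r+1)x(2r+1) neighborhood under its own kernel
    kernel_at(i, j). Cost is O(pixels * kernel area), which is what the
    sparse factored approximation in the paper is designed to avoid."""
    H, W = img.shape
    pad = np.pad(img, r)
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            k = kernel_at(i, j)
            out[i, j] = np.sum(pad[i:i + 2 * r + 1, j:j + 2 * r + 1] * k)
    return out

def kernel_at(i, j, r=2):
    """Toy Gaussian blur that widens toward the right edge of the field."""
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    sigma = 0.5 + 0.05 * j
    k = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return k / k.sum()

rng = np.random.default_rng(3)
img = rng.random((16, 16))
blurred = space_varying_convolve(img, kernel_at, r=2)
```

Because the kernel differs per pixel, the FFT shortcut available for space-invariant convolution does not apply, making this the expensive baseline against which the matrix source coding approach is compared.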
Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Zuo, Zhen; Shuai, Bing; Wang, Gang; Liu, Xiao; Wang, Xingxing; Wang, Bing; Chen, Yushi
2016-07-01
Existing deep convolutional neural networks (CNNs) have shown great success on image classification. CNNs mainly consist of convolutional and pooling layers, both of which are performed on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representations. In contrast, recurrent neural networks (RNNs) are well known for their ability to encode contextual information among sequential data, and they only require a limited number of network parameters. Since general RNNs can hardly be applied directly to non-sequential data, we propose hierarchical RNNs (HRNNs). In HRNNs, each RNN layer focuses on modeling spatial dependencies among image regions from the same scale but different locations, while the cross-scale RNN connections model dependencies among regions from the same location but different scales. Specifically, we propose two recurrent neural network models: 1) the hierarchical simple recurrent network (HSRN), which is fast and has low computational cost; and 2) the hierarchical long-short term memory recurrent network (HLSTM), which performs better than HSRN at the price of more computational cost. In this manuscript, we integrate CNNs with HRNNs, and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only make use of the representation power of CNNs, but also efficiently encode spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397, and MIT indoor, and competitive results on ILSVRC 2012.
A note on superposition of two unknown states using Deutsch CTC model
NASA Astrophysics Data System (ADS)
Sami, Sasha; Chakrabarty, Indranil
2016-08-01
In a recent work, the authors prove yet another no-go theorem that forbids the existence of a universal probabilistic quantum protocol producing a superposition of two unknown quantum states. In this short note, we show that in the presence of closed time-like curves (CTCs), one can indeed create superpositions of unknown quantum states and evade the no-go result.
Faster GPU-based convolutional gridding via thread coarsening
NASA Astrophysics Data System (ADS)
Merry, B.
2016-07-01
Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to 3.2 × for single-polarization gridding and 1.9 × for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.
Simplified Syndrome Decoding of (n, 1) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that differs from and is simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2, 1) CC.
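The starting point of syndrome decoding for a rate-1/2 convolutional code can be sketched with binary polynomial arithmetic: for generators g1(D), g2(D) the syndrome s(D) = r1(D)g2(D) + r2(D)g1(D) (mod 2) vanishes for every codeword and depends only on the channel errors. The generators and data below are a standard textbook example, not necessarily the one in the paper; the Diophantine/Viterbi-like search over the error coset is not shown.

```python
import numpy as np

def gf2_mul(a, b):
    """Multiply binary polynomials (coefficient arrays, lowest degree first)."""
    return np.convolve(a, b) % 2

def gf2_add(a, b):
    """Add (XOR) binary polynomials of possibly different lengths."""
    out = np.zeros(max(len(a), len(b)), dtype=int)
    out[:len(a)] ^= np.asarray(a, dtype=int)
    out[:len(b)] ^= np.asarray(b, dtype=int)
    return out

# Generators of the familiar rate-1/2 code: g1 = 1 + D^2, g2 = 1 + D + D^2
g1 = np.array([1, 0, 1])
g2 = np.array([1, 1, 1])

u = np.array([1, 0, 1, 1])                  # information sequence u(D)
c1, c2 = gf2_mul(u, g1), gf2_mul(u, g2)     # the two encoded streams

# Syndrome s = c1*g2 + c2*g1 = u*g1*g2 + u*g2*g1 = 0 (mod 2) for codewords
s = gf2_add(gf2_mul(c1, g2), gf2_mul(c2, g1))
assert not s.any()

# A channel error makes the syndrome nonzero; s depends only on the error
e1 = np.zeros_like(c1)
e1[2] = 1                                   # single bit error in stream 1
s_err = gf2_add(gf2_mul(gf2_add(c1, e1), g2), gf2_mul(c2, g1))
assert s_err.any()
```

The decoder's task, as the abstract describes, is then to find the minimum-weight error vector within the coset of solutions consistent with the observed syndrome.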
New syndrome decoding techniques for the (n, k) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964
Convolution seal for transition duct in turbine system
Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray
2015-05-26
A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface feature for interfacing with an adjacent transition duct. The turbine system further includes a convolution seal contacting the interface feature to provide a seal between the interface feature and the adjacent transition duct.
Convolution seal for transition duct in turbine system
Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray
2015-03-10
A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface member for interfacing with a turbine section. The turbine system further includes a convolution seal contacting the interface member to provide a seal between the interface member and the turbine section.
Convolutional neural networks for mammography mass lesion classification.
Arevalo, John; Gonzalez, Fabio A; Ramos-Pollan, Raul; Oliveira, Jose L; Guevara Lopez, Miguel Angel
2015-08-01
Feature extraction is a fundamental step when mammography image analysis is addressed using learning based approaches. Traditionally, problem dependent handcrafted features are used to represent the content of images. An alternative approach successfully applied in other domains is the use of neural networks to automatically discover good features. This work presents an evaluation of convolutional neural networks to learn features for mammography mass lesions before feeding them to a classification stage. Experimental results showed that this approach is a suitable strategy, improving on the state-of-the-art representation from 79.9% to 86% in terms of area under the ROC curve. PMID:26736382
Convolutional neural networks for synthetic aperture radar classification
NASA Astrophysics Data System (ADS)
Profeta, Andrew; Rodriguez, Andres; Clouse, H. Scott
2016-05-01
For electro-optical object recognition, convolutional neural networks (CNNs) are the state-of-the-art. For large datasets, CNNs are able to learn meaningful features used for classification. However, their application to synthetic aperture radar (SAR) has been limited. In this work we experimented with various CNN architectures on the MSTAR SAR dataset. As the input to the CNN we used the magnitude and phase (2 channels) of the SAR imagery. We used the deep learning toolboxes CAFFE and Torch7. Our results show that we can achieve 93% accuracy on the MSTAR dataset using CNNs.
A digital model for streamflow routing by convolution methods
Doyle, W.H., Jr.; Shearman, H.O.; Stiltner, G.J.; Krug, W.O.
1984-01-01
U.S. Geological Survey computer model, CONROUT, for routing streamflow by unit-response convolution flow-routing techniques from an upstream channel location to a downstream channel location has been developed and documented. Calibration and verification of the flow-routing model and subsequent use of the model for simulation are also documented. Three hypothetical examples and two field applications are presented to illustrate basic flow-routing concepts. Most of the discussion is limited to daily flow routing since, to date, all completed and current studies of this nature involve daily flow routing. However, the model is programmed to accept hourly flow-routing data. (USGS)
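Unit-response convolution routing of the kind CONROUT implements amounts to convolving the upstream hydrograph with a unit-response function. A minimal sketch with made-up daily values (not CONROUT's calibrated responses):

```python
import numpy as np

# Hypothetical daily unit-response function: how one unit of upstream flow
# arrives downstream over the following days (sums to 1 to conserve volume)
unit_response = np.array([0.05, 0.35, 0.30, 0.18, 0.08, 0.04])

# Upstream daily hydrograph: a made-up flood wave, in m^3/s
upstream = np.array([10, 10, 80, 250, 180, 90, 40, 20, 12, 10], dtype=float)

# Routed downstream flow = discrete convolution of inflow with the response;
# the peak arrives later and is attenuated, as expected of channel routing
downstream = np.convolve(upstream, unit_response)[:upstream.size]
```

Calibration, in this framing, is the estimation of the unit-response ordinates from observed upstream and downstream hydrographs.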
A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution
Walker, D.W.
1992-03-01
This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.
Aquifer response to stream-stage and recharge variations. II. Convolution method and applications
Barlow, P.M.; DeSimone, L.A.; Moench, A.F.
2000-01-01
In this second of two papers, analytical step-response functions, developed in the companion paper for several cases of transient hydraulic interaction between a fully penetrating stream and a confined, leaky, or water-table aquifer, are used in the convolution integral to calculate aquifer heads, streambank seepage rates, and bank storage that occur in response to streamstage fluctuations and basinwide recharge or evapotranspiration. Two computer programs developed on the basis of these step-response functions and the convolution integral are applied to the analysis of hydraulic interaction of two alluvial stream-aquifer systems in the northeastern and central United States. These applications demonstrate the utility of the analytical functions and computer programs for estimating aquifer and streambank hydraulic properties, recharge rates, streambank seepage rates, and bank storage. Analysis of the water-table aquifer adjacent to the Blackstone River in Massachusetts suggests that the very shallow depth of water table and associated thin unsaturated zone at the site cause the aquifer to behave like a confined aquifer (negligible specific yield). This finding is consistent with previous studies that have shown that the effective specific yield of an unconfined aquifer approaches zero when the capillary fringe, where sediment pores are saturated by tension, extends to land surface. Under this condition, the aquifer's response is determined by elastic storage only. Estimates of horizontal and vertical hydraulic conductivity, specific yield, specific storage, and recharge for a water-table aquifer adjacent to the Cedar River in eastern Iowa, determined by the use of analytical methods, are in close agreement with those estimated by use of a more complex, multilayer numerical model of the aquifer. Streambank leakance of the semipervious streambank materials also was estimated for the site. The streambank-leakance parameter may be considered to be a general (or lumped
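The convolution integral described above can be sketched in discrete form: each stage (or recharge) increment contributes the step response shifted to its start time, and the contributions superpose. The step-response ordinates below are placeholders, not the analytical functions of the companion paper.

```python
def head_change(stage_increments, step_response):
    """Aquifer head change by the discretized convolution integral:
    increment dH[k] applied at time k contributes dH[k] * s[t - k],
    where s is the step-response function. The ordinates of s used
    here are made-up placeholders."""
    heads = []
    for t in range(len(stage_increments)):
        h = sum(stage_increments[k] * step_response[t - k]
                for k in range(t + 1))
        heads.append(h)
    return heads

s = [0.1, 0.3, 0.6, 0.8]   # placeholder step-response ordinates
# A unit step in stream stage at t=0 reproduces the step response itself.
heads = head_change([1.0, 0.0, 0.0, 0.0], s)
print(heads)
```

With a more realistic stage record, the same loop superposes every increment, which is exactly how the two USGS programs use the analytical step-response functions.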
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy. PMID:24710398
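A direct (and, as the paper notes, computationally expensive) implementation of space-varying convolution might look like the following sketch, where the kernel is supplied by a hypothetical function of output position rather than being fixed as in space-invariant filtering.

```python
def space_varying_convolve(signal, kernel_at):
    """Direct space-varying convolution in 1-D: the impulse response
    depends on the output position i. kernel_at(i) returns that
    position's kernel, centered on i. This dense operator is what the
    paper's matrix source coding scheme approximately factors into
    sparse transforms."""
    n = len(signal)
    out = [0.0] * n
    for i in range(n):
        k = kernel_at(i)          # kernel varies (slowly) with position
        r = len(k) // 2
        for j, w in enumerate(k):
            idx = i + j - r
            if 0 <= idx < n:      # zero boundary handling
                out[i] += w * signal[idx]
    return out

# Hypothetical position-dependent kernel: wider (more smoothing)
# toward the right end of the signal.
def kernel_at(i):
    width = 1 if i < 3 else 3
    return [1.0 / width] * width

result = space_varying_convolve([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], kernel_at)
print(result)
```

The cost is one inner loop per output sample, which is why the FFT shortcut available for space-invariant kernels does not apply here.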
Single-Cell Phenotype Classification Using Deep Convolutional Neural Networks.
Dürr, Oliver; Sick, Beate
2016-10-01
Deep learning methods are currently outperforming traditional state-of-the-art computer vision algorithms in diverse applications and recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods to high-content screening-based phenotype classification. We trained a deep learning classifier in the form of convolutional neural networks with approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and carried out by the convolutional network, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%.
Convolutional neural network architectures for predicting DNA–protein binding
Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.
2016-01-01
Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
Convolutional Neural Network Based Fault Detection for Rotating Machinery
NASA Astrophysics Data System (ADS)
Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie
2016-09-01
Vibration analysis is a well-established technique for condition monitoring of rotating machines, as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually engineered features, such as the ball pass frequencies of the raceway, RMS, kurtosis, and crest factor, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults such as outer-raceway faults and lubrication degradation are considered, but healthy bearings and rotor imbalance are also included. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.
Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks.
Dosovitskiy, Alexey; Fischer, Philipp; Springenberg, Jost Tobias; Riedmiller, Martin; Brox, Thomas
2016-09-01
Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges, when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While features learned with our approach cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor. PMID:26540673
Multiple deep convolutional neural networks averaging for face alignment
NASA Astrophysics Data System (ADS)
Zhang, Shaohua; Yang, Hua; Yin, Zhouping
2015-05-01
Face alignment is critical for face recognition, and deep learning-based methods show promise for such problems, given that competitive results are achieved on benchmarks with additional benefits, such as dispensing with handcrafted features and initial shape estimates. However, most existing deep learning-based approaches are complicated and quite time-consuming during training. We propose a compact face alignment method that trains quickly without decreasing accuracy. Rectified linear units are employed, which allow all networks to converge approximately five times faster than with tanh neurons. An eight-learnable-layer deep convolutional neural network (DCNN) based on local response normalization and a padding convolutional layer (PCL) is designed to provide reliable initial values during prediction. A model combination scheme is presented to further reduce errors, while showing that only two network architectures and hyperparameter selection procedures are required in our approach. A three-level cascaded system is ultimately built based on the DCNNs and the model combination scheme. Extensive experiments validate the effectiveness of our method and demonstrate comparable accuracy with state-of-the-art methods on the BioID, labeled face parts in the wild, and Helen datasets.
Enhancing Neutron Beam Production with a Convoluted Moderator
Iverson, Erik B; Baxter, David V; Muhrer, Guenter; Ansell, Stuart; Gallmeier, Franz X; Dalgliesh, Robert; Lu, Wei; Kaiser, Helmut
2014-10-01
We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally-enhanced neutron beam source, improving beam effectiveness over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.
Deep Convolutional Neural Networks for large-scale speech tasks.
Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana
2015-04-01
Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations which exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks. Specifically, we focus on how many convolutional layers are needed, what an appropriate number of hidden units is, and what the best pooling strategy is. Second, we investigate how to incorporate speaker-adapted features, which cannot directly be modeled by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allows for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results on these 3 tasks.
Large quantum superpositions of a nanoparticle immersed in superfluid helium
NASA Astrophysics Data System (ADS)
Lychkovskiy, O.
2016-06-01
Preparing and detecting spatially extended quantum superpositions of a massive object comprises an important fundamental test of quantum theory. These quantum states are extremely fragile and tend to quickly decay into incoherent mixtures due to environmental decoherence. Experimental setups considered to date address this threat in a conceptually straightforward way, by eliminating the environment, i.e., by isolating an object in a sufficiently high vacuum. We show that another option exists: decoherence is suppressed in the presence of a strongly interacting environment if this environment is superfluid. Indeed, as long as an object immersed in a pure superfluid at zero temperature moves with a velocity below the critical one, it does not create, absorb, or scatter any excitations of the superfluid. Hence, in this idealized situation the decoherence is absent. In reality the decoherence will be present due to thermal excitations of the superfluid and impurities contaminating the superfluid. We examine various decoherence channels in the superfluid
Superposition, Transition Probabilities and Primitive Observables in Infinite Quantum Systems
NASA Astrophysics Data System (ADS)
Buchholz, Detlev; Størmer, Erling
2015-10-01
The concepts of superposition and of transition probability, familiar from pure states in quantum physics, are extended to locally normal states on funnels of type I∞ factors. Such funnels are used in the description of infinite systems, appearing for example in quantum field theory or in quantum statistical mechanics; their respective constituents are interpreted as algebras of observables localized in an increasing family of nested spacetime regions. Given a generic reference state (expectation functional) on a funnel, e.g. a ground state or a thermal equilibrium state, it is shown that irrespective of the global type of this state all of its excitations, generated by the adjoint action of elements of the funnel, can coherently be superimposed in a meaningful manner. Moreover, these states are the extreme points of their convex hull and as such are analogues of pure states. As further support of this analogy, transition probabilities are defined, complete families of orthogonal states are exhibited and a one-to-one correspondence between the states and families of minimal projections on a Hilbert space is established. The physical interpretation of these quantities relies on a concept of primitive observables. It extends the familiar framework of observable algebras and avoids some counter intuitive features of that setting. Primitive observables admit a consistent statistical interpretation of corresponding measurements and their impact on states is described by a variant of the von Neumann-Lüders projection postulate.
Solar Supergranulation Revealed as a Superposition of Traveling Waves
NASA Technical Reports Server (NTRS)
Gizon, L.; Duvall, T. L., Jr.; Schou, J.; Oegerle, William (Technical Monitor)
2002-01-01
40 years ago two new solar phenomena were described: supergranulation and the five-minute solar oscillations. While the oscillations have since been explained and exploited to determine the properties of the solar interior, the supergranulation has remained unexplained. The supergranules, appearing as convective-like cellular patterns of horizontal outward flow with a characteristic diameter of 30 Mm and an apparent lifetime of 1 day, have puzzling properties, including their apparent superrotation and the minute temperature variations over the cells. Using a 60-day sequence of data from the MDI (Michelson-Doppler Imager) instrument onboard the SOHO (Solar and Heliospheric Observatory) spacecraft, we show that the supergranulation pattern is formed by a superposition of traveling waves with periods of 5-10 days. The wave power is anisotropic with excess power in the direction of rotation and toward the equator, leading to spurious rotation rates and north-south flows as derived from correlation analyses. These newly discovered waves could play an important role in maintaining differential rotation in the upper convection zone by transporting angular momentum towards the equator.
Advanced superposition methods for high speed turbopump vibration analysis
NASA Technical Reports Server (NTRS)
Nielson, C. E.; Campany, A. D.
1981-01-01
The small, high pressure Mark 48 liquid hydrogen turbopump was analyzed and dynamically tested to determine the cause of high speed vibration at an operating speed of 92,400 rpm. This approaches the design point operating speed of 95,000 rpm. The initial dynamic analysis in the design stage and subsequent further analysis of the rotor only dynamics failed to predict the vibration characteristics found during testing. An advanced procedure for dynamics analysis was used in this investigation. The procedure involves developing accurate dynamic models of the rotor assembly and casing assembly by finite element analysis. The dynamically instrumented assemblies are independently rap tested to verify the analytical models. The verified models are then combined by modal superposition techniques to develop a completed turbopump model where dynamic characteristics are determined. The results of the dynamic testing and analysis obtained are presented and methods of moving the high speed vibration characteristics to speeds above the operating range are recommended. Recommendations for use of these advanced dynamic analysis procedures during initial design phases are given.
Gopishankar, N; Bisht, R K
2014-06-01
Purpose: To perform a dosimetric evaluation of the convolution algorithm in Gamma Knife (Perfexion model) using a solid acrylic anthropomorphic phantom. Methods: An in-house developed acrylic phantom with an ion chamber insert was used for this purpose. The middle insert was designed to accept the ion chamber from the top (head) as well as from the bottom (neck) of the phantom, so measurements were made at two different positions. A Leksell frame fixed to the phantom simulated patient treatment. Prior to the dosimetric study, Hounsfield units and the electron density of the acrylic material were incorporated into the calibration curve in the TPS for the convolution algorithm calculation. A CT scan of the phantom with the ion chamber (PTW Freiburg, 0.125cc) was obtained with the following scanning parameters: tube voltage 110kV, slice thickness 1mm, and FOV 240mm. Three separate single-shot plans were generated in the LGP TPS (Version 10.1) with 16mm, 8mm, and 4mm collimators, respectively, for both ion chamber positions. Both TMR10 and convolution-algorithm-based planning (CABP) were used for dose calculation. A dose of 6Gy at the 100% isodose was prescribed at the centre of the ion chamber visible in the CT scan. The phantom with the ion chamber was positioned on the treatment couch for dose delivery. Results: The ion-chamber-measured dose was 5.98Gy for the 16mm collimator shot plan, a deviation of less than 1% for the convolution algorithm, whereas with TMR10 the measured dose was 5.6Gy. For the 8mm and 4mm collimator plans, doses of only 3.86Gy and 2.18Gy, respectively, were delivered at the TPS-calculated time for CABP. Conclusion: CABP is expected to predict the delivery time accurately for all collimators, but significant variation in measured dose was observed for the 8mm and 4mm collimators, which may be due to a collimator size effect. Metal artifacts caused by the pins and frame on the CT scan may also play a role in misinterpreting CABP. The study carried out requires further investigation.
There is no MacWilliams identity for convolutional codes. [transmission gain comparison]
NASA Technical Reports Server (NTRS)
Shearer, J. B.; Mceliece, R. J.
1977-01-01
An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.
Using convolutional decoding to improve time delay and phase estimation in digital communications
Ormesher, Richard C.; Mason, John J.
2010-01-26
The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.
A reciprocal space approach for locating symmetry elements in Patterson superposition maps
Hendrixson, T.
1990-09-21
A method for determining the location and possible existence of symmetry elements in Patterson superposition maps has been developed. A comparison of the original superposition map and a superposition map operated on by the symmetry element gives possible translations to the location of the symmetry element. A reciprocal space approach using structure factor-like quantities obtained from the Fourier transform of the superposition function is then used to determine the "best" location of the symmetry element. Constraints based upon the space group requirements are also used as a check on the locations. The locations of the symmetry elements are used to modify the Fourier transform coefficients of the superposition function to give an approximation of the structure factors, which are then refined using the E-G relation. The analysis of several compounds using this method is presented. Reciprocal space techniques for locating multiple images in the superposition function are also presented, along with methods to remove the effect of multiple images in the Fourier transform coefficients of the superposition map. In addition, crystallographic studies of the extended chain structure of (NHC5H5)SbI4 and of the twinning method of the orthorhombic form of the high-Tc superconductor YBa2Cu3O7-x are presented.
The effect of whitening transformation on pooling operations in convolutional autoencoders
NASA Astrophysics Data System (ADS)
Li, Zuhe; Fan, Yangyu; Liu, Weihua
2015-12-01
Convolutional autoencoders (CAEs) are unsupervised feature extractors for high-resolution images. In the pre-processing step, whitening transformation has been widely adopted to remove redundancy by making adjacent pixels less correlated. Pooling is a biologically inspired operation that reduces the resolution of feature maps and achieves spatial invariance in convolutional neural networks. Pooling methods have mainly been determined empirically in most previous work. Our main purpose is therefore to study the relationship between whitening processing and pooling operations in convolutional autoencoders for image classification. We propose an adaptive pooling approach based on the concept of information entropy to test the effect of whitening on pooling under different conditions. Experimental results on benchmark datasets indicate that the performance of pooling strategies is associated with the distribution of feature activations, which can be affected by whitening processing. This provides guidance for the selection of pooling methods in convolutional autoencoders and other convolutional neural networks.
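As a minimal illustration of the pooling operations discussed above, the sketch below implements non-overlapping max and average pooling over a small feature map. The entropy-based adaptive pooling of the paper is not reproduced here; the feature map values are made up.

```python
def pool(feature_map, size, op):
    """Non-overlapping pooling of a 2-D feature map with a
    size x size window; op is e.g. max (max pooling) or a
    mean function (average pooling)."""
    h, w = len(feature_map), len(feature_map[0])
    pooled = []
    for i in range(0, h, size):
        row = []
        for j in range(0, w, size):
            window = [feature_map[a][b]
                      for a in range(i, min(i + size, h))
                      for b in range(j, min(j + size, w))]
            row.append(op(window))
        pooled.append(row)
    return pooled

fmap = [[1.0, 2.0, 0.0, 1.0],
        [3.0, 4.0, 1.0, 0.0],
        [0.0, 1.0, 2.0, 2.0],
        [1.0, 0.0, 2.0, 6.0]]
mean = lambda xs: sum(xs) / len(xs)
p_max = pool(fmap, 2, max)    # keeps the strongest activation per window
p_avg = pool(fmap, 2, mean)   # keeps the average activation per window
print(p_max)
print(p_avg)
```

Whether max or average pooling works better depends, per the paper, on how the activations are distributed, which is exactly what whitening changes.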
Yukawa, Mitsuyoshi; Miyata, Kazunori; Mizuta, Takahiro; Yonezawa, Hidehiro; Marek, Petr; Filip, Radim; Furusawa, Akira
2013-03-11
We develop an experimental scheme based on a continuous-wave (cw) laser for generating arbitrary superpositions of photon number states. In this experiment, we successfully generate superposition states of zero to three photons, namely advanced versions of superpositions of two and three coherent states. They are fully compatible with developed quantum teleportation and measurement-based quantum operations with cw lasers. Due to the high detection efficiency achieved, we observe, without any loss correction, multiple areas of negativity of the Wigner function, which confirms the strongly nonclassical nature of the generated states. PMID:23482124
The principle of superposition and its application in ground-water hydraulics
Reilly, T.E.; Franke, O.L.; Bennett, G.D.
1984-01-01
The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that solutions to individual problems can be added together to obtain solutions to complex problems. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader. (USGS)
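The additivity the principle asserts can be checked numerically on a small linear ground-water problem. The sketch below solves a steady 1-D confined-aquifer equation for two recharge stresses separately and together; the grid, stresses, and parameter values are made up for illustration.

```python
def solve_heads(recharge, T=1.0, dx=1.0):
    """Steady 1-D confined-aquifer heads with zero head at both ends:
    T * h'' = -R, discretized as (h[i-1] - 2h[i] + h[i+1]) * T/dx^2
    = -R[i] and solved with the Thomas algorithm. The equation is
    linear, so its solutions superpose."""
    n = len(recharge)
    a, b, c = [1.0] * n, [-2.0] * n, [1.0] * n
    d = [-r * dx * dx / T for r in recharge]
    for i in range(1, n):              # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    h = [0.0] * n
    h[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        h[i] = (d[i] - c[i] * h[i + 1]) / b[i]
    return h

r1 = [0.0, 1.0, 0.0, 0.0, 0.0]   # stress 1: recharge near the left end
r2 = [0.0, 0.0, 0.0, 2.0, 0.0]   # stress 2: recharge near the right end
both = [x + y for x, y in zip(r1, r2)]
h_sum = [x + y for x, y in zip(solve_heads(r1), solve_heads(r2))]
h_both = solve_heads(both)
# Superposition: sum of individual solutions equals the combined solution.
print(all(abs(x - y) < 1e-12 for x, y in zip(h_sum, h_both)))
```

The same additivity fails for nonlinear systems (e.g. unconfined flow, where transmissivity depends on head), which is why the report restricts the principle to linear governing equations.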
The principle of superposition and its application in ground-water hydraulics
Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.
1987-01-01
The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that problem solutions can be added together to obtain composite solutions. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader.
Tomography by iterative convolution - Empirical study and application to interferometry
NASA Technical Reports Server (NTRS)
Vest, C. M.; Prikryl, I.
1984-01-01
An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
Convolution properties for certain classes of multivalent functions
NASA Astrophysics Data System (ADS)
Sokól, Janusz; Trojnar-Spelina, Lucyna
2008-01-01
Recently N.E. Cho, O.S. Kwon and H.M. Srivastava [Nak Eun Cho, Oh Sang Kwon, H.M. Srivastava, Inclusion relationships and argument properties for certain subclasses of multivalent functions associated with a family of linear operators, J. Math. Anal. Appl. 292 (2004) 470-483] have introduced the class of multivalent analytic functions and have given a number of results. This class has been defined by means of a special linear operator associated with the Gaussian hypergeometric function. In this paper we have extended some of the previous results and have given other properties of this class. We have made use of differential subordinations and properties of convolution in geometric function theory.
Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks
NASA Astrophysics Data System (ADS)
Zhang, Kaipeng; Zhang, Zhanpeng; Li, Zhifeng; Qiao, Yu
2016-10-01
Face detection and alignment in unconstrained environments are challenging due to various poses, illuminations and occlusions. Recent studies show that deep learning approaches can achieve impressive performance on these two tasks. In this paper, we propose a deep cascaded multi-task framework which exploits the inherent correlation between them to boost their performance. In particular, our framework adopts a cascaded structure with three stages of carefully designed deep convolutional networks that predict face and landmark locations in a coarse-to-fine manner. In addition, in the learning process, we propose a new online hard sample mining strategy that can improve the performance automatically without manual sample selection. Our method achieves superior accuracy over state-of-the-art techniques on the challenging FDDB and WIDER FACE benchmarks for face detection, and the AFLW benchmark for face alignment, while keeping real-time performance.
Deep convolutional neural networks for ATR from SAR imagery
NASA Astrophysics Data System (ADS)
Morgan, David A. E.
2015-05-01
Deep architectures for classification and representation learning have recently attracted significant attention within academia and industry, with many impressive results across a diverse collection of problem sets. In this work we consider the specific application of Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) data from the MSTAR public release data set. The classification performance achieved using a Deep Convolutional Neural Network (CNN) on this data set was found to be competitive with existing methods considered to be state-of-the-art. Unlike most existing algorithms, this approach can learn discriminative feature sets directly from training data instead of requiring pre-specification or pre-selection by a human designer. We show how this property can be exploited to efficiently adapt an existing classifier to recognise a previously unseen target and discuss potential practical applications.
Drug-Drug Interaction Extraction via Convolutional Neural Networks.
Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong
2016-01-01
Drug-drug interaction (DDI) extraction, a typical relation extraction task in natural language processing (NLP), has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNN), a robust machine learning method which requires almost no manually defined features, have exhibited great potential for many NLP tasks. It is therefore worth employing CNN for DDI extraction, which had not previously been investigated. We propose a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that CNN is a good choice for DDI extraction. The CNN-based DDI extraction method achieves an F-score of 69.75%, which outperforms the previous best performing method by 2.75%. PMID:26941831
Enhanced Line Integral Convolution with Flow Feature Detection
NASA Technical Reports Server (NTRS)
Lane, David; Okada, Arthur
1996-01-01
The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain. The method produces a flow texture image based on the input velocity field defined in the domain. Because of the nature of the algorithm, the texture image tends to be blurry. This sometimes makes it difficult to identify boundaries where flow separation and reattachments occur. We present techniques to enhance LIC texture images and use colored texture images to highlight flow separation and reattachment boundaries. Our techniques have been applied to several flow fields defined in 3D curvilinear multi-block grids and scientists have found the results to be very useful.
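The core of the LIC method described above can be sketched in a few lines; this is a simplified illustration (fixed-step Euler streamline tracing, periodic boundaries, nearest-pixel sampling), not the authors' implementation.

```python
# Line integral convolution sketch: each output pixel averages white-noise
# values sampled along the streamline through that pixel, blurring the
# texture along the flow direction.
import numpy as np

def lic(vx, vy, noise, length=10, h=0.5):
    ny, nx = noise.shape
    out = np.zeros_like(noise)
    for j in range(ny):
        for i in range(nx):
            total, count = 0.0, 0
            for sign in (+1.0, -1.0):          # integrate both directions
                x, y = float(i), float(j)
                for _ in range(length):
                    xi, yi = int(round(x)) % nx, int(round(y)) % ny
                    total += noise[yi, xi]
                    count += 1
                    u, v = vx[yi, xi], vy[yi, xi]
                    norm = np.hypot(u, v) or 1.0
                    x += sign * h * u / norm   # Euler step along the flow
                    y += sign * h * v / norm
            out[j, i] = total / count
    return out

rng = np.random.default_rng(0)
noise = rng.random((32, 32))
vx = np.ones((32, 32))           # uniform horizontal flow
vy = np.zeros((32, 32))
tex = lic(vx, vy, noise)         # texture blurred along rows
```

Averaging along streamlines is what produces the characteristic streaked appearance (and the blurriness the paper's enhancements address).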
Plane-wave decomposition by spherical-convolution microphone array
NASA Astrophysics Data System (ADS)
Rafaely, Boaz; Park, Munhum
2001-05-01
Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.
Fast convolution with free-space Green's functions
NASA Astrophysics Data System (ADS)
Vico, Felipe; Greengard, Leslie; Ferrando, Miguel
2016-10-01
We introduce a fast algorithm for computing volume potentials - that is, the convolution of a translation invariant, free-space Green's function with a compactly supported source distribution defined on a uniform grid. The algorithm relies on regularizing the Fourier transform of the Green's function by cutting off the interaction in physical space beyond the domain of interest. This permits the straightforward application of trapezoidal quadrature and the standard FFT, with superalgebraic convergence for smooth data. Moreover, the method can be interpreted as employing a Nyström discretization of the corresponding integral operator, with matrix entries which can be obtained explicitly and rapidly. This is of use in the design of preconditioners or fast direct solvers for a variety of volume integral equations. The method proposed permits the computation of any derivative of the potential, at the cost of an additional FFT.
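The basic grid-convolution step can be illustrated with zero-padded FFTs; the sketch below replaces the paper's Fourier-space regularization with a crude cutoff of the 1/(4πr) singularity at one grid spacing, so it is only a rough stand-in for the actual method.

```python
# Volume potential of the free-space Laplace Green's function on a uniform
# grid via zero-padded FFT convolution (padding avoids circular wraparound).
import numpy as np

def volume_potential(rho, dx):
    """Convolve a cubic source grid with G(r) = 1/(4*pi*r) using FFTs."""
    n = rho.shape[0]
    m = 2 * n                              # zero padding avoids wraparound
    coords = (np.arange(m) - n) * dx       # kernel centered in padded box
    X, Y, Z = np.meshgrid(coords, coords, coords, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + Z**2)
    g = 1.0 / (4.0 * np.pi * np.maximum(r, dx))   # crude cutoff at r < dx
    pad = np.zeros((m, m, m))
    pad[:n, :n, :n] = rho
    # shift kernel so zero lag sits at index 0, then multiply spectra
    pot = np.fft.ifftn(np.fft.fftn(pad) * np.fft.fftn(np.fft.ifftshift(g))).real
    return pot[:n, :n, :n] * dx**3         # quadrature weight per voxel

n, dx = 24, 0.1
rho = np.zeros((n, n, n))
rho[12, 12, 12] = 1.0 / dx**3              # unit point charge in one voxel
pot = volume_potential(rho, dx)
# five cells from the source, the potential should match 1/(4*pi*r), r = 0.5
```

The FFT pair costs O(m³ log m), which is the speed the paper's regularized kernel preserves while also achieving high-order accuracy.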
Invariant Descriptor Learning Using a Siamese Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Chen, L.; Rottensteiner, F.; Heipke, C.
2016-06-01
In this paper we describe learning of a descriptor based on the Siamese Convolutional Neural Network (CNN) architecture and evaluate our results on a standard patch comparison dataset. The descriptor learning architecture is composed of an input module, a Siamese CNN descriptor module and a cost computation module that is based on the L2 Norm. The cost function we use pulls the descriptors of matching patches close to each other in feature space while pushing the descriptors for non-matching pairs away from each other. Compared to related work, we optimize the training parameters by combining a moving average strategy for gradients and Nesterov's Accelerated Gradient. Experiments show that our learned descriptor reaches good performance and achieves state-of-the-art results in terms of the false positive rate at a 95% recall rate on standard benchmark datasets.
Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.
He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian
2015-09-01
Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 × 224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 × faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.
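The fixed-length property of spatial pyramid pooling can be sketched for a single-channel feature map; this is an illustrative simplification of SPP-net's multi-channel pooling layer.

```python
# Spatial pyramid pooling sketch: max-pool over an n x n grid of bins at
# several pyramid levels and concatenate, so the output length depends only
# on the pyramid (here 1 + 4 + 16 = 21 values), not on the input size.
import numpy as np

def spp(feature_map, levels=(1, 2, 4)):
    h, w = feature_map.shape
    pooled = []
    for n in levels:
        ys = np.linspace(0, h, n + 1).astype(int)   # bin edges in rows
        xs = np.linspace(0, w, n + 1).astype(int)   # bin edges in columns
        for i in range(n):
            for j in range(n):
                pooled.append(feature_map[ys[i]:ys[i+1], xs[j]:xs[j+1]].max())
    return np.array(pooled)

v1 = spp(np.random.default_rng(0).random((13, 17)))  # small "image"
v2 = spp(np.random.default_rng(1).random((40, 25)))  # different size
```

Both vectors have the same length, which is what lets fully connected layers follow convolutional features computed from images of arbitrary size.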
Battista, J J; Sharpe, M B
1992-12-01
The objective of radiation therapy is to concentrate a prescribed radiation dose accurately within a target volume in the patient. Major advances in imaging technology have greatly improved our ability to plan radiation treatments in three dimensions (3D) and to verify the treatment geometrically, but there is a concomitant need to improve dosimetric accuracy. It has been recommended that radiation doses should be computed with an accuracy of 3% within the target volume and in radiosensitive normal tissues. We review the rationale behind this recommendation, and describe a new generation of 3D dose algorithms which are capable of achieving this goal. A true 3D dose calculation tracks primary and scattered radiations in 3D space while accounting for tissue inhomogeneities. In the past, dose distributions have been computed in a 2D transverse slice with the assumption that the anatomy of the patient does not change abruptly in nearby slices. We demonstrate the importance of computing 3D scatter contributions to dose from photons and electrons correctly, and show the magnitude of dose errors caused by using traditional 2D methods. The Monte Carlo technique is the most general and rigorous approach since individual primary and secondary particle tracks are simulated. However, this approach is too time-consuming for clinical treatment planning. We review an approach that is based on the superposition principle and achieves a reasonable compromise between the speed of computation and accuracy in dose. In this approach, dose deposition is separated into two steps. Firstly, the attenuation of incident photons interacting in the absorber is computed to determine the total energy released in the material (TERMA). This quantity is treated as an impulse at each irradiated point. Secondly, the transport of energy by scattered photons and electrons is described by a point dose spread kernel. The dose distribution is the superposition of the kernels, weighted by the magnitude of the TERMA at each point.
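The two-step TERMA-plus-kernel model described above can be sketched in one dimension; all numbers below (attenuation coefficient, kernel shape) are illustrative toy values, not clinical beam data.

```python
# 1-D convolution-superposition sketch: step 1 computes TERMA from
# exponential photon attenuation; step 2 superposes a forward-peaked
# dose-spread kernel weighted by the TERMA at each depth.
import numpy as np

dz = 0.5                                  # depth step (cm)
depth = np.arange(0, 30, dz)
mu = 0.05                                 # toy attenuation coefficient (1/cm)
fluence = np.exp(-mu * depth)             # primary photon fluence
terma = mu * fluence                      # total energy released per mass

# toy dose-spread kernel: energy released at one depth is deposited
# downstream over a few cm by secondary electrons
k = np.arange(0, 3, dz)
kernel = np.exp(-k / 1.0)
kernel /= kernel.sum()                    # normalized kernel conserves energy

dose = np.convolve(terma, kernel)[: len(terma)]
```

Even this toy version reproduces the dose build-up region: the dose maximum sits below the surface, while TERMA peaks at the surface where the fluence is highest.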
Probing the conductance superposition law in single-molecule circuits with parallel paths.
Vazquez, H; Skouta, R; Schneebeli, S; Kamenetska, M; Breslow, R; Venkataraman, L; Hybertsen, M S
2012-10-01
According to Kirchhoff's circuit laws, the net conductance of two parallel components in an electronic circuit is the sum of the individual conductances. However, when the circuit dimensions are comparable to the electronic phase coherence length, quantum interference effects play a critical role, as exemplified by the Aharonov-Bohm effect in metal rings. At the molecular scale, interference effects dramatically reduce the electron transfer rate through a meta-connected benzene ring when compared with a para-connected benzene ring. For longer conjugated and cross-conjugated molecules, destructive interference effects have been observed in the tunnelling conductance through molecular junctions. Here, we investigate the conductance superposition law for parallel components in single-molecule circuits, particularly the role of interference. We synthesize a series of molecular systems that contain either one backbone or two backbones in parallel, bonded together cofacially by a common linker on each end. Single-molecule conductance measurements and transport calculations based on density functional theory show that the conductance of a double-backbone molecular junction can be more than twice that of a single-backbone junction, providing clear evidence for constructive interference.
Borzdov
2000-04-01
Vector plane-wave superpositions defined by a given set of orthonormal scalar functions on a two- or three-dimensional manifold (the beam manifold) are treated. We present a technique for composing orthonormal beams and some other specific types of fields such as three-dimensional standing waves, and moving and evolving whirls. It can be used for any linear fields, in particular, electromagnetic fields in complex media and elastic fields in crystals. For electromagnetic waves in an isotropic medium or free space, unique families of exact solutions of Maxwell's equations are obtained. The solutions are illustrated by calculating fields, energy densities, and energy fluxes of beams defined by the spherical harmonics. It is shown that the obtained results can be used for a transition from the plane-wave approximation to more accurate models of real incident beams in free-space techniques for characterizing complex media. A mathematical formalism convenient for the treatment of various beams defined by the spherical harmonics is presented.
Magnetospheric ULF Waves with an Increasing Amplitude as a Superposition of Two Wave Modes
NASA Astrophysics Data System (ADS)
Shen, Xiaochen; Zong, Qiugang; Shi, Quanqi; Tian, Anmin; Sun, Weijie; Wang, Yongfu; Zhou, Xuzhi; Fu, Suiyan; Hartinger, Michael; Angelopoulos, Vassilis
2015-04-01
Ultra-low frequency (ULF) waves play an important role in transferring energy by buffeting the magnetosphere with solar wind pressure impulses. The amplitudes of magnetospheric ULF waves, which are induced by solar wind dynamic pressure enhancements or shocks, are thought to damp within half a wave cycle or one wave cycle. We report in situ observations of magnetospheric ULF waves, induced by solar wind dynamic pressure impulses, with increasing amplitudes. We have found six ULF wave events, induced by solar wind dynamic pressure enhancements, with slow but clear wave amplitude increase. Over three or four wave cycles, the amplitudes of the ion velocities and electric field of these waves increased continuously by factors of 1.3 to 4.4. Two significant events were selected to further study the characteristics of these ULF waves. We have found that the wave amplitude growth is mainly contributed by the toroidal mode wave. We suggest that the wave amplitude increase in the radial electric field is caused by the superposition of two wave modes, a standing wave excited by the solar wind dynamic impulse and a propagating compressional wave. When superposed, the two wave modes fit observations, as does a calculation that superposes electric fields from two wave sources.
NASA Astrophysics Data System (ADS)
Ismail Ozkaya, Sait
2014-03-01
An Excel Visual Basic program, SUPERPOSE, is presented to predict the distribution, relative size and strike of tensile and shear fractures on anticlinal structures. The program is based on the concept of stress superposition: the addition of curvature-related local tensile stress and regional far-field stress. The method accurately predicts fractures on many Middle East oil fields that were formed under a strike-slip regime as duplexes, flower structures or inverted structures. The program operates on the Excel platform; it reads the parameters and structural grid data from an Excel template and writes the results to the same template. The program has two routines to import structural grid data in the Eclipse and Zmap formats. The platform of SUPERPOSE is a single-layer structural grid of a given cell size (e.g. 50×50 m). In the final output, a single tensile or two conjugate shear fractures are placed in each cell if fracturing criteria are satisfied; otherwise the cell is left blank. The strike of the representative fracture(s) is calculated exactly, whereas the length is an index of fracture porosity (fracture density×length×aperture) within that cell.
Design and Evaluation of a Research-Based Teaching Sequence: The Superposition of Electric Field.
ERIC Educational Resources Information Center
Viennot, L.; Rainson, S.
1999-01-01
Illustrates an approach to research-based teaching strategies and their evaluation. Addresses a teaching sequence on the superposition of electric fields implemented at the college level in an institutional framework subject to severe constraints. Contains 28 references. (DDR)
Superposition states of ultracold bosons in rotating rings with a realistic potential barrier
Nunnenkamp, Andreas; Rey, Ana Maria; Burnett, Keith
2011-11-15
In a recent paper [Phys. Rev. A 82, 063623 (2010)] Hallwood et al. argued that it is feasible to create large superposition states with strongly interacting bosons in rotating rings. Here we investigate in detail how the superposition states in rotating-ring lattices depend on interaction strength and barrier height. With respect to the latter we find a trade-off between energy gap and quality of the superposition state. Most importantly, we go beyond the δ-function approximation for the barrier potential and show that the energy gap decreases exponentially with the number of particles for weak barrier potentials of finite width. These are crucial issues in the design of experiments to realize superposition states.
NASA Astrophysics Data System (ADS)
Ward, W. E.; Das, U.; Du, J.
2014-12-01
It is now generally accepted that the superposition of tidal components results in geographic variations in their observed amplitudes in the mesosphere and lower thermosphere (MLT). This superposition also has implications for the dynamical and convective stability of the atmosphere at these heights. Spatial variations in the amplitude of the temperature and vertical displacement also have consequences for chemistry and chemical heating in this region. In this paper, these superposition effects are explored using diagnosed fields from the extended Canadian Middle Atmosphere Model and CMAM30. The nature and distribution of wind and temperature variability, the associated instabilities and chemical heating are discussed. Superposition effects have consequences for tidal dissipation and gravity wave propagation in the MLT. They also may be a cause for some of the inversion layers observed in this region of the atmosphere.
Chinea, F.
1983-01-24
Vector Bäcklund transformations which relate solutions of the vacuum Einstein equations having two commuting Killing fields are introduced. Such transformations generalize those found by Pohlmeyer in connection with the nonlinear sigma model. A simple algebraic superposition principle, which permits the combination of Bäcklund transforms in order to get new solutions, is given. The superposition preserves the asymptotic flatness condition, and the whole scheme is manifestly O(2,1) invariant.
Spagnolo, Nicolo; Sciarrino, Fabio; De Martini, Francesco
2010-09-15
We show that the quantum states generated by universal optimal quantum cloning of a single photon represent a universal set of quantum superpositions resilient to decoherence. We adopt the Bures distance as a tool to investigate the persistence of quantum coherence of these quantum states. According to this analysis, the process of universal cloning realizes a class of quantum superpositions that exhibits a covariance property in lossy configuration over the complete set of polarization states in the Bloch sphere.
Plasma evolution and dynamics in high-power vacuum-transmission-line post-hole convolutes
NASA Astrophysics Data System (ADS)
Rose, D. V.; Welch, D. R.; Hughes, T. P.; Clark, R. E.; Stygar, W. A.
2008-06-01
Vacuum-post-hole convolutes are used in pulsed high-power generators to join several magnetically insulated transmission lines (MITLs) in parallel. Such convolutes add the output currents of the MITLs, and deliver the combined current to a single MITL that, in turn, delivers the current to a load. Magnetic insulation of electron flow, established upstream of the convolute region, is lost at the convolute due to symmetry breaking and the formation of magnetic nulls, resulting in some current losses. At very high-power operating levels and long pulse durations, the expansion of electrode plasmas into the MITL of such devices is considered likely. This work examines the evolution and dynamics of cathode plasmas in the double-post-hole convolutes used on the Z accelerator [R. B. Spielman, Phys. Plasmas 5, 2105 (1998); doi:10.1063/1.872881]. Three-dimensional particle-in-cell (PIC) simulations that model the entire radial extent of the Z accelerator convolute—from the parallel-plate transmission-line power feeds to the z-pinch load region—are used to determine electron losses in the convolute. The results of the simulations demonstrate that significant current losses (1.5 MA out of a total system current of 18.5 MA), which are comparable to the losses observed experimentally, could be caused by the expansion of cathode plasmas in the convolute regions.
NASA Astrophysics Data System (ADS)
Muthukrishnan, A.; Sangaranarayanan, M. V.; Boyarskiy, V. P.; Boyarskaya, I. A.
2010-04-01
The reductive cleavage of carbon-chlorine bonds in 2,4-dichlorobiphenyl (PCB-7) is investigated using the convolution potential sweep voltammetry and quantum chemical calculations. The potential dependence of the logarithmic rate constant is non-linear which indicates the validity of Marcus-Hush theory of quadratic activation-driving force relationship. The ortho-chlorine of the 2,4-dichlorobiphenyl gets reduced first as inferred from the quantum chemical calculations and bulk electrolysis. The standard reduction potentials pertaining to the ortho-chlorine of 2,4-dichlorobiphenyl and that corresponding to para chlorine of the 4-chlorobiphenyl have been estimated.
Convoluted nozzle design for the RL10 derivative 2B engine
NASA Technical Reports Server (NTRS)
1985-01-01
The convoluted nozzle is a conventional refractory metal nozzle extension that is formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas-driven actuators. For spacecraft applications the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two-position nozzle were studied, including a hydrogen dump-cooled metal nozzle and radiation-cooled nozzles of refractory metal and carbon/carbon composite construction, respectively.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.
2014-01-01
This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.
NASA Astrophysics Data System (ADS)
Zamoum, R.; Lavagna, M.; Crépieux, A.
2016-06-01
We calculate the nonsymmetrized current noise in a quantum dot connected to two reservoirs by using the nonequilibrium Green function technique. We show that both the current autocorrelator (inside a single reservoir) and the current cross-correlator (between the two reservoirs) are expressed in terms of transmission amplitude and coefficient through the barriers. We identify the different energy-transfer processes involved in each contribution to the autocorrelator, and we highlight the fact that when there are several physical processes, the contribution results from a coherent superposition of scattering paths. Varying the gate and bias voltages, we discuss the profile of the differential Fano factor in light of recent experiments, and we identify the conditions for having a distinct value for the autocorrelator in the left and right reservoirs.
Toward an optimal convolutional neural network for traffic sign recognition
NASA Astrophysics Data System (ADS)
Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec
2015-12-01
Convolutional Neural Networks (CNNs) beat human performance in the German Traffic Sign Benchmark competition. Both the winner and the runner-up teams trained CNNs to recognize 43 traffic signs. However, neither network is computationally efficient, since both have many free parameters and use computationally expensive activation functions. In this paper, we propose a new architecture that reduces the number of parameters by 27% and 22% compared with the two networks. Furthermore, our network uses the Leaky Rectified Linear Unit (ReLU) as the activation function, which needs only a few operations to produce its result. Specifically, compared with the hyperbolic tangent and rectified sigmoid activation functions utilized in the two networks, Leaky ReLU needs only one multiplication operation, which makes it computationally much more efficient than the other two functions. Our experiments on the German Traffic Sign Benchmark dataset show a 0.6% improvement on the best reported classification accuracy, while reducing the overall number of parameters by 85% compared with the winning network in the competition.
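The cost argument for Leaky ReLU can be made concrete: the negative branch is a single multiply, whereas tanh requires transcendental evaluation. The 0.01 slope below is a common choice, not necessarily the value used in the paper.

```python
# Leaky ReLU sketch: identity for non-negative inputs, a single
# multiplication by a small slope for negative inputs.
import numpy as np

def leaky_relu(x, slope=0.01):
    return np.where(x >= 0, x, slope * x)

x = np.array([-2.0, -0.5, 0.0, 3.0])
y = leaky_relu(x)          # -> [-0.02, -0.005, 0.0, 3.0]
```

Unlike the plain ReLU, the small negative slope keeps a nonzero gradient for negative inputs, which also helps training.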
Innervation of the renal proximal convoluted tubule of the rat
Barajas, L.; Powers, K. )
1989-12-01
Experimental data suggest the proximal tubule as a major site of neurogenic influence on tubular function. The functional and anatomical axial heterogeneity of the proximal tubule prompted this study of the distribution of innervation sites along the early, mid, and late proximal convoluted tubule (PCT) of the rat. Serial section autoradiograms, with tritiated norepinephrine serving as a marker for monoaminergic nerves, were used in this study. Freehand clay models and graphic reconstructions of proximal tubules permitted a rough estimation of the location of the innervation sites along the PCT. In the subcapsular nephrons, the early PCT (first third) was devoid of innervation sites with most of the innervation occurring in the mid (middle third) and in the late (last third) PCT. Innervation sites were found in the early PCT in nephrons located deeper in the cortex. In juxtamedullary nephrons, innervation sites could be observed on the PCT as it left the glomerulus. This gradient of PCT innervation can be explained by the different tubulovascular relationships of nephrons at different levels of the cortex. The absence of innervation sites in the early PCT of subcapsular nephrons suggests that any influence of the renal nerves on the early PCT might be due to an effect of neurotransmitter released from renal nerves reaching the early PCT via the interstitium and/or capillaries.
Convolution neural-network-based detection of lung structures
NASA Astrophysics Data System (ADS)
Hasegawa, Akira; Lo, Shih-Chung B.; Freedman, Matthew T.; Mun, Seong K.
1994-05-01
Chest radiography is one of the most fundamental and widely used techniques in diagnostic imaging. Nowadays, with the advent of digital radiology, digital medical image processing techniques for chest radiographs have attracted considerable attention, and several studies on computer-aided diagnosis (CADx) as well as on conventional image processing techniques for chest radiographs have been reported. In the automatic diagnostic process for chest radiographs, it is important to outline the areas of the lungs, the heart, and the diaphragm. This is because the original chest radiograph is composed of important anatomic structures and, without knowing the exact positions of the organs, automatic diagnosis may produce unexpected detections. The automatic extraction of an anatomical structure from digital chest radiographs can be a useful tool for (1) the evaluation of heart size, (2) automatic detection of interstitial lung diseases, (3) automatic detection of lung nodules, and (4) data compression, etc. Based on the clearly defined boundaries of the heart area, rib spaces, rib positions, and rib cage extracted, one should be able to use this information to facilitate the tasks of CADx on chest radiographs. In this paper, we present an automatic scheme for the detection of the lung field from chest radiographs using a shift-invariant convolution neural network. A novel algorithm for smoothing the boundaries of the lungs is also presented.
Synthesising Primary Reflections by Marchenko Redatuming and Convolutional Interferometry
NASA Astrophysics Data System (ADS)
Curtis, A.
2015-12-01
Standard active-source seismic processing and imaging steps such as velocity analysis and reverse time migration usually provide best results when all reflected waves in the input data are primaries (waves that reflect only once). Multiples (recorded waves that reflect multiple times) represent a source of coherent noise in data that must be suppressed to avoid imaging artefacts. Consequently, multiple-removal methods have been a principal direction of active-source seismic research for decades. We describe a new method to estimate primaries directly, which obviates the need for multiple removal. Primaries are constructed within convolutional interferometry by combining first-arriving events of up-going and direct-wave down-going Green's functions to virtual receivers in the subsurface. The required up-going wavefields at virtual receivers along discrete subsurface boundaries can be constructed using Marchenko redatuming. Crucially, this is possible without detailed models of the Earth's subsurface velocity structure: similarly to most migration techniques, the method requires only surface reflection data and estimates of direct (non-reflected) arrivals between subsurface sources and the acquisition surface. The method is demonstrated on a stratified synclinal model. It is shown both to improve reverse time migration compared to standard methods, and to be particularly robust against errors in the reference velocity model used.
Cell osmotic water permeability of isolated rabbit proximal convoluted tubules.
Carpi-Medina, P; González, E; Whittembury, G
1983-05-01
Cell osmotic water permeability, Pcos, of the peritubular aspect of the proximal convoluted tubule (PCT) was measured from the time course of cell volume changes subsequent to the sudden imposition of an osmotic gradient, delta Cio, across the cell membrane of PCT that had been dissected and mounted in a chamber. The possibilities of artifact were minimized. The bath was vigorously stirred, the solutions could be 95% changed within 0.1 s, and small osmotic gradients (10–20 mosM) were used. Thus, the osmotically induced water flow was a linear function of delta Cio and the effect of the 70-µm-thick unstirred layers was negligible. In addition, data were extrapolated to delta Cio = 0. Pcos for PCT was 41.6 (±3.5) × 10⁻⁴ cm³ s⁻¹ osM⁻¹ per cm² of peritubular basal area. The standing gradient osmotic theory for transcellular osmosis is incompatible with this value. Published values for Pcos of PST are 25.1 × 10⁻⁴, and for the transepithelial permeability Peos values are 64 × 10⁻⁴ for PCT and 94 × 10⁻⁴ for PST, in the same units. These results indicate that there is room for paracellular water flow in both nephron segments and that the magnitude of the transcellular and paracellular water flows may vary from one segment of the proximal tubule to another. PMID:6846543
Adapting line integral convolution for fabricating artistic virtual environment
NASA Astrophysics Data System (ADS)
Lee, Jiunn-Shyan; Wang, Chung-Ming
2003-04-01
Vector fields occur not only extensively in scientific applications but also in treasured art such as sculptures and paintings. Artists depict our natural environment while stressing valued directional features in addition to color and shape information. Line integral convolution (LIC), developed for imaging vector fields in scientific visualization, has the potential to produce directional imagery. In this paper we present several techniques that exploit LIC to generate impressionistic images forming an artistic virtual environment. We take advantage of the directional information given by a photograph and incorporate several refinements, including a non-photorealistic shading technique and statistical detail control. In particular, the non-photorealistic shading technique blends cool and warm colors into the photograph to imitate artists' painting conventions. We also adopt a statistical technique that controls the integral length according to image variance in order to preserve detail. Furthermore, we propose a method for generating a series of mip-maps, which reveal constant stroke widths under multi-resolution viewing and achieve frame coherence in an interactive walkthrough system. The experimental results demonstrate both satisfying emulation and efficient computation; consequently, the proposed technique successfully fabricates a wide range of non-photorealistic rendering (NPR) applications, such as interactive virtual environments with artistic perception.
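The core LIC operation described above, averaging a noise texture along the streamlines of a vector field, can be sketched as follows. This is a minimal fixed-step tracer with a box filter, purely illustrative; the paper's refinements (variance-controlled integral length, cool/warm shading, mip-mapped strokes) are not reproduced here.

```python
import numpy as np

def lic(vector_field, noise, length=8):
    """Minimal line integral convolution: each output pixel is the average of
    a white-noise texture sampled along the local streamline, traced forward
    and backward with unit steps, using toroidal wraparound at the borders."""
    h, w = noise.shape
    out = np.empty_like(noise)
    for y in range(h):
        for x in range(w):
            acc, n = noise[y, x], 1            # include the seed pixel itself
            for sign in (1.0, -1.0):           # trace both directions
                px, py = float(x), float(y)
                for _ in range(length):
                    vy, vx = vector_field[int(py) % h, int(px) % w]
                    norm = np.hypot(vx, vy)
                    if norm < 1e-9:            # stagnation point: stop tracing
                        break
                    px += sign * vx / norm
                    py += sign * vy / norm
                    acc += noise[int(py) % h, int(px) % w]
                    n += 1
            out[y, x] = acc / n
    return out
```

Feeding in the gradient directions extracted from a photograph instead of a simulated field gives the impressionistic stroke effect the paper builds on.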
Multi-modal vertebrae recognition using Transformed Deep Convolution Network.
Cai, Yunliang; Landis, Mark; Laidley, David T; Kornecki, Anat; Lum, Andrea; Li, Shuo
2016-07-01
Automatic vertebra recognition, including the identification of vertebra locations and names in multiple image modalities, is in high demand in spinal clinical diagnosis, where large amounts of imaging data from various modalities are frequently and interchangeably used. However, recognition is challenging due to variations in MR/CT appearance and in the shape and pose of the vertebrae. In this paper, we propose a method for multi-modal vertebra recognition using a novel deep learning architecture called the Transformed Deep Convolution Network (TDCN). This new architecture fuses image features from different modalities without supervision and automatically rectifies the pose of the vertebrae. The fusion of MR and CT image features improves the discriminativity of the feature representation and enhances the invariance of the vertebra pattern, which allows us to automatically process images of different contrasts, resolutions, and protocols, even with different sizes and orientations. The feature fusion and pose rectification are naturally incorporated in a multi-layer deep learning network. Experimental results show that our method outperforms existing detection methods and provides fully automatic location, naming, and pose recognition for routine clinical practice. PMID:27104497
A deep convolutional neural network for recognizing foods
NASA Astrophysics Data System (ADS)
Jahani Heravi, Elnaz; Habibi Aghdam, Hamed; Puig, Domenec
2015-12-01
Controlling food intake is an efficient way for individuals to tackle the obesity problem seen in countries worldwide. This is achievable by developing a smartphone application that is able to recognize foods and compute their calories. State-of-the-art methods are chiefly based on hand-crafted feature extraction methods such as HOG and Gabor. Recent advances in large-scale object recognition datasets such as ImageNet have revealed that deep Convolutional Neural Networks (CNNs) possess more representational power than hand-crafted features. The main challenge with CNNs is to find the appropriate architecture for each problem. In this paper, we propose a deep CNN which consists of 769,988 parameters. Our experiments show that the proposed CNN outperforms the state-of-the-art methods and improves the best result of traditional methods by 17%. Moreover, using an ensemble of two CNNs that have been trained two separate times, we are able to improve the classification performance by 21.5%.
Convolutional networks for fast, energy-efficient neuromorphic computing
Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.
2016-01-01
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489
Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.
Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo
2016-01-01
Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681
Accelerating Very Deep Convolutional Networks for Classification and Detection.
Zhang, Xiangyu; Zou, Jianhua; He, Kaiming; Sun, Jian
2016-10-01
This paper aims to accelerate the test-time computation of convolutional neural networks (CNNs), especially very deep CNNs [1] that have substantially impacted the computer vision community. Unlike previous methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We develop an effective solution to the resulting nonlinear optimization problem without the need of stochastic gradient descent (SGD). More importantly, while previous methods mainly focus on optimizing one or two layers, our nonlinear method enables an asymmetric reconstruction that reduces the rapidly accumulated error when multiple (e.g., ≥10) layers are approximated. For the widely used very deep VGG-16 model [1], our method achieves a whole-model speedup of 4× with merely a 0.3 percent increase of top-5 error in ImageNet classification. Our 4× accelerated VGG-16 model also shows a graceful accuracy degradation for object detection when plugged into the Fast R-CNN detector [2]. PMID:26599615
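The linear core of such filter-approximation schemes can be sketched with an SVD factorization of a layer's filter bank; the paper's distinctive contribution, the asymmetric reconstruction that accounts for the nonlinear (ReLU) responses, is deliberately not reproduced in this illustrative linear sketch.

```python
import numpy as np

def low_rank_filterbank(W, d):
    """Replace the n filters (rows of W, each flattened to length m) with d
    shared basis filters plus an n-by-d mixing matrix, cutting per-position
    cost from n*m to d*m + n*d multiply-adds. Truncated SVD gives the optimal
    rank-d factorization in the Frobenius norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    mixing = U[:, :d] * s[:d]   # (n, d) per-filter coefficients
    basis = Vt[:d]              # (d, m) shared basis filters
    return mixing, basis
```

Reconstruction error ||W - mixing @ basis|| decreases monotonically as d grows, so d trades speedup against accuracy, which is the knob the whole-model acceleration tunes layer by layer.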
Predicting Semantic Descriptions from Medical Images with Convolutional Neural Networks.
Schlegl, Thomas; Waldstein, Sebastian M; Vogl, Wolf-Dieter; Schmidt-Erfurth, Ursula; Langs, Georg
2015-01-01
Learning representative computational models from medical imaging data requires large training data sets. Often, voxel-level annotation is infeasible for sufficient amounts of data. An alternative to manual annotation is to use the enormous amount of knowledge encoded in imaging data and corresponding reports generated during clinical routine. Weakly supervised learning approaches can link volume-level labels to image content but suffer from the typical label distributions in medical imaging data, where only a small part consists of clinically relevant abnormal structures. In this paper we propose to use a semantic representation of clinical reports as a learning target that is predicted from imaging data by a convolutional neural network. We demonstrate how we can learn accurate voxel-level classifiers based on weak volume-level semantic descriptions on a set of 157 optical coherence tomography (OCT) volumes. We specifically show how semantic information increases classification accuracy for intraretinal cystoid fluid (IRC), subretinal fluid (SRF), and normal retinal tissue, and how the learning algorithm links semantic concepts to image content and geometry.
Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields
NASA Astrophysics Data System (ADS)
Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo
2016-01-01
Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility.
Deep convolutional neural networks for classifying GPR B-scans
NASA Astrophysics Data System (ADS)
Besaw, Lance E.; Stimac, Philip J.
2015-05-01
Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on the field operatives. Recent advancements in machine learning, specifically deep learning artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antennae, handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.
Method for Viterbi decoding of large constraint length convolutional codes
NASA Astrophysics Data System (ADS)
Hsu, In-Shek; Truong, Trieu-Kie; Reed, Irving S.; Jing, Sun
1988-05-01
A new method of Viterbi decoding of convolutional codes lends itself to a pipelined VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a chosen multiplier and K is the constraint length. The survivor path at the end of each NK interval is selected from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the NK interval, to read out the stored branch metrics of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message in order to select the path to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
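The survivor-storage and trace-back steps described above can be sketched with a small textbook code. The rate-1/2 (7,5) octal code below is an illustrative assumption (the method targets large constraint lengths), and for brevity a single trace-back is done at the end of the message rather than every NK time units, which is where the memory saving of the block-wise scheme comes from.

```python
def conv_encode(bits, K=3):
    """Rate-1/2 convolutional encoder, generators (7,5) octal, zero-flushed."""
    G = [0b111, 0b101]
    state, out = 0, []
    for b in bits + [0] * (K - 1):           # K-1 zeros flush the register
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(rx, K=3):
    """Hard-decision Viterbi decoder with trace-back through stored survivors."""
    G = [0b111, 0b101]
    n_states, INF = 1 << (K - 1), 10**9
    metrics = [0] + [INF] * (n_states - 1)
    history = []                             # survivor (predecessor, bit) per state
    for i in range(0, len(rx), 2):
        r = rx[i:i + 2]
        new_metrics = [INF] * n_states
        back = [(0, 0)] * n_states
        for s in range(n_states):
            if metrics[s] >= INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                ns = reg >> 1                # next state
                d = sum((bin(reg & g).count("1") & 1) != y for g, y in zip(G, r))
                if metrics[s] + d < new_metrics[ns]:
                    new_metrics[ns] = metrics[s] + d
                    back[ns] = (s, b)
        metrics = new_metrics
        history.append(back)
    s = metrics.index(min(metrics))          # flushing drives the best path to state 0
    bits = []
    for back in reversed(history):           # trace back to the start of the path
        s, b = back[s]
        bits.append(b)
    bits.reverse()
    return bits[:len(bits) - (K - 1)]        # drop the flush bits
```

Bounding `history` to NK entries and emitting a block of decisions at each boundary turns this into the fixed-memory scheme of the record above, at the cost of strict maximum-likelihood optimality.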
Designing the optimal convolution kernel for modeling the motion blur
NASA Astrophysics Data System (ADS)
Jelinek, Jan
2011-06-01
Motion blur acts on an image like a two-dimensional low-pass filter, whose spatial frequency characteristic depends both on the trajectory of the relative motion between the scene and the camera and on the velocity vector variation along it. When motion during exposure is permitted, the conventional, static notions of both the image exposure and the scene-to-image mapping become unsuitable and must be revised to accommodate the image formation dynamics. This paper develops an exact image formation model for arbitrary object-camera relative motion with arbitrary velocity profiles. Moreover, for any motion the camera may operate in either continuous or flutter shutter exposure mode. Its result is a convolution kernel, which is optimally designed for both the given motion and sensor array geometry, and hence permits the most accurate computational undoing of the blurring effects for the given camera required in forensic and high security applications. The theory has been implemented and a few examples are shown in the paper.
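The basic construction of a blur kernel from a motion trajectory can be sketched as follows. Each trajectory sample deposits exposure at its pixel offset, bilinearly split over the four nearest kernel cells. This is a simplified illustration only, not the paper's sensor-geometry-optimal design or its flutter-shutter extension.

```python
import numpy as np

def motion_blur_kernel(trajectory, size=15):
    """Accumulate a convolution kernel from a sampled relative-motion
    trajectory, given as (dx, dy) pixel offsets over the exposure; uniform
    sample spacing in time models a continuously open shutter."""
    k = np.zeros((size, size))
    c = size // 2
    for dx, dy in trajectory:
        x, y = c + dx, c + dy
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        fx, fy = x - x0, y - y0
        # bilinear splat onto the four neighboring cells
        for i, j, wgt in [(y0, x0, (1 - fx) * (1 - fy)),
                          (y0, x0 + 1, fx * (1 - fy)),
                          (y0 + 1, x0, (1 - fx) * fy),
                          (y0 + 1, x0 + 1, fx * fy)]:
            if 0 <= i < size and 0 <= j < size:
                k[i, j] += wgt
    return k / k.sum()                # unit gain: blur preserves mean intensity
```

Dropping samples from the trajectory list emulates a flutter-shutter exposure pattern, which reshapes the kernel's frequency response.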
Method for Viterbi decoding of large constraint length convolutional codes
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)
1988-01-01
A new method of Viterbi decoding of convolutional codes lends itself to a pipelined VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a chosen multiplier and K is the constraint length. The survivor path at the end of each NK interval is selected from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the NK interval, to read out the stored branch metrics of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message in order to select the path to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
A convolution model for computing the far-field directivity of a parametric loudspeaker array.
Shi, Chuang; Kajikawa, Yoshinobu
2015-02-01
This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), so that a steerable parametric loudspeaker can be implemented when phased-array techniques are applied. The convolution of the product directivity with Westervelt's directivity is proposed, replacing the past practice of using the product directivity alone. The computed directivity of a PLA using the proposed convolution model shows significantly improved agreement with measured directivity at negligible computational cost.
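The convolution model can be sketched numerically as follows. The primary beams are modeled here as line-aperture (sinc) patterns and the absorption-limited Westervelt term as a simple Lorentzian-like falloff; every numeric value and the specific pattern shapes are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def pla_directivity(theta, k1=1800.0, k2=1740.0, L=0.1, alpha0=12.0):
    """Far-field PLA directivity as the convolution of the product directivity
    (product of the two primary-beam patterns) with Westervelt's directivity,
    evaluated on a uniform angle grid theta and normalized to its peak."""
    def line_pattern(k):
        x = k * L * np.sin(theta) / (2.0 * np.pi)
        return np.abs(np.sinc(x))                  # np.sinc(x) = sin(pi x)/(pi x)
    product = line_pattern(k1) * line_pattern(k2)  # product directivity
    ks = k1 - k2                                   # difference-frequency wavenumber
    westervelt = 1.0 / np.sqrt(1.0 + (ks * np.sin(theta / 2.0) ** 2 / alpha0) ** 2)
    combined = np.convolve(product, westervelt, mode="same")
    return combined / combined.max()
```

The single `np.convolve` call is the "negligible computational cost" the abstract refers to: one 1-D convolution per directivity evaluation.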
Stacchiotti, Alessandra; Favero, Gaia; Giugno, Lorena; Lavazza, Antonio; Reiter, Russel J; Rodella, Luigi Fabrizio; Rezzani, Rita
2014-01-01
Obesity is a common and complex health problem, which impacts crucial organs; it is also considered an independent risk factor for chronic kidney disease. Few studies have analyzed the consequences of obesity in the renal proximal convoluted tubules, which are the major tubules involved in reabsorptive processes. For optimal performance of the kidney, energy is primarily provided by mitochondria. Melatonin, an indoleamine and antioxidant, has been identified in mitochondria, and there is considerable evidence regarding its essential role in the prevention of oxidative mitochondrial damage. In this study we evaluated the mechanism(s) of mitochondrial alterations in an animal model of obesity (ob/ob mice) and describe the beneficial effects of melatonin treatment on mitochondrial morphology and dynamics as influenced by mitofusin-2 and the intrinsic apoptotic cascade. Melatonin dissolved in 1% ethanol was added to the drinking water from postnatal weeks 5 to 13; the calculated dose of melatonin intake was 100 mg/kg body weight/day. Compared to control mice, obesity-related morphological alterations were apparent in the proximal tubules, which contained round mitochondria with irregular, short cristae and cells with an elevated apoptotic index. Melatonin supplementation in obese mice changed mitochondrial shape and cristae organization in the proximal tubules and enhanced mitofusin-2 expression, which in turn modulated the progression of the mitochondria-driven intrinsic apoptotic pathway. These changes possibly aid in reducing renal failure. The melatonin-mediated changes indicate its potential protective use against renal morphological damage and dysfunction associated with obesity and metabolic disease.
Dose convolution filter: Incorporating spatial dose information into tissue response modeling
Huang Yimei; Joiner, Michael; Zhao Bo; Liao Yixiang; Burmeister, Jay
2010-03-15
Purpose: A model is introduced to integrate biological factors such as cell migration and bystander effects into physical dose distributions, and to incorporate spatial dose information in plan analysis and optimization. Methods: The model consists of a dose convolution filter (DCF) with a single parameter σ. Tissue response is calculated by an existing NTCP model with the DCF-applied dose distribution as input. The authors determined σ of rat spinal cord from published data. The authors also simulated the GRID technique, in which an open field is collimated into many pencil beams. Results: After applying the DCF, the NTCP model successfully fits the rat spinal cord data with a predicted value of σ = 2.6 ± 0.5 mm, consistent with the 2 mm migration distances of remyelinating cells. Moreover, it enables the appropriate prediction of a high relative seriality for spinal cord. The model also predicts the sparing of normal tissues by the GRID technique when the size of each pencil beam becomes comparable to σ. Conclusions: The DCF model incorporates spatial dose information and offers an improved way to estimate tissue response from complex radiotherapy dose distributions. It does not alter the prediction of tissue response in large homogeneous fields, but successfully predicts increased tissue tolerance in small or highly nonuniform fields.
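The filter itself is simple to sketch. The abstract does not state the kernel shape, so a unit-area Gaussian of width σ is assumed here as the natural single-parameter choice for a 1-D dose profile; only σ = 2.6 mm is taken from the record above.

```python
import numpy as np

def dose_convolution_filter(dose, sigma_mm, voxel_mm=1.0):
    """Convolve a 1-D physical dose profile with a unit-area Gaussian of width
    sigma before feeding it to an NTCP model. Edge padding keeps large
    homogeneous fields unchanged, matching the model's behavior there."""
    s = sigma_mm / voxel_mm
    radius = int(np.ceil(4 * s))                   # truncate the kernel at 4 sigma
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * s**2))
    g /= g.sum()                                   # unit area
    padded = np.pad(dose, radius, mode="edge")
    return np.convolve(padded, g, mode="valid")
```

Applied to a GRID-like comb of pencil beams, the filter lowers the dose peaks once the beam size becomes comparable to σ, which is the mechanism behind the predicted normal-tissue sparing.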
Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes
Houshmand, Monireh; Hosseini-Khayat, Saied
2011-02-15
Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a "pearl-necklace" encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
Attosecond probing of state-resolved ionization and superpositions of atoms and molecules
NASA Astrophysics Data System (ADS)
Leone, Stephen
2016-05-01
Isolated attosecond pulses in the extreme ultraviolet are used to probe strong field ionization and to initiate electronic and vibrational superpositions in atoms and small molecules. Few-cycle 800 nm pulses produce strong-field ionization of Xe atoms, and the attosecond probe is used to measure the risetimes of the two spin orbit states of the ion on the 4d inner shell transitions to the 5p vacancies in the valence shell. Step-like features in the risetimes due to the subcycles of the 800 nm pulse are observed and compared with theory to elucidate the instantaneous and effective hole dynamics. Isolated attosecond pulses create massive superpositions of electronic states in Ar and nitrogen as well as vibrational superpositions among electronic states in nitrogen. An 800 nm pulse manipulates the superpositions, and specific subcycle interferences, level shifting, and quantum beats are imprinted onto the attosecond pulse as a function of time delay. Detailed outcomes are compared to theory for measurements of time-dynamic superpositions by attosecond transient absorption. Supported by DOE, NSF, ARO, AFOSR, and DARPA.
Schenke, C.; Minguzzi, A.; Hekking, F. W. J.
2011-11-15
We consider a strongly interacting quasi-one-dimensional Bose gas on a tight ring trap subjected to a localized barrier potential. We explore the possibility of forming a macroscopic superposition of a rotating and a nonrotating state under nonequilibrium conditions, achieved by a sudden quench of the barrier velocity. Using an exact solution for the dynamical evolution in the impenetrable-boson (Tonks-Girardeau) limit, we find an expression for the many-body wave function corresponding to a superposition state. The superposition is formed when the barrier velocity is tuned close to multiples of an integer or half-integer number of Coriolis flux quanta. As a consequence of the strong interactions, we find that (i) the state of the system can be mapped onto a macroscopic superposition of two Fermi spheres rather than two macroscopically occupied single-particle states as in a weakly interacting gas, and (ii) the barrier velocity should be larger than the sound velocity to better discriminate the two components of the superposition.
Fast Electron Correlation Methods for Molecular Clusters without Basis Set Superposition Errors
Kamiya, Muneaki; Hirata, So; Valiev, Marat
2008-02-19
Two critical extensions to our fast, accurate, and easy-to-implement binary or ternary interaction method for weakly interacting molecular clusters [Hirata et al. Mol. Phys. 103, 2255 (2005)] have been proposed, implemented, and applied to water hexamers, hydrogen fluoride chains and rings, and neutral and zwitterionic glycine–water clusters, with excellent results in an initial performance assessment. Our original method included up to two- or three-body Coulomb, exchange, and correlation energies exactly and higher-order Coulomb energies in the dipole–dipole approximation. In this work, the dipole moments are replaced by atom-centered point charges determined so that they reproduce the electrostatic potentials of the cluster subunits as closely as possible and also self-consistently with one another in the cluster environment. They have been shown to lead to dramatic improvement in the description of short-range electrostatic potentials not only of large, charge-separated subunits like zwitterionic glycine but also of small subunits. Furthermore, basis set superposition errors (BSSE), known to plague direct evaluation of weak interactions, have been eliminated by combining the Valiron–Mayer function counterpoise (VMFC) correction with our binary or ternary interaction method in an economical fashion (quadratic scaling n² with respect to the number of subunits n when n is small and linear scaling when n is large). A new variant of VMFC has also been proposed in which three-body and all higher-order Coulomb effects on BSSE are estimated approximately. The BSSE-corrected ternary interaction method with atom-centered point charges reproduces the VMFC-corrected results of conventional electron correlation calculations within 0.1 kcal/mol. The proposed method is significantly more accurate and also more efficient than conventional correlation methods uncorrected for BSSE.
Generalization of susceptibility of RF systems through far-field pattern superposition
NASA Astrophysics Data System (ADS)
Verdin, B.; Debroux, P.
2015-05-01
The purpose of this paper is to perform an analysis of RF (Radio Frequency) communication systems in a large electromagnetic environment to identify its susceptibility to jamming systems. We propose a new method that incorporates the use of reciprocity and superposition of the far-field radiation pattern of the RF system and the far-field radiation pattern of the jammer system. By using this method we can find the susceptibility pattern of RF systems with respect to the elevation and azimuth angles. A scenario was modeled with HFSS (High Frequency Structural Simulator) where the radiation pattern of the jammer was simulated as a cylindrical horn antenna. The RF jamming entry point used was a half-wave dipole inside a cavity with apertures that approximates a land-mobile vehicle, the dipole approximates a leaky coax cable. Because of the limitation of the simulation method, electrically large electromagnetic environments cannot be quickly simulated using HFSS's finite element method (FEM). Therefore, the combination of the transmit antenna radiation pattern (horn) superimposed onto the receive antenna pattern (dipole) was performed in MATLAB. A 2D or 3D susceptibility pattern is obtained with respect to the azimuth and elevation angles. In addition, by incorporating the jamming equation into this algorithm, the received jamming power as a function of distance at the RF receiver Pr(Φr, θr) can be calculated. The received power depends on antenna properties, propagation factor and system losses. Test cases include: a cavity with four apertures, a cavity above an infinite ground plane, and a land-mobile vehicle approximation. By using the proposed algorithm a susceptibility analysis of RF systems in electromagnetic environments can be performed.
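The superposition step at the heart of the method, combining the jammer's transmit pattern with the victim's receive pattern over an (azimuth, elevation) grid, reduces to a Friis-style link budget per grid cell. The sketch below assumes illustrative transmit power, frequency, and range values (none taken from the paper) and omits system losses.

```python
import numpy as np

def received_jamming_dbm(tx_gain_db, rx_gain_db, pt_dbm=30.0,
                         freq_hz=2.4e9, r_m=100.0):
    """Received jamming power Pr over an (az, el) grid: the jammer's
    transmit-gain pattern plus the victim's receive-gain pattern (obtained
    by reciprocity) minus free-space path loss at range r_m."""
    lam = 3e8 / freq_hz
    fspl_db = 20.0 * np.log10(4.0 * np.pi * r_m / lam)   # free-space path loss
    return pt_dbm + tx_gain_db + rx_gain_db - fspl_db
```

Passing 2-D gain arrays Gt(az, el) and Gr(az, el) yields the susceptibility pattern Pr(az, el) directly, which is the MATLAB post-processing step the paper describes on top of the HFSS-simulated patterns.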
Vala, Jiri; Kosloff, Ronnie; Amitay, Zohar; Zhang Bo; Leone, Stephen R.
2002-12-01
The Deutsch-Jozsa algorithm is experimentally demonstrated for three-qubit functions using pure coherent superpositions of Li{sub 2} rovibrational eigenstates. The function's character, either constant or balanced, is evaluated by first imprinting the function, using a phase-shaped femtosecond pulse, on a coherent superposition of the molecular states, and then projecting the superposition onto an ionic final state, using a second femtosecond pulse at a specific time delay.
A Particle Multi-Target Tracker for Superpositional Measurements Using Labeled Random Finite Sets
NASA Astrophysics Data System (ADS)
Papi, Francesco; Kim, Du Yong
2015-08-01
In this paper we present a general solution for multi-target tracking with superpositional measurements. Measurements that are functions of the sum of the contributions of the targets present in the surveillance area are called superpositional measurements. We base our modelling on Labeled Random Finite Set (RFS) in order to jointly estimate the number of targets and their trajectories. This modelling leads to a labeled version of Mahler's multi-target Bayes filter. However, a straightforward implementation of this tracker using Sequential Monte Carlo (SMC) methods is not feasible due to the difficulties of sampling in high dimensional spaces. We propose an efficient multi-target sampling strategy based on Superpositional Approximate CPHD (SA-CPHD) filter and the recently introduced Labeled Multi-Bernoulli (LMB) and Vo-Vo densities. The applicability of the proposed approach is verified through simulation in a challenging radar application with closely spaced targets and low signal-to-noise ratio.
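A toy sketch of what makes a measurement superpositional, and of the per-particle likelihood an SMC implementation would evaluate. The contribution function h and the Gaussian noise model below are hypothetical stand-ins, not the radar model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def contributions(states):
    """Hypothetical per-target contribution h(x); in a radar application this
    would model each target's contribution to the received power profile."""
    return 1.0 / (1.0 + np.abs(states))

def superpositional_measurement(states, noise_std):
    """One sensor reading = sum of all targets' contributions + noise."""
    return contributions(states).sum() + rng.normal(0.0, noise_std)

def log_likelihood(z, states, noise_std):
    """Gaussian log-likelihood of a superpositional measurement: the quantity
    an SMC tracker evaluates for each multi-target particle."""
    r = z - contributions(states).sum()
    return -0.5 * (r / noise_std) ** 2 - np.log(noise_std * np.sqrt(2.0 * np.pi))

true_states = np.array([0.0, 2.0, 5.0])
z = superpositional_measurement(true_states, noise_std=0.05)
```

Because only the sum is observed, particles must be sampled jointly over all targets, which is what makes naive SMC infeasible in high dimensions.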
Sze, Michelle Wynne C; Sugon, Quirino M; McNamara, Daniel J
2010-11-01
In this paper, we use Clifford (geometric) algebra Cl(3,0) to verify whether electromagnetic energy-momentum density is still conserved for the oblique superposition of two elliptically polarized plane waves with the same frequency. We show that energy-momentum conservation is valid at any time only for the superposition of two counter-propagating elliptically polarized plane waves. We also show that the time-averaged energy-momentum of the superposition of two circularly polarized waves with opposite handedness is conserved regardless of the propagation directions of the waves. Finally, we show that the resulting momentum density of the superposed waves generally has a vector component perpendicular to the momentum densities of the individual waves.
Towards quantum superposition of a levitated nanodiamond with a NV center
NASA Astrophysics Data System (ADS)
Li, Tongcang
2015-05-01
Creating large Schrödinger's cat states with massive objects is one of the most challenging goals in quantum mechanics. We have previously achieved an important step of this goal by cooling the center-of-mass motion of a levitated microsphere from room temperature to millikelvin temperatures with feedback cooling. To generate spatial quantum superposition states with an optical cavity, however, requires a very strong quadratic coupling that is difficult to achieve. We proposed to optically trap a nanodiamond with a nitrogen-vacancy (NV) center in vacuum, and generate large spatial superposition states using the NV spin-optomechanical coupling in a strong magnetic gradient field. The large spatial superposition states can be used to study objective collapse theories of quantum mechanics. We have optically trapped nanodiamonds in air and are working towards this goal.
Text-Attentional Convolutional Neural Network for Scene Text Detection.
He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian
2016-06-01
Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results. PMID:27093723
A new model of the distal convoluted tubule.
Ko, Benjamin; Mistry, Abinash C; Hanson, Lauren; Mallick, Rickta; Cooke, Leslie L; Hack, Bradley K; Cunningham, Patrick; Hoover, Robert S
2012-09-01
The Na(+)-Cl(-) cotransporter (NCC) in the distal convoluted tubule (DCT) of the kidney is a key determinant of Na(+) balance. Disturbances in NCC function are characterized by disordered volume and blood pressure regulation. However, many details concerning the mechanisms of NCC regulation remain controversial or undefined. This is partially due to the lack of a mammalian cell model of the DCT that is amenable to functional assessment of NCC activity. Previously reported investigations of NCC regulation in mammalian cells have either not attempted measurements of NCC function or have required perturbation of the critical with-no-lysine kinase (WNK)/STE20/SPS1-related proline/alanine-rich kinase (SPAK) regulatory pathway before functional assessment. Here, we present a new mammalian model of the DCT, the mouse DCT15 (mDCT15) cell line. These cells display native NCC function as measured by thiazide-sensitive, Cl(-)-dependent (22)Na(+) uptake and allow for the separate assessment of NCC surface expression and activity. Knockdown by short interfering RNA confirmed that this function was dependent on NCC protein. Similar to the mammalian DCT, these cells express many of the known regulators of NCC and display significant baseline activity and dimerization of NCC. As described in previous models, NCC activity is inhibited by appropriate concentrations of thiazides, and phorbol esters strongly suppress function. Importantly, knockdown of WNK4 by small hairpin RNA releases its inhibition of NCC. We feel that this new model represents a critical tool for the study of NCC physiology. The work that can be accomplished in such a system represents a significant step forward toward unraveling the complex regulation of NCC.
A staggered-grid convolutional differentiator for elastic wave modelling
NASA Astrophysics Data System (ADS)
Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun
2015-11-01
The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first-derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of the window function influences the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for CDs of different orders by minimizing the spectral error of the derivative, and compare them with the usual Hanning window for tapering the CD operators. The optimal Gaussian window is found to be similar to the Hanning window for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method at different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, at lower computational cost. For example, an 8th-order staggered-grid CD operator can achieve the same accuracy as a 16th-order staggered-grid FD algorithm with half the computational resources and time. Numerical examples from a homogeneous model and a crustal waveguide model illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagation.
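A minimal sketch of a tapered staggered-grid CD operator, using a Gaussian taper as a stand-in for the optimized windows studied in the paper: the untruncated coefficients follow from the inverse Fourier transform of the band-limited first-derivative spectrum, and the taper suppresses the ringing caused by truncation.

```python
import numpy as np

def staggered_cd_coeffs(half_len, sigma=2.0):
    """Tapered staggered-grid CD coefficients. The untruncated coefficients
    follow from the inverse Fourier transform of the band-limited
    first-derivative spectrum ik, sampled at half-integer offsets:
    a_m = (-1)**(m+1) / (pi*(m-0.5)**2). A Gaussian taper (a stand-in for
    the optimized windows in the paper) suppresses truncation ringing."""
    m = np.arange(1, half_len + 1)
    ideal = (-1.0) ** (m + 1) / (np.pi * (m - 0.5) ** 2)
    taper = np.exp(-0.5 * ((m - 0.5) / sigma) ** 2)
    return ideal * taper

def staggered_derivative(f, h, coeffs):
    """First derivative evaluated on the staggered grid (sample midpoints)."""
    n = len(f)
    idx = np.arange(n - 1)
    df = np.zeros(n - 1)
    for m, a in enumerate(coeffs, start=1):
        right = f[np.clip(idx + m, 0, n - 1)]     # edge samples replicated
        left = f[np.clip(idx - m + 1, 0, n - 1)]
        df += a * (right - left)
    return df / h

h = 0.05
x = np.arange(0.0, 2.0 * np.pi, h)
xm = x[:-1] + h / 2.0                             # midpoint locations
df = staggered_derivative(np.sin(x), h, staggered_cd_coeffs(8))
err = np.max(np.abs(df[20:-20] - np.cos(xm[20:-20])))   # interior error
```

On this smooth, low-wavenumber test signal the 8-coefficient tapered operator recovers the derivative to sub-0.1% accuracy in the interior; the untapered truncation is far less accurate at the same length.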
Deep convolutional networks for pancreas segmentation in CT imaging
NASA Astrophysics Data System (ADS)
Roth, Holger R.; Farag, Amal; Lu, Le; Turkbey, Evrim B.; Summers, Ronald M.
2015-03-01
Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving high accuracies when compared to state-of-the-art segmentation of organs like the liver, heart or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for "deep learning" methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that the classification features used are trained directly from the imaging data. We present a fully automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated, using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger than 0.5 are retained. These retained superpixels serve as a highly sensitive initial input, covering the pancreas and its surroundings, to a ConvNet that samples a bounding box around each superpixel at different scales (and random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not. We evaluate our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve an average maximum Dice score of 68% +/- 10% (range, 43%-80%) in testing. This shows promise for accurate pancreas segmentation using a deep learning approach, and compares favorably to state-of-the-art methods.
Text-Attentional Convolutional Neural Network for Scene Text Detection
NASA Astrophysics Data System (ADS)
He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian
2016-06-01
Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature computed globally from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this work, we present a new system for scene text detection by proposing a novel Text-Attentional Convolutional Neural Network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called Contrast-Enhancement Maximally Stable Extremal Regions (CE-MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 dataset, with an F-measure of 0.82, improving the state-of-the-art results substantially.
A convolutional neural network approach for objective video quality assessment.
Le Callet, Patrick; Viard-Gaudin, Christian; Barba, Dominique
2006-09-01
This paper describes an application of neural networks to an objective measurement method designed to automatically assess the perceived quality of digital videos. This challenging issue aims to emulate human judgment and to replace very complex and time-consuming subjective quality assessment. Several metrics have been proposed in the literature to tackle this issue. They are based on a general framework that combines different stages, each of them addressing complex problems. The ambition of this paper is not to present a globally perfect quality metric but rather to focus on an original way to use neural networks in such a framework, in the context of a reduced-reference (RR) quality metric. In particular, we point out the interest of such a tool for combining features and pooling them in order to compute quality scores. The proposed approach solves some problems inherent to objective metrics that should predict the subjective quality score obtained using the single stimulus continuous quality evaluation (SSCQE) method. The latter has been adopted by the Video Quality Experts Group (VQEG) in its recently finalized reduced-reference and no-reference (RRNR-TV) test plan. The originality of this approach, compared to previous attempts to use neural networks for quality assessment, lies in the use of a convolutional neural network (CNN) that allows a continuous-time scoring of the video. Objective features are extracted on a frame-by-frame basis on both the reference and the distorted sequences; they are derived from a perceptual-based representation and integrated along the temporal axis using a time-delay neural network (TDNN). Experiments conducted on different MPEG-2 videos, with bit rates ranging from 2 to 6 Mb/s, show the effectiveness of the proposed approach in obtaining a plausible model of temporal pooling from the human vision system (HVS) point of view. More specifically, a linear correlation criterion between objective and subjective scoring of up to 0.92 has been obtained on
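The feature-combination and temporal-pooling idea can be caricatured in a few lines. The weights and window below are hypothetical placeholders, not the trained TDNN of the paper:

```python
import numpy as np

def tdnn_pool(frame_features, w_feat, w_time):
    """Minimal caricature of the pooling stage: per-frame objective features
    are combined into a per-frame score, then a sliding window over the time
    axis (the 'delay' line) produces a continuous quality score, emulating
    SSCQE-style temporal integration. All weights here are hypothetical."""
    per_frame = frame_features @ w_feat                    # feature combination
    return np.convolve(per_frame, w_time, mode="valid")    # temporal pooling

rng = np.random.default_rng(3)
feats = rng.normal(size=(100, 4))         # 100 frames, 4 objective features
w_feat = np.array([0.4, 0.3, 0.2, 0.1])   # hypothetical feature weights
w_time = np.ones(9) / 9.0                 # 9-frame temporal window
scores = tdnn_pool(feats, w_feat, w_time)
```

In the actual system both the feature combination and the temporal weights are learned rather than fixed.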
Single-trial EEG RSVP classification using convolutional neural networks
NASA Astrophysics Data System (ADS)
Shamwell, Jared; Lee, Hyungtae; Kwon, Heesung; Marathe, Amar R.; Lawhern, Vernon; Nothwang, William
2016-05-01
Traditionally, Brain-Computer Interfaces (BCI) have been explored as a means to return function to paralyzed or otherwise debilitated individuals. An emerging use for BCIs is in human-autonomy sensor fusion where physiological data from healthy subjects is combined with machine-generated information to enhance the capabilities of artificial systems. While human-autonomy fusion of physiological data and computer vision have been shown to improve classification during visual search tasks, to date these approaches have relied on separately trained classification models for each modality. We aim to improve human-autonomy classification performance by developing a single framework that builds codependent models of human electroencephalograph (EEG) and image data to generate fused target estimates. As a first step, we developed a novel convolutional neural network (CNN) architecture and applied it to EEG recordings of subjects classifying target and non-target image presentations during a rapid serial visual presentation (RSVP) image triage task. The low signal-to-noise ratio (SNR) of EEG inherently limits the accuracy of single-trial classification and when combined with the high dimensionality of EEG recordings, extremely large training sets are needed to prevent overfitting and achieve accurate classification from raw EEG data. This paper explores a new deep CNN architecture for generalized multi-class, single-trial EEG classification across subjects. We compare classification performance from the generalized CNN architecture trained across all subjects to the individualized XDAWN, HDCA, and CSP neural classifiers which are trained and tested on single subjects. Preliminary results show that our CNN meets and slightly exceeds the performance of the other classifiers despite being trained across subjects.
NASA Astrophysics Data System (ADS)
Daoud, M.; Ahl Laamara, R.
2012-07-01
We give explicit expressions for the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of the geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl-Heisenberg, SU(2), and SU(1,1) coherent states which interpolate between Werner and Greenberger-Horne-Zeilinger states.
Jeong, H.; Lund, A.P.; Ralph, T.C.
2005-07-15
We develop an all-optical scheme to generate superpositions of macroscopically distinguishable coherent states in traveling optical fields. It nondeterministically distills coherent-state superpositions (CSS's) with large amplitudes out of CSS's with small amplitudes using inefficient photon detection. The small CSS's required to produce CSS's with larger amplitudes are extremely well approximated by squeezed single photons. We discuss some remarkable features of this scheme: it effectively purifies mixed initial states emitted from inefficient single-photon sources and boosts negativity of Wigner functions of quantum states.
Jack, B.; Leach, J.; Franke-Arnold, S.; Ireland, D. G.; Padgett, M. J.; Yao, A. M.; Barnett, S. M.; Romero, J.
2010-04-15
We use spatial light modulators (SLMs) to measure correlations between arbitrary superpositions of orbital angular momentum (OAM) states generated by spontaneous parametric down-conversion. Our technique allows us to fully access a two-dimensional OAM subspace described by a Bloch sphere, within the higher-dimensional OAM Hilbert space. We quantify the entanglement through violations of a Bell-type inequality for pairs of modal superpositions that lie on equatorial, polar, and arbitrary great circles of the Bloch sphere. Our work shows that SLMs can be used to measure arbitrary spatial states with a fidelity sufficient for appropriate quantum information processing systems.
Mesoscopic superposition and sub-Planck-scale structure in molecular wave packets
Ghosh, Suranjana; Banerji, J.; Panigrahi, P. K.; Chiruvelli, Aravind
2006-01-15
We demonstrate the possibility of realizing sub-Planck-scale structures in the mesoscopic superposition of molecular wave packets involving vibrational levels. The time evolution of the wave packet, taken here as the SU(2) coherent state of the Morse potential describing hydrogen iodide molecules, produces macroscopic-quantum-superposition-like states, responsible for the above phenomenon. We investigate the phase-space dynamics of the coherent state through the Wigner function approach and identify the interference phenomena behind the sub-Planck-scale structures. The optimal parameter ranges are specified for observing these features.
Computational superposition compound eye imaging for extended depth-of-field and field-of-view.
Nakamura, Tomoya; Horisaki, Ryoichi; Tanida, Jun
2012-12-01
This paper describes a superposition compound eye imaging system for extending the depth-of-field (DOF) and the field-of-view (FOV) using a spherical array of erect imaging optics and deconvolution processing. This imaging system had a three-dimensionally space-invariant point spread function generated by the superposition optics. A sharp image with a deep DOF and a wide FOV could be reconstructed by deconvolution processing with a single filter from a single captured image. The properties of the proposed system were confirmed by ray-trace simulations.
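Because the PSF is space-invariant, a single frequency-domain filter suffices for the whole image. A minimal Wiener-deconvolution sketch of that step follows; the Gaussian PSF and the noise-to-signal ratio are illustrative assumptions, not the system's measured PSF.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-4):
    """Single-filter Wiener deconvolution: because the superposition optics
    give a space-invariant PSF, one frequency-domain filter restores the
    whole image. nsr is an assumed noise-to-signal ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Toy scene: a point source blurred by an assumed Gaussian PSF, then restored.
n = 64
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2.0 * 2.0 ** 2))
psf /= psf.sum()
scene = np.zeros((n, n))
scene[20, 40] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

The restored image sharpens the point back toward its original location; the same single filter applies everywhere in the field of view.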
Measuring the band structures of periodic beams using the wave superposition method
NASA Astrophysics Data System (ADS)
Junyi, L.; Ruffini, V.; Balint, D.
2016-11-01
Phononic crystals and elastic metamaterials are artificially engineered periodic structures that have several interesting properties, such as negative effective stiffness in certain frequency ranges. An interesting property of phononic crystals and elastic metamaterials is the presence of band gaps, which are bands of frequencies where elastic waves cannot propagate. The presence of band gaps gives this class of materials the potential to be used as vibration isolators. In many studies, the band structures were used to evaluate the band gaps. The presence of band gaps in a finite structure is commonly validated by measuring the frequency response, as there are no direct methods of measuring the band structures. In this study, an experiment was conducted to determine the band structure of one-dimensional phononic crystals with two wave modes, such as a bi-material beam, using the frequency response at only 6 points, to validate the wave superposition method (WSM) introduced in a previous study. A bi-material beam and an aluminium beam with varying geometry were studied. The experiment was performed by hanging the beams freely, exciting one end of the beams, and measuring the acceleration at consecutive unit cells. The measured transfer functions of the beams agree with the analytical solutions, with minor discrepancies. The band structure was then determined using WSM, and the band structure of one set of the waves was found to agree well with the analytical solutions. The measurements taken for the other set of waves, which are the evanescent waves in the bi-material beams, were inaccurate and noisy. The transfer functions at additional points of one of the beams were calculated from the measured band structure using WSM. The calculated transfer functions agree with the measured results except at the frequencies where the band structure was inaccurate. Lastly, a study of the potential sources of errors was also conducted using finite element modelling and the errors in
Convolution effect on TCR log response curve and the correction method for it
NASA Astrophysics Data System (ADS)
Chen, Q.; Liu, L. J.; Gao, J.
2016-09-01
Through-casing resistivity (TCR) logging has been successfully used in production wells for the dynamic monitoring of oil pools and the distribution of residual oil, but its vertical resolution has limited its efficiency in the identification of thin beds. The vertical resolution is limited by the distortion of the vertical response of TCR logging, which was studied in this work. It was found that the vertical response curve of TCR logging is the convolution of the true formation resistivity with the convolution function of the TCR logging tool. Due to the effect of convolution, the measurement error at thin beds can reach 30% or more, so the information of a thin bed is very likely to be masked. The convolution function of the TCR logging tool was obtained in both continuous and discrete forms in this work. Through a modified Lyle-Kalman deconvolution method, the true formation resistivity can be optimally estimated, so this inverse algorithm can correct the error caused by the convolution effect and thus improve the vertical resolution of the TCR logging tool for the identification of thin beds.
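The convolution effect can be reproduced with a toy forward model (the Gaussian tool response and the resistivity values are hypothetical): a thin resistive bed is smeared by the tool's convolution function and its apparent resistivity drops markedly.

```python
import numpy as np

# Toy forward model of the convolution effect: a blocky resistivity profile
# with one thin resistive bed (all values hypothetical) is smeared by an
# assumed Gaussian tool convolution function.
true_r = np.full(200, 10.0)          # background resistivity, ohm-m
true_r[95:105] = 50.0                # thin resistive bed

g = np.exp(-0.5 * (np.arange(-15, 16) / 6.0) ** 2)
g /= g.sum()                         # normalized tool convolution function

logged = np.convolve(true_r, g, mode="same")   # what the tool records
peak_error = (true_r[100] - logged[100]) / true_r[100]
```

With these illustrative widths the thin bed's apparent resistivity is underestimated by roughly 30%, while thick background intervals are recovered correctly; a deconvolution step inverts this smearing.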
Using musical intervals to demonstrate superposition of waves and Fourier analysis
NASA Astrophysics Data System (ADS)
LoPresto, Michael C.
2013-09-01
What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.
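In software, the same demonstration reduces to summing sinusoids and taking an FFT. The 256 Hz and 384 Hz tones below are idealized stand-ins for the actual tuning forks, chosen so a perfect fifth (3:2 ratio) lands on exact FFT bins.

```python
import numpy as np

# Superpose two idealized fork tones a perfect fifth apart (3:2 ratio) and
# recover both from the Fourier spectrum, as the oscilloscope software would.
fs = 8192                                     # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)               # one second of signal
interval = np.sin(2 * np.pi * 256 * t) + np.sin(2 * np.pi * 384 * t)

spectrum = np.abs(np.fft.rfft(interval))
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]      # two strongest spectral lines
```

The superposed waveform shows the beating characteristic of the interval, and the two spectral peaks recover the individual fork frequencies.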
Using Musical Intervals to Demonstrate Superposition of Waves and Fourier Analysis
ERIC Educational Resources Information Center
LoPresto, Michael C.
2013-01-01
What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.
Application of time-temperature-stress superposition on creep of wood-plastic composites
NASA Astrophysics Data System (ADS)
Chang, Feng-Cheng; Lam, Frank; Kadla, John F.
2013-08-01
The time-temperature-stress superposition principle (TTSSP) has been widely applied in studies of the viscoelastic properties of materials. It involves shifting curves measured at various conditions to construct master curves. To extend the application of this principle, a temperature-stress hybrid shift factor and a modified Williams-Landel-Ferry (WLF) equation that incorporates stress and temperature variables for shift-factor fitting were studied. A wood-plastic composite (WPC) was selected as the test subject for a series of short-term creep tests. The results indicate that the WPC was a rheologically simple material: merely a horizontal shift was needed for time-temperature superposition, whereas vertical shifting was needed for time-stress superposition. The shift factor was independent of the stress for horizontal shifts in time-temperature superposition. In addition, the temperature- and stress-shift factors used to construct master curves were well fitted with the WLF equation, and the parameters of the modified WLF equation were successfully calibrated. The application of this method and equation can be extended to curve shifting that involves the effects of temperature and stress simultaneously.
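A sketch of the horizontal-shift step using the classic WLF equation with its "universal" constants; the modified, stress-dependent form studied in the paper would replace C1 and C2 with calibrated functions of stress.

```python
import numpy as np

def wlf_shift(T, T_ref, C1=17.44, C2=51.6):
    """Horizontal time-temperature shift factor from the WLF equation,
    log10(a_T) = -C1*(T - T_ref) / (C2 + T - T_ref), using the 'universal'
    constants. A modified, stress-dependent WLF form would make C1 and C2
    functions of stress as well."""
    return 10.0 ** (-C1 * (T - T_ref) / (C2 + (T - T_ref)))

# Shifting a short-term creep curve measured at T onto the master curve at
# T_ref rescales its time axis: t_reduced = t / a_T.
t = np.logspace(0, 3, 50)            # seconds, short-term test window
a_T = wlf_shift(T=60.0, T_ref=40.0)  # a_T < 1 above the reference temperature
t_reduced = t / a_T                  # data maps to much longer reduced times
```

This is how a short-term test at elevated temperature predicts long-term creep at the reference condition.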
ERIC Educational Resources Information Center
Sengoren, Serap Kaya; Tanel, Rabia; Kavcar, Nevzat
2006-01-01
The superposition principle is used to explain many phenomena in physics. Incomplete knowledge about this topic at a basic level leads to physics students having problems in the future. As long as prospective physics teachers have difficulties in the subject, it is inevitable that high school students will have the same difficulties. The aim of…
On sufficient statistics of least-squares superposition of vector sets.
Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M
2015-06-01
The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of superpositions (and sufficient statistics) of vector sets that are composed from constituent vector sets under addition or deletion operations, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
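A sketch of the idea using standard Kabsch (SVD-based) superposition; the statistic names below are ours, not the paper's. The six additive quantities determine the optimal superposition RMSD, so the statistics of a union of fragments are obtained by plain addition rather than by revisiting the points.

```python
import numpy as np

def suffstats(A, B):
    """Additive sufficient statistics for least-squares superposition of the
    corresponding point sets A and B (rows are points)."""
    return dict(n=len(A), sa=A.sum(0), sb=B.sum(0),
                saa=(A * A).sum(), sbb=(B * B).sum(), sab=A.T @ B)

def combine(s, t):
    """Statistics of the union of two sets: plain component-wise addition."""
    return dict(n=s["n"] + t["n"], sa=s["sa"] + t["sa"], sb=s["sb"] + t["sb"],
                saa=s["saa"] + t["saa"], sbb=s["sbb"] + t["sbb"],
                sab=s["sab"] + t["sab"])

def rmsd_from_stats(s):
    """Optimal (Kabsch) superposition RMSD computed from the statistics alone,
    in constant time regardless of how many points they summarize."""
    n = s["n"]
    ca, cb = s["sa"] / n, s["sb"] / n
    saa = s["saa"] - n * (ca @ ca)            # centred second moments
    sbb = s["sbb"] - n * (cb @ cb)
    cov = s["sab"] - n * np.outer(ca, cb)     # centred cross-covariance
    U, sv, Vt = np.linalg.svd(cov)
    sv[-1] *= np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    return np.sqrt(max(saa + sbb - 2.0 * sv.sum(), 0.0) / n)
```

Superposing a grown fragment pair then costs a 3x3 SVD, independent of fragment length.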
Chaos and Complexities Theories. Superposition and Standardized Testing: Are We Coming or Going?
ERIC Educational Resources Information Center
Erwin, Susan
2005-01-01
The purpose of this paper is to explore the possibility of using the principle of "superposition of states" (commonly illustrated by Schrodinger's Cat experiment) to understand the process of using standardized testing to measure a student's learning. Comparisons from literature, neuroscience, and Schema Theory will be used to expound upon the…
Li, Haiyan; He, Yijun; Wang, Wenguang
2009-01-01
The convolution between co-polarization amplitude-only data is studied to improve ship detection performance. The different statistical behaviors of ships and the surrounding ocean are characterized by a two-dimensional convolution function (2D-CF) between different polarization channels. The convolution value of the ocean decreases relative to the initial data, while that of ships increases, so the contrast of ships to ocean is increased. The opposite variation trends of ocean and ships can distinguish high-intensity ocean clutter from ships' signatures, and the new criterion can generally avoid the mistaken detections of a constant false alarm rate detector. Our new ship detector is compared with other polarimetric approaches, and the results confirm the robustness of the proposed method.
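A toy illustration of the cross-channel idea (the clutter statistics and ship amplitude are hypothetical, and the windowed product below is only a crude stand-in for the paper's 2D-CF): speckle that is decorrelated between channels averages down, while a correlated ship return reinforces itself, raising the ship-to-ocean contrast.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dual-channel SAR scene (hypothetical parameters): Rayleigh ocean
# speckle, decorrelated between the HH and VV channels, plus one bright
# ship whose return is correlated across the channels.
n = 128
hh = rng.rayleigh(1.0, (n, n))
vv = rng.rayleigh(1.0, (n, n))
hh[60:64, 60:64] += 6.0
vv[60:64, 60:64] += 6.0

def cross_channel_cf(a, b, half=2):
    """Crude stand-in for the 2D-CF: the windowed sum of the cross-channel
    product over a (2*half+1)**2 neighbourhood. Decorrelated speckle averages
    down, while the correlated ship return reinforces itself."""
    out = np.zeros_like(a)
    prod = a * b
    for du in range(-half, half + 1):
        for dv in range(-half, half + 1):
            out += np.roll(np.roll(prod, du, axis=0), dv, axis=1)
    return out

cf = cross_channel_cf(hh, vv)
contrast_before = hh[61, 61] / hh.mean()
contrast_after = cf[61, 61] / cf.mean()
```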
Wu, Xuecheng; Wu, Yingchun; Yang, Jing; Wang, Zhihua; Zhou, Binwu; Gréhan, Gérard; Cen, Kefa
2013-05-20
Application of a modified convolution method to reconstruct digital inline holograms of particles illuminated by an elliptical Gaussian beam is investigated. Based on an analysis of the formation of the particle hologram using the Collins formula, the convolution method is modified to compensate for the astigmatism by adding two scaling factors. Both simulated and experimental holograms of transparent droplets and opaque particles are used to test the algorithm, and the reconstructed images are compared with those from FRFT reconstruction. Results show that the modified convolution method can accurately reconstruct the particle image. This method has the advantage that the reconstructed images at different depth positions have the same size and resolution as the hologram. This work shows that digital inline holography has great potential for particle diagnostics in curved containers.
Perez-Carrasco, Jose Antonio; Acha, Begona; Serrano, Carmen; Camunas-Mesa, Luis; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2010-04-01
Address-event representation (AER) is an emergent hardware technology which shows a high potential for providing in the near future a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level event-based frameless manner. As a result, vision processing is practically simultaneous to vision sensing, since there is no need to wait for sensing full frames. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolution chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates Manjunath's frame-based feature recognition software algorithm with AER hardware, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.
Punctured Parallel and Serial Concatenated Convolutional Codes for BPSK/QPSK Channels
NASA Technical Reports Server (NTRS)
Acikel, Omer Fatih
1999-01-01
As available bandwidth for communication applications becomes scarce, bandwidth-efficient modulation and coding schemes become ever more important. Since their discovery in 1993, turbo codes (parallel concatenated convolutional codes) have been the center of attention in the coding community because of their bit error rate performance near the Shannon limit. Serial concatenated convolutional codes have also been shown to be as powerful as turbo codes. In this dissertation, we introduce algorithms for designing bandwidth-efficient rate r = k/(k + 1), k = 2, 3,..., 16, parallel and rate 3/4, 7/8, and 15/16 serial concatenated convolutional codes via puncturing for BPSK/QPSK (Binary Phase Shift Keying/Quadrature Phase Shift Keying) channels. Both parallel and serial concatenated convolutional codes initially have a steep bit error rate versus signal-to-noise ratio slope (the so-called "cliff region"). However, this steep slope changes to a moderate slope with increasing signal-to-noise ratio, where the slope is characterized by the weight spectrum of the code. The region after the cliff region is called the "error rate floor", which dominates the behavior of these codes at moderate to high signal-to-noise ratios. Our goal is to design high-rate parallel and serial concatenated convolutional codes while minimizing the error rate floor effect. The design algorithm includes an interleaver enhancement procedure and finds the polynomial sets (only for parallel concatenated convolutional codes) and the puncturing schemes that achieve the lowest bit error rate performance around the floor for the code rates of interest.
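Puncturing itself is simple to illustrate. The sketch below turns a rate-1/2 convolutional encoder into a rate-3/4 code by deleting 2 of every 6 coded bits. The (7,5) octal generators and the period-3 puncturing pattern are arbitrary textbook choices, not the optimized patterns found by the dissertation's search algorithm.

```python
def conv_encode_r12(bits):
    """Rate-1/2 feedforward convolutional encoder, generators 111 and 101
    (7,5 octal), constraint length 3."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # generator 111
        out.append(b ^ s2)       # generator 101
        s1, s2 = b, s1
    return out

def puncture(coded, pattern=((1, 1), (0, 1), (1, 0))):
    """Keep only the coded bits flagged 1 in the period-3 pattern:
    3 info bits -> 6 coded bits -> 4 transmitted bits, i.e. rate 3/4."""
    kept = []
    for i in range(0, len(coded), 2):
        p = pattern[(i // 2) % 3]
        if p[0]:
            kept.append(coded[i])
        if p[1]:
            kept.append(coded[i + 1])
    return kept

info = [1, 0, 1, 1, 0, 1]
coded = conv_encode_r12(info)   # 12 coded bits at rate 1/2
punctured = puncture(coded)     # 8 bits survive -> rate 6/8 = 3/4
```

The decoder reinserts erasures at the punctured positions before Viterbi or iterative decoding; choosing which positions to puncture is exactly what drives the error-floor behavior the dissertation optimizes.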
Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m0 binary memory cells and k0 (k0 > m0) inputs, a state diagram of 2^k0 states was used for the transfer function bound. A reduced state diagram of (2^m0 + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.
Blind separation of convolutive sEMG mixtures based on independent vector analysis
NASA Astrophysics Data System (ADS)
Wang, Xiaomei; Guo, Yina; Tian, Wenyan
2015-12-01
An independent vector analysis (IVA) method based on a variable-step gradient algorithm is proposed in this paper. According to the physiological properties of sEMG, the IVA model is applied to the frequency-domain separation of convolutive sEMG mixtures to extract motor unit action potential information from sEMG signals. The decomposition capability of the proposed method is compared with that of independent component analysis (ICA), and experimental results show that the variable-step gradient IVA method outperforms ICA in blind separation of convolutive sEMG mixtures.
A convolutional learning system for object classification in 3-D Lidar data.
Prokhorov, Danil
2010-05-01
In this brief, a convolutional learning system for classification of segmented objects represented in 3-D as point clouds of laser reflections is proposed. Several novelties are discussed: (1) extension of the existing convolutional neural network (CNN) framework to direct processing of 3-D data in a multiview setting which may be helpful for rotation-invariant consideration, (2) improvement of CNN training effectiveness by employing a stochastic meta-descent (SMD) method, and (3) combination of unsupervised and supervised training for enhanced performance of CNN. CNN performance is illustrated on a two-class data set of objects in a segmented outdoor environment.
SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems
Xiao, K; Chen, D. Z; Hu, X. S; Zhou, B
2014-06-01
Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing pattern in dose deposition, which leads to several memory efficiency issues on GPU such as un-coalesced writing and atomic operations. We propose a new method to alleviate such issues on CPU-GPU heterogeneous systems, which achieves overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition accumulates dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer in GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on the CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Compared with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained a 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance of MCCS on CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. This research was supported in part by NSF under Grants CCF
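The buffer-then-scatter idea of steps (1) and (3) can be sketched in a few lines. This is a CPU-side analogue with assumed shapes and random sample data; in the actual pipeline the buffer is written on the GPU and streamed to the CPU over DMA.

```python
import numpy as np

# Volume and synthetic dose samples (illustrative assumptions)
shape = (4, 4, 4)
rng = np.random.default_rng(0)
n_samples = 1000
flat_idx = rng.integers(0, np.prod(shape), n_samples)  # voxel hit by each sample
dose_val = rng.random(n_samples)                       # dose deposited per sample

# Step (1) analogue: instead of atomically adding each sample into the
# volume (random writes), append (index, value) pairs to a flat buffer --
# on the GPU this is a coalesced, atomic-free write pattern.
buffer_idx, buffer_val = flat_idx, dose_val

# Step (3) analogue: one scatter-add pass builds the dose volume from the
# buffer; duplicate voxel indices accumulate correctly.
volume = np.zeros(np.prod(shape))
np.add.at(volume, buffer_idx, buffer_val)
volume = volume.reshape(shape)
```

Deferring the accumulation turns many scattered read-modify-write operations into one sequential pass, which is the memory-bandwidth win the abstract describes.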
Barraclough, Brendan; Li, Jonathan G; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-21
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
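The central trick, comparing calculated and measured profiles only after both have passed through the same detector response, can be sketched as follows. The tanh-edge profile and the Gaussian response width standing in for the chamber cavity are illustrative assumptions, not the study's beam model or the CC13's true response.

```python
import numpy as np

x = np.arange(-50.0, 50.0, 0.5)                      # off-axis position (mm)
real = 0.5 * (np.tanh((25 - np.abs(x)) / 2.0) + 1)   # sharp-penumbra "real" profile

# Toy detector response function: Gaussian with width ~ the chamber cavity
sigma = 3.0                                          # assumed width (mm)
taps = np.arange(-10.0, 10.5, 0.5)
k = np.exp(-0.5 * (taps / sigma) ** 2)
k /= k.sum()

# What the chamber reports: the real profile blurred by volume averaging
measured = np.convolve(real, k, mode="same")

# In the proposed workflow, TPS-calculated profiles would be convolved with
# the same kernel k before being compared to `measured`, so both sides are
# subject to identical volume averaging.
```

The blurred profile has a visibly shallower penumbra than the real one, which is exactly the effect the reoptimization compensates for.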
Solenoid magnetic fields calculated from superposed semi-infinite solenoids
NASA Technical Reports Server (NTRS)
Brown, G. V.; Flax, L.
1966-01-01
Calculation of a thick solenoid coil's magnetic field components is made by a superposition of the fields produced by four solenoids of semi-infinite length and zero inner radius. The field produced by such a semi-infinite solenoid depends on only two variables, the radial and axial field-point coordinates.
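A thin-shell analogue of this construction is easy to verify numerically: the on-axis field of a finite solenoid is the superposition (difference) of two semi-infinite solenoids. The sketch below uses this two-solenoid version with assumed coil parameters; the paper's four-solenoid form extends the same idea to thick coils with finite inner and outer radii.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def b_semi_axis(z, a, nI):
    """On-axis field of a thin semi-infinite solenoid of radius a and
    surface current density nI occupying z' >= 0; z is measured from its
    end (positive inside the winding)."""
    return 0.5 * MU0 * nI * (1 + z / np.hypot(z, a))

def b_finite_axis(z, a, nI, L):
    """Finite solenoid from -L/2 to +L/2, built by superposing two
    semi-infinite solenoids of opposite sign."""
    return b_semi_axis(z + L / 2, a, nI) - b_semi_axis(z - L / 2, a, nI)

# Deep inside a long solenoid the field approaches mu0 * n * I
nI, a, L = 1000.0, 0.02, 1.0   # A/m, m, m (illustrative values)
b_center = b_finite_axis(0.0, a, nI, L)
```

Because each semi-infinite field depends only on the radial and axial coordinates of the field point, the superposition needs no integration over the coil length.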
On basis set superposition error corrected stabilization energies for large n-body clusters.
Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael
2011-10-01
In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections. PMID:21992293
Conditional production of superpositions of coherent states with inefficient photon detection
Lund, A.P.; Jeong, H.; Ralph, T.C.; Kim, M.S.
2004-08-01
It is shown that a linear superposition of two macroscopically distinguishable optical coherent states can be generated using a single photon source and simple all-optical operations. Weak squeezing on a single photon, beam mixing with an auxiliary coherent state, and photon detection with imperfect threshold detectors are enough to generate a coherent state superposition in a free propagating optical field with a large coherent amplitude (α > 2) and high fidelity (F > 0.99). In contrast to all previous schemes to generate such a state, our scheme needs neither photon-number-resolving measurements nor Kerr-type nonlinear interactions. Furthermore, it is robust to detection inefficiency and exhibits some resilience to photon production inefficiency.
Robot Behavior Acquisition: Superposition and Composition of Behaviors Learned through Teleoperation
NASA Technical Reports Server (NTRS)
Peters, Richard Alan, II
2004-01-01
Superposition of a small set of behaviors, learned via teleoperation, can lead to robust completion of a simple articulated reach-and-grasp task. Results support the hypothesis that a set of learned behaviors can be combined to generate new behaviors of a similar type. This supports the hypothesis that a robot can learn to interact purposefully with its environment through a developmental acquisition of sensory-motor coordination. Teleoperation bootstraps the process by enabling the robot to observe its own sensory responses to actions that lead to specific outcomes. A reach-and-grasp task, learned by an articulated robot through a small number of teleoperated trials, can be performed autonomously with success in the face of significant variations in the environment and perturbations of the goal. Superpositioning was performed using the Verbs and Adverbs algorithm that was developed originally for the graphical animation of articulated characters. Work was performed on Robonaut at NASA-JSC.
Superposition and detection of two helical beams for optical orbital angular momentum communication
NASA Astrophysics Data System (ADS)
Liu, Yi-Dong; Gao, Chunqing; Gao, Mingwei; Qi, Xiaoqing; Weber, Horst
2008-07-01
A loop-like system with a Dove prism is used to generate a collinear superposition of two helical beams with different azimuthal quantum numbers. After helical beams distributed on a circle centered on the optical axis are generated with a binary amplitude grating, the diffracted field is separated into two polarized fields with the same distribution. Rotated in opposite directions by the Dove prism in the loop-like system and then recombined, the two fields generate a collinear superposition of two helical beams in a certain direction. The experiment shows consistency with the theoretical analysis. This method has potential applications in optical communication using the orbital angular momentum of laser beams (optical vortices).
Brain-wave representation of words by superposition of a few sine waves
Suppes, Patrick; Han, Bing
2000-01-01
Data from three previous experiments were analyzed to test the hypothesis that brain waves of spoken or written words can be represented by the superposition of a few sine waves. First, we averaged the data over trials and a set of subjects, and, in one case, over experimental conditions as well. Next we applied a Fourier transform to the averaged data and selected those frequencies with high energy, in no case more than nine in number. The superpositions of these selected sine waves were taken as prototypes. The averaged unfiltered data were the test samples. The prototypes were used to classify the test samples according to a least-squares criterion of fit. The results were seven of seven correct classifications for the first experiment using only three frequencies, six of eight for the second experiment using nine frequencies, and eight of eight for the third experiment using five frequencies. PMID:10890906
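The prototype-and-classify procedure can be sketched with synthetic signals standing in for the averaged brain-wave data. The frequencies, amplitudes, and noise level below are assumptions for illustration, not the experimental values.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(256)

def make_signal(freqs, amps):
    """Synthetic 'averaged brain wave' as a sum of sines."""
    return sum(a * np.sin(2 * np.pi * f * t / 256) for f, a in zip(freqs, amps))

def prototype(signal, k=3):
    """Superposition of the k highest-energy sine components: Fourier
    transform, keep the k largest-magnitude frequencies, invert."""
    spec = np.fft.rfft(signal)
    keep = np.argsort(np.abs(spec))[-k:]
    pruned = np.zeros_like(spec)
    pruned[keep] = spec[keep]
    return np.fft.irfft(pruned, n=len(signal))

word_a = make_signal([3, 7, 12], [1.0, 0.6, 0.3])
word_b = make_signal([5, 9, 20], [1.0, 0.6, 0.3])
protos = [prototype(word_a), prototype(word_b)]

# Classify a noisy test sample by least-squares fit to the prototypes
test = word_a + 0.2 * rng.standard_normal(len(t))
errors = [np.sum((test - p) ** 2) for p in protos]
label = int(np.argmin(errors))   # 0 -> word A
```

With only a few retained frequencies the prototype captures nearly all of the signal energy, which is why the least-squares criterion separates the classes cleanly.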
Performance of a two-state quantum engine improved by the superposition effect
NASA Astrophysics Data System (ADS)
Ou, CongJie; Huang, ZhiFu; Lin, BiHong; Chen, JinCan
2013-10-01
The performance of a two-state quantum engine under different conditions is analyzed. It is shown that the efficiency of the quantum engine can be enhanced by superposing the eigenstates at the beginning of the cycle. By employing the finite-time movement of the potential wall, the power output of the quantum engine as well as the efficiency at the maximum power output (EMP) can be obtained. A generalized potential is adopted to describe a class of two-level quantum engines in a unified way. The results obtained show clearly that the performances of these engines depend on the external potential, the geometric configuration of the quantum engines, and the superposition effect. Moreover, it is found that the superposition effect will enlarge the optimally operating region of quantum engines.
Optical threshold secret sharing scheme based on basic vector operations and coherence superposition
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen
2015-04-01
We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with a (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is first divided into n shadow images by use of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherence superposition of any two shadow images. Compared with published encryption techniques, which focus narrowly on information encryption, the proposed method realizes information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents power from being centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.
From constants of motion to superposition rules for Lie-Hamilton systems
NASA Astrophysics Data System (ADS)
Ballesteros, A.; Cariñena, J. F.; Herranz, F. J.; de Lucas, J.; Sardón, C.
2013-07-01
A Lie system is a non-autonomous system of first-order differential equations possessing a superposition rule, i.e. a map expressing its general solution in terms of a generic finite family of particular solutions and some constants. Lie-Hamilton systems form a subclass of Lie systems whose dynamics is governed by a curve in a finite-dimensional real Lie algebra of functions on a Poisson manifold. It is shown that Lie-Hamilton systems are naturally endowed with a Poisson coalgebra structure. This allows us to devise methods for deriving in an algebraic way their constants of motion and superposition rules. We illustrate our methods by studying Kummer-Schwarz equations, Riccati equations, Ermakov systems and Smorodinsky-Winternitz systems with time-dependent frequency.
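A classical example of such a superposition rule is the Riccati equation, whose general solution follows from any three particular solutions because the cross-ratio of four solutions is constant in time. This can be checked numerically; the coefficients and initial values below are arbitrary choices for illustration.

```python
import numpy as np

def f(t, y):
    """Riccati equation y' = a(t) + b(t) y + c(t) y^2 (illustrative coefficients)."""
    a, b, c = np.sin(t), 0.5, 0.1 * np.cos(t)
    return a + b * y + c * y * y

def rk4(y0, t0, t1, n=2000):
    """Classical fourth-order Runge-Kutta integration of y' = f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, float(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

def cross_ratio(y1, y2, y3, y4):
    return (y1 - y3) * (y2 - y4) / ((y1 - y4) * (y2 - y3))

# Four particular solutions from distinct initial conditions
ys0 = [0.0, 1.0, 2.0, -1.0]
cr0 = cross_ratio(*ys0)
ys1 = [rk4(y0, 0.0, 1.0) for y0 in ys0]
cr1 = cross_ratio(*ys1)   # constant along the flow, up to integration error
```

The constancy of the cross-ratio is precisely the superposition rule: fixing three solutions and the constant reproduces the fourth, i.e. the general solution.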
Superposition-model analysis of rare-earth doped BaY2F8
NASA Astrophysics Data System (ADS)
Magnani, N.; Amoretti, G.; Baraldi, A.; Capelletti, R.
The energy level schemes of four rare-earth dopants (Ce3+, Nd3+, Dy3+, and Er3+) in BaY2F8, as determined by optical absorption spectra, were fitted with a single-ion Hamiltonian and analysed within Newman's Superposition Model for the crystal field. A unified picture for the four dopants was obtained by assuming a distortion of the F- ligand cage around the RE site; within the framework of the Superposition Model, this distortion is found to have a markedly anisotropic behaviour for heavy rare earths, while it turns into an isotropic expansion of the nearest-neighbour polyhedron for light rare earths. It is also inferred that the substituting ion may occupy an off-center position with respect to the original Y3+ site in the crystal.
NASA Astrophysics Data System (ADS)
Poletto, F.; Bellezza, C.; Farina, B.
2014-02-01
We examine the multidimensional deconvolution (MDD) approach for virtual-reflector (VR) signal representation by cross-convolution. Assuming wavefield separation at the receivers, the VR signal can be synthesized by cross-convolution of inward and outward wavefields generated from a multiplicity of transient sources. Under suitable conditions, this virtual signal is representable as the multidimensional composition of (1) the outward wavefield from the redatumed virtual sources at the receivers and (2) the so-called point-spread function (PSF) for VRs. Multidimensional inversion of the PSF provides the solution to deblur the signals and recover the Green's function of either transmitted wavefields or reflectivity. This approach is similar to MDD by backward seismic interferometry of cross-correlation type. The forward approach by cross-convolution raises the issue of using suitable projections and representations by functions with convex trends in space and time. This work discusses the main differences in illumination and stability between the cross-convolution and cross-correlation approaches, providing, under appropriate coverage conditions, equivalent and robust inversion results.
The VLSI design of an error-trellis syndrome decoder for certain convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.
1986-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
The VLSI design of error-trellis syndrome decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.
1985-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
The multipoint de la Vallee-Poussin problem for a convolution operator
Napalkov, Valentin V; Nuyatov, Andrey A
2012-02-28
Conditions are discovered which ensure that the space of entire functions can be represented as the sum of an ideal in the space of entire functions and the kernel of a convolution operator. In this way conditions for the multipoint de la Vallee-Poussin problem to have a solution are found. Bibliography: 14 titles.
A generalized recursive convolution method for time-domain propagation in porous media.
Dragna, Didier; Pineau, Pierre; Blanc-Benon, Philippe
2015-08-01
An efficient numerical method, referred to as the auxiliary differential equation (ADE) method, is proposed to compute convolutions between relaxation functions and acoustic variables arising in sound propagation equations in porous media. For this purpose, the relaxation functions are approximated in the frequency domain by rational functions. The time variation of the convolution is thus governed by first-order differential equations which can be straightforwardly solved. The accuracy of the method is first investigated and compared to that of recursive convolution methods. It is shown that, while recursive convolution methods are first or second-order accurate in time, the ADE method does not introduce any additional error. The ADE method is then applied for outdoor sound propagation using the equations proposed by Wilson et al. in the ground [(2007). Appl. Acoust. 68, 173-200]. A first one-dimensional case is performed showing that only five poles are necessary to accurately approximate the relaxation functions for typical applications. Finally, the ADE method is used to compute sound propagation in a three-dimensional geometry over an absorbing ground. Results obtained with Wilson's equations are compared to those obtained with Zwikker and Kosten's equations and with an impedance surface for different flow resistivities.
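For a single one-pole (exponential) relaxation term, the ADE idea reduces to replacing the stored-history convolution with a first-order update, as in the sketch below. The time constant, step size, and test signal are illustrative assumptions; Wilson's relaxation functions require several such poles, each contributing one auxiliary equation.

```python
import numpy as np

tau, dt, n = 0.05, 1e-4, 5000
t = np.arange(n) * dt
x = np.sin(40 * t)                      # toy acoustic variable

# Reference: direct discrete convolution with the exponential kernel,
# which requires the full signal history at every step
kernel = np.exp(-t / tau)
y_direct = np.convolve(x, kernel)[:n] * dt

# ADE: y(t) = int e^{-(t-s)/tau} x(s) ds satisfies y' = -y/tau + x, so a
# one-step exponential-integrator update replaces the stored history
y_ade = np.zeros(n)
decay = np.exp(-dt / tau)
for i in range(1, n):
    y_ade[i] = decay * y_ade[i - 1] + tau * (1 - decay) * x[i]
```

The ODE form needs O(1) state per pole instead of O(n) history, which is the memory and cost advantage over recursive or direct convolution in a time-domain solver.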
Convolutional neural network based sensor fusion for forward looking ground penetrating radar
NASA Astrophysics Data System (ADS)
Sakaguchi, Rayn; Crosskey, Miles; Chen, David; Walenz, Brett; Morton, Kenneth
2016-05-01
Forward looking ground penetrating radar (FLGPR) is an alternative buried threat sensing technology designed to offer additional standoff compared to downward looking GPR systems. Due to additional flexibility in antenna configurations, FLGPR systems can accommodate multiple sensor modalities on the same platform that can provide complementary information. The different sensor modalities present challenges both in developing informative feature extraction methods and in fusing sensor information in order to obtain the best discrimination performance. This work uses convolutional neural networks to jointly learn features across two sensor modalities and fuse the information in order to distinguish between target and non-target regions. This joint optimization is possible by modifying the traditional image-based convolutional neural network configuration to extract data from multiple sources. The filters generated by this process create a learned feature extraction method that is optimized to provide the best discrimination performance when fused. This paper presents the results of applying convolutional neural networks and compares these results to fusion performed with a linear classifier. It also compares performance between convolutional neural network architectures to show the benefit of fusing the sensor information in different ways.
The venom apparatus in stenogastrine wasps: subcellular features of the convoluted gland.
Petrocelli, Iacopo; Turillazzi, Stefano; Delfino, Giovanni
2014-09-01
In the wasp venom apparatus, the convoluted gland is the tract of the thin secretory unit, i.e. filament, contained in the muscular reservoir. Previous transmission electron microscope investigation on Stenogastrinae disclosed that the free filaments consist of distal and proximal tracts, from/to the venom reservoir, characterized by class 3 and 2 gland patterns, respectively. This study aims to extend the ultrastructural analysis to the convoluted tract, in order to provide a thorough, subcellular representation of the venom gland in these Asian wasps. Our findings showed that the convoluted gland is a continuation of the proximal tract, with secretory cells provided with a peculiar apical invagination, the extracellular cavity, collecting their products. This compartment holds a simple end-apparatus lined by large and ramified microvilli that contribute to the processing of the secretory product. A comparison between previous and present findings reveals a noticeable regionalization of the stenogastrine venom filaments and suggests that the secretory product acquires its ultimate composition in the convoluted tract.
NASA Astrophysics Data System (ADS)
Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud
2015-10-01
Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for each of the electron, positron and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and 18F, 99mTc, 131I and 177Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, corresponding to the comparison of 80 absorbed dose (AD) points per phantom between Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. A still good but slightly worse result was found with SCC, since at least 97% of AD values verified the γ (5%, 5 mm) criterion, except for a value of 57% for 99mTc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computation complexity.
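The kernel scaling by radiological distance can be sketched as follows. The 1D ray, the density values, and the exponential toy kernel are assumptions for illustration; the actual EDKw are tabulated from Monte Carlo simulations and evaluated along collapsed-cone directions.

```python
import numpy as np

def radiological_depth(densities, step):
    """Cumulative density-scaled path length (g/cm^2 analogue) at each
    voxel along a ray: geometric steps weighted by local density."""
    return np.cumsum(densities) * step

def kernel_water(d):
    """Toy water energy-deposition kernel: exponential falloff with
    radiological depth (stand-in for a tabulated EDKw)."""
    return np.exp(-0.2 * d)

step = 0.5                                   # cm per voxel along the ray
water = np.ones(20)                          # homogeneous water ray
lung = np.ones(20); lung[5:15] = 0.25        # ray crossing a low-density insert

k_water = kernel_water(radiological_depth(water, step))
k_lung = kernel_water(radiological_depth(lung, step))
# Beyond the insert the lung ray has traversed less material, so the
# density-scaled kernel value there exceeds the water value at the same
# geometric depth -- the heterogeneity correction the scaling provides.
```

Before the insert the two rays see identical material, so the scaled kernels agree there exactly.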
Mrowczynski, Stanislaw
2006-04-15
In p-p collisions the average transverse momentum is known to be correlated with the multiplicity of produced particles. The correlation is shown to survive in a superposition model of nucleus-nucleus collisions. When properly parametrized, the correlation strength appears to be independent of the collision centrality--it is the same in p-p and central A-A collisions. However, the correlation is strongly suppressed by the centrality fluctuations.
Note: An explicit solution of the optimal superposition and Eckart frame problems.
Cioslowski, Jerzy
2016-07-14
Attention is called to an explicit solution of both the optimal superposition and Eckart frame problems that requires neither matrix diagonalization nor quaternion algebra. A simple change in one variable that enters the expression for the solution matrix T allows for selection of T representing either a proper rotation or a more general orthogonal transformation. The issues concerning the use of these alternative selections and the equivalence of the two problems are addressed. PMID:27421427
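For contrast with the diagonalization-free solution the note advocates, the optimal-superposition rotation is conventionally obtained from an SVD (Kabsch-style); a sketch under that conventional approach, not the paper's method:

```python
import numpy as np

def optimal_rotation(P, Q):
    """Return the proper rotation R minimizing sum ||R p_i - q_i||^2 over
    centered point sets (rows of P map onto rows of Q). The sign correction
    keeps det(R) = +1, i.e. a proper rotation rather than a general
    orthogonal transformation."""
    P0 = P - P.mean(axis=0)
    Q0 = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P0.T @ Q0)     # H = P0^T Q0 = U S V^T
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # -1 when the best orthogonal map is improper
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

Dropping the sign correction yields the more general orthogonal transformation, mirroring the one-variable switch in the solution matrix T that the abstract describes.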
NASA Astrophysics Data System (ADS)
Zeng, Huihui
In this paper, we show the large time asymptotic nonlinear stability of a superposition of viscous shock waves with viscous contact waves for systems of viscous conservation laws with small initial perturbations, provided that the strengths of these viscous waves are small with the same order. The results are obtained by elementary weighted energy estimates based on the underlying wave structure and a new estimate on the heat equation.
NASA Astrophysics Data System (ADS)
Carvalho, C. R.; Guerra, E. S.; Jalbert, Ginette
2008-04-01
We analyse a teleportation scheme of cavity field states. The experimental sketch discussed makes use of cavity quantum electrodynamics involving the interaction of Rydberg atoms with superconducting (micromaser) cavities as well as with classical microwave (Ramsey) cavities. In our scheme the Ramsey cavities and the atoms play the role of auxiliary systems used to teleport a field state, formed by a linear superposition of the vacuum |0⟩ and the one-photon state |1⟩, from one micromaser cavity to another.
NASA Astrophysics Data System (ADS)
Saadati-Niari, Maghsoud
2016-09-01
Creation of coherent superpositions in quantum systems with N_a states in the lower set and N_b states in the upper set is presented. The solution is derived by using the Morris-Shore transformation, which step by step reduces the fully coupled system to a three-state Λ-like system and a set of decoupled states. It is shown that, for properly timed pulses, robust population transfer from an initial ground state (or a superposition of M ground states) to an arbitrary coherent superposition of the ground states can be achieved by coincident pulses and/or STIRAP techniques.
Monte Carlo calculation of helical tomotherapy dose delivery
Zhao Yingli; Mackenzie, M.; Kirkby, C.; Fallone, B. G.
2008-08-15
Helical tomotherapy delivers intensity modulated radiation therapy using a binary multileaf collimator (MLC) to modulate a fan beam of radiation. This delivery occurs while the linac gantry and treatment couch are both in constant motion, so the beam describes, from a patient/phantom perspective, a spiral or helix of dose. The planning system models this continuous delivery as a large number (51) of discrete gantry positions per rotation, and given the small jaw/fan width setting typically used (1 or 2.5 cm) and the number of overlapping rotations used to cover the target (pitch often <0.5), the treatment planning system (TPS) potentially employs a very large number of static beam directions and leaf opening configurations to model the modulated fields. All dose calculations performed by the system employ a convolution/superposition model. In this work the authors perform a full Monte Carlo (MC) dose calculation of tomotherapy deliveries to phantom computed tomography (CT) data sets to verify the TPS calculations. All MC calculations are performed with the EGSnrc-based MC simulation codes, BEAMnrc and DOSXYZnrc. Simulations are performed by taking the sinogram (leaf opening versus time) of the treatment plan and decomposing it into 51 projections per rotation, as does the TPS, each of which is segmented further into multiple MLC opening configurations with different weights that correspond to leaf opening times. Each projection is then simulated by summing all of the opening configurations, and the overall rotational treatment is simulated by summing all of the projection simulations. Commissioning of the source model was verified by comparing measured and simulated values for the percent depth dose and beam profile shapes for various jaw settings. The accuracy of the MLC leaf width and tongue-and-groove spacing was verified by comparing measured and simulated values for the MLC leakage and a picket fence pattern. The validated source
A cute and highly contrast-sensitive superposition eye - the diurnal owlfly Libelloides macaronius.
Belušič, Gregor; Pirih, Primož; Stavenga, Doekele G
2013-06-01
The owlfly Libelloides macaronius (Insecta: Neuroptera) has large bipartite eyes of the superposition type. The spatial resolution and sensitivity of the photoreceptor array in the dorsofrontal eye part were studied with optical and electrophysiological methods. Using structured illumination microscopy, the interommatidial angle in the central part of the dorsofrontal eye was determined to be Δϕ = 1.1 deg. Eye shine measurements with an epi-illumination microscope yielded an effective superposition pupil size of about 300 facets. Intracellular recordings confirmed that all photoreceptors were UV receptors (λmax = 350 nm). The average photoreceptor acceptance angle was 1.8 deg, with a minimum of 1.4 deg. The receptor dynamic range was two log units, and the Hill coefficient of the intensity-response function was n = 1.2. The signal-to-noise ratio of the receptor potential was remarkably high and constant across the whole dynamic range (root mean square (r.m.s.) noise = 0.5% of Vmax). Quantum bumps could not be observed at any light intensity, indicating low voltage gain. Presumably, the combination of large-aperture superposition optics feeding an achromatic array of relatively insensitive receptors with a steep intensity-response function creates a low-noise, high-spatial-acuity instrument. The sensitivity shift to the UV range reduces the clutter created by clouds within the sky image. These properties of the visual system are optimal for detecting small insect prey as contrasting spots against both clear and cloudy skies.
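The intensity-response function with Hill coefficient n = 1.2 reported above is conventionally modelled with the Hill equation; a sketch, where the half-saturating intensity K is a free parameter not given in the abstract:

```python
def hill_response(intensity, half_sat, n=1.2):
    """Normalized receptor potential V/Vmax as a function of light intensity,
    using a Hill (Naka-Rushton-type) equation with exponent n. At
    intensity == half_sat the response is exactly 0.5."""
    return intensity ** n / (intensity ** n + half_sat ** n)
```

A larger exponent n steepens the curve and compresses the dynamic range, consistent with the roughly two log-unit range measured for these receptors.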
Sagnac interferometry with coherent vortex superposition states in exciton-polariton condensates
NASA Astrophysics Data System (ADS)
Moxley, Frederick Ira; Dowling, Jonathan P.; Dai, Weizhong; Byrnes, Tim
2016-05-01
We investigate prospects of using counter-rotating vortex superposition states in nonequilibrium exciton-polariton Bose-Einstein condensates for the purposes of Sagnac interferometry. We first investigate the stability of vortex-antivortex superposition states, and show that they survive at steady state in a variety of configurations. Counter-rotating vortex superpositions are of potential interest to gyroscope and seismometer applications for detecting rotations. Methods of improving the sensitivity are investigated by targeting high momentum states via metastable condensation, and the application of periodic lattices. The sensitivity of the polariton gyroscope is compared to its optical and atomic counterparts. Due to the large interferometer areas in optical systems and small de Broglie wavelengths for atomic BECs, the sensitivity per detected photon is found to be considerably less for the polariton gyroscope than with competing methods. However, polariton gyroscopes have an advantage over atomic BECs in a high signal-to-noise ratio, and have other practical advantages such as room-temperature operation, area independence, and robust design. We estimate that the final sensitivities including signal-to-noise aspects are competitive with existing methods.
[Superposition impact character of air pollution from decentralization docks in a freshwater port].
Liu, Jian-chang; Li, Xing-hua; Xu, Hong-lei; Cheng, Jin-xiang; Wang, Zhong-dai; Xiao, Yang
2013-05-01
Air pollution from a freshwater port is mainly caused by dust, including material loading and unloading dust, road dust, and wind-erosion dust from stockpiles and bare soil. The dust pollution from a single dock differs markedly from the air pollution produced by multiple scattered docks. Jining Port of Shandong Province was selected as a case study to quantify the superposition impact of air pollution from multiple scattered docks on the regional air environment and to provide technical support for the systematic evaluation of port air pollution. The results indicate that (1) the air pollution from the freshwater port accounts for a low proportion of the pollution impact on regional environmental quality because the port consists of several small scattered docks; (2) however, the geometric center of the region covered by the docks is the most severely affected, receiving the greatest superposition of air pollution; and (3) the ADMS model is helpful for attaining an effective, integrated assessment of the superposition impact of multiple non-point pollution sources when differences in high-altitude weather conditions are not considered on a large scale.
A deterministic partial differential equation model for dose calculation in electron radiotherapy
NASA Astrophysics Data System (ADS)
Duclous, R.; Dubroca, B.; Frank, M.
2010-07-01
High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung
NASA Astrophysics Data System (ADS)
Diniz, Leonardo G.; Alijah, Alexander; Adamowicz, Ludwik; Mohallem, José R.
2015-07-01
Non-adiabatic vibrational calculations performed with the accuracy of 0.2 cm-1 spanning the whole energy spectrum up to the dissociation limit for 7LiH are reported. A so far unknown v = 23 energy level is predicted. The key feature of the approach used in the calculations is a valence-bond (VB) based procedure for determining the effective masses of the two vibrating atoms, which depend on the internuclear distance, R. It is found that all LiH electrons participate in the vibrational motion. The R-dependent masses are obtained from the analysis of the simple VB two-configuration ionic-covalent representation of the electronic wave function. These findings are consistent with an interpretation of the chemical bond in LiH as a quantum mechanical superposition of one-electron ionic and covalent states.
NASA Astrophysics Data System (ADS)
Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Byungdo; Cheong, Kwang-Ho
2014-12-01
For a better understanding of the accuracy of state-of-the-art radiation therapies, 2-dimensional dosimetry in a patient-like environment will be helpful. Therefore, the dosimetry of EBT3 films in non-water-equivalent tissues was investigated, and the accuracy of commercially-used dose-calculation algorithms was evaluated against EBT3 measurements. Dose distributions were measured with EBT3 films for an in-house-designed phantom that contained a lung or a bone substitute, i.e., an air cavity (3 × 3 × 3 cm3) or teflon (2 × 2 × 2 cm3 or 3 × 3 × 3 cm3), respectively. The phantom was irradiated with 6-MV X-rays with field sizes of 2 × 2, 3 × 3, and 5 × 5 cm2. The accuracy of EBT3 dosimetry was evaluated by comparing the measured dose with the dose obtained from Monte Carlo (MC) simulations. A dose to the bone-equivalent material was obtained by multiplying the EBT3 measurements by the stopping power ratio (SPR). The EBT3 measurements were then compared with the predictions from four algorithms: Monte Carlo (MC) in iPlan, acuros XB (AXB) and the analytical anisotropic algorithm (AAA) in Eclipse, and superposition-convolution (SC) in Pinnacle. For the air cavity, the EBT3 measurements agreed with the MC calculation to within 2% on average. For teflon, the EBT3 measurements differed by 9.297% (±0.9229%) on average from the Monte Carlo calculation before dose conversion, and by 0.717% (±0.6546%) after applying the SPR. The doses calculated by using the MC, AXB, AAA, and SC algorithms for the air cavity differed from the EBT3 measurements on average by 2.174, 2.863, 18.01, and 8.391%, respectively; for teflon, the average differences were 3.447, 4.113, 7.589, and 5.102%. The EBT3 measurements corrected with the SPR agreed with the MC results to within 2% on average, both within and beyond the heterogeneities, indicating that EBT3 dosimetry can be used in heterogeneous media. The MC and the AXB dose calculation algorithms exhibited clinically-acceptable accuracy (<5%) in
Lava flow superposition: the reactivation of flow units in compound flow fields
NASA Astrophysics Data System (ADS)
Applegarth, Jane; Pinkerton, Harry; James, Mike; Calvari, Sonia
2010-05-01
Long-lived basaltic eruptions often produce compound ʻaʻā lava flow fields that are constructed of many juxtaposed and superposed flow units. We have examined the processes that result from superposition when the underlying flows are sufficiently young to have immature crusts and deformable cores. It has previously been recognised that the time elapsed between the emplacement of two units determines the fate of the underlying flow[1], because it controls the rheological contrast between the units. If the time interval is long, the underlying flow is able to cool, degas and develop a rigid crust, so that it shows no significant response to loading, and the two units are easily discernible stratigraphically. If the interval is short, the underlying flow has little time to cool, so the two units may merge and cool as a single unit, forming a 'multiple' flow[1]. In this case, the individual units are more difficult to distinguish post-eruption. The effects of superposition in intermediate cases, when underlying flows have immature roofs, are less well understood, and have received relatively little attention in the literature, possibly due to the scarcity of observations. However, the lateral and vertical coalescence of lava tubes has been described on Mt. Etna, Sicily[2], suggesting that earlier tubes can be reactivated and lengthened as a result of superposition. Through our recent analysis of images taken by INGV Catania during the 2001 eruption of Mt. Etna (Sicily), we have observed that the emplacement of new surface flows can reactivate underlying units by squeezing the still-hot flow core away from the site of loading. We have identified three different styles of reactivation that took place during that eruption, which depend on the time interval separating the emplacement of the two flows, and hence the rheological contrast between them. For relatively long time intervals (> 2 days), hence high rheological contrasts, superposition can cause an overpressure
NASA Astrophysics Data System (ADS)
Hu, Xing-Biao; Bullough, Robin
1998-03-01
In this paper, the Caudrey-Dodd-Gibbon-Kotera-Sawada hierarchy in bilinear form is considered. A Bäcklund transformation for the CDGKS hierarchy is presented. Under certain conditions, the corresponding nonlinear superposition formula is proved.
Tomita; Sugiyama; Sato; Delaunay; Hayashi
2000-01-01
Cross-sectional transmission electron microscopy observation of CoPtC thin films showed that 10 nm sized ultrafine particles of CoPt typically were elongated along the substrate normal. Analysis of the superposition of 40 micro-electron diffraction patterns showed that there was no preferred crystal orientation of CoPt particles. This superpositioning technique can be applied to thin films, whose X-ray diffraction analysis is difficult due to the small size of the crystals. PMID:10791426
Convolution-Based Forced Detection Monte Carlo Simulation Incorporating Septal Penetration Modeling
Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.
2010-01-01
In SPECT imaging, photon transport effects such as scatter, attenuation and septal penetration can negatively affect the quality of the reconstructed image and the accuracy of quantitation estimation. As such, it is useful to model these effects as carefully as possible during the image reconstruction process. Many of these effects can be included in Monte Carlo (MC) based image reconstruction using convolution-based forced detection (CFD). With CFD Monte Carlo (CFD-MC), often only the geometric response of the collimator is modeled, thereby making the assumption that the collimator materials are thick enough to completely absorb photons. However, in order to retain high collimator sensitivity and high spatial resolution, it is required that the septa be as thin as possible, thus resulting in a significant amount of septal penetration for high energy radionuclides. A method for modeling the effects of both collimator septal penetration and geometric response using ray tracing (RT) techniques has been developed and included in a CFD-MC program. Two look-up tables are pre-calculated based on the specific collimator parameters and radionuclides, and subsequently incorporated into the SIMIND MC program. One table consists of the cumulative septal thickness between any point on the collimator and the center location of the collimator. The other table presents the resultant collimator response for a point source at different distances from the collimator and for various energies. A series of RT simulations has been compared to experimental data for different radionuclides and collimators. Results of the RT technique match the experimental collimator-response data very well, producing correlation coefficients higher than 0.995. Reasonable values of the parameters in the lookup table and computation speed are discussed in order to achieve high accuracy while using minimal storage space for the look-up tables. In order to achieve noise-free projection images from MC, it
Rana, Suresh B.
2013-01-01
Purpose: Accurate dose calculation algorithms are essential in photon beam radiation therapy. The objective of this study was to measure and assess the ability of the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) to predict doses beyond a high-density heterogeneity. Materials and Methods: An inhomogeneous phantom of five layers was created in the Eclipse planning system (version 8.6.15). The layers of the phantom were assigned as water (first, or top), air (second), water (third), bone (fourth), and water (fifth, or bottom). Depth doses in water (the bottom medium) were calculated for 100 monitor units (MUs) with a 6 MV photon beam for different field sizes using AAA and PBC with heterogeneity correction. Combinations of solid water, polyvinyl chloride (PVC), and Styrofoam were then assembled to mimic the phantom, and doses for 100 MUs were acquired with a cylindrical ionization chamber at selected depths beyond the high-density heterogeneity interface. The measured and calculated depth doses were then compared. Results: AAA's values had better agreement with measurements at all measured depths. Dose overestimation by AAA (up to 5.3%) and by PBC (up to 6.7%) was higher in proximity to the high-density heterogeneity interface, and the dose discrepancies were more pronounced for larger field sizes. The errors in dose estimation by AAA and PBC may be due to improper beam modeling of primary beam attenuation, lateral scatter contributions, or both in heterogeneous media that include low- and high-density materials. Conclusions: AAA is more accurate than PBC for dose calculations in treating deep-seated tumors beyond a high-density heterogeneity interface. PMID:24455541
NASA Astrophysics Data System (ADS)
He, Cenlin; Takano, Yoshi; Liou, Kuo-Nan; Yang, Ping; Li, Qinbin; Mackowski, Daniel W.
2016-11-01
We perform a comprehensive intercomparison of the geometric-optics surface-wave (GOS) approach, the superposition T-matrix method, and laboratory measurements for optical properties of fresh and coated/aged black carbon (BC) particles with complex structures. GOS and T-matrix calculations capture the measured optical (i.e., extinction, absorption, and scattering) cross sections of fresh BC aggregates, with 5-20% differences depending on particle size. We find that the T-matrix results tend to be lower than the measurements, due to uncertainty in theoretical approximations of realistic BC structures, particle property measurements, and numerical computations in the method. On the contrary, the GOS results are higher than the measurements (hence the T-matrix results) for BC radii <100 nm, because of computational uncertainty for small particles, while the discrepancy substantially reduces to 10% for radii >100 nm. We find good agreement (differences <5%) between the two methods in asymmetry factors for various BC sizes and aggregating structures. For aged BC particles coated with sulfuric acid, GOS and T-matrix results closely match laboratory measurements of optical cross sections. Sensitivity calculations show that differences between the two methods in optical cross sections vary with coating structures for radii <100 nm, while differences decrease to 10% for radii >100 nm. We find small deviations (≤10%) in asymmetry factors computed from the two methods for most BC coating structures and sizes, but several complex structures have 10-30% differences. This study provides the foundation for downstream application of the GOS approach in radiative transfer and climate studies.
NASA Astrophysics Data System (ADS)
Anton, J. M.; Grau, J. B.; Tarquis, A. M.; Andina, D.; Sanchez, M. E.
2012-04-01
The authors have been involved in Model Codes for Construction prior to the Eurocodes (now Euronorms), in a Drainage Instruction for Roads for Spain that adapted a prediction model from the BPR (Bureau of Public Roads) of the USA to account for the evident regional differences in the Iberian Peninsula and the Spanish Isles, and in some related studies. They used Extreme Value Type I (Gumbel law) models with independent actions in superposition; this law was also adopted by CEDEX to obtain maps of extreme rains. These methods could be extrapolated to other extreme value distributions, but the first step was useful for setting valid superposition schemas for actions in norms. As a real case, in the east of Spain rain usually comes extensively from normal weather perturbations, but in other cases "cold drop" events produce local high rains of about 400 mm in a day, causing floods and in some cases local disasters. The city of Valencia in eastern Spain was flooded to a depth of 1.5 m by a cold drop in 1957, and the river Turia, which formerly ran through the city, was later diverted some kilometers to the south into a wider canal. Under the Gumbel law the expected intensity grows with the time of occurrence, giving a value for each "return period", but the rate of increase grows with the "annual dispersion" of the Gumbel law, and some rare dangerous events may become quite probable over periods of many years. This can be demonstrated with relatively simple models, e.g. with the Extreme Value Type I law, and these could be made more precise or discussed further. Such effects were used for the superposition of actions on a structure in Model Codes, and may be combined with hydraulic effects, e.g. for bridges over rivers. These different Gumbel laws, or other extreme value laws with different dispersions, may apply to marine actions of waves, earthquakes, tsunamis, and perhaps human perturbations, which over historical periods could include industrial catastrophes or wars.
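The growth of the expected intensity with return period under the Gumbel (Extreme Value Type I) law, and its sensitivity to the dispersion (scale) parameter, can be sketched as follows; the location and scale values in the test are illustrative, not fitted to Spanish rainfall data:

```python
import math

def gumbel_return_level(T_years, location, scale):
    """Intensity exceeded on average once every T_years under a Gumbel law
    with the given location (mode) and scale (dispersion). Inverts the CDF
    F(x) = exp(-exp(-(x - location)/scale)) at p = 1 - 1/T."""
    p = 1.0 - 1.0 / T_years  # annual non-exceedance probability
    return location - scale * math.log(-math.log(p))
```

The return level grows roughly linearly in log T with slope set by the scale parameter, which is why regimes with large annual dispersion (such as "cold drop" rainfall) make rare, dangerous events comparatively probable over long periods.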
Two-level pipelined systolic array for multi-dimensional convolution
Kung, H.T.; Ruane, L.M.; Yen, D.W.L.
1982-11-01
This paper describes a systolic array for the computation of n-dimensional (n-D) convolutions of any positive integer n. Systolic systems usually achieve high performance by allowing computations to be pipelined over a large array of processing elements. To achieve even higher performance, the systolic array of this paper utilizes a second level of pipelining by allowing the processing elements themselves to be pipelined to an arbitrary degree. Moreover, it is shown that as far as orders of magnitude are concerned, the total amount of memory required by the systolic array is no more than that needed by any convolution device that reads in each input data item only once. Thus if only schemes that use the minimum-possible I/O are considered, the systolic array is not only high performance, but also optimal in terms of the amount of required memory.
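As a functional reference for what such an array computes, here is a direct (valid-mode) convolution for the n = 2 case in plain Python; the systolic design pipelines exactly these multiply-accumulate operations across processing elements, so this sketch defines the expected output rather than the hardware organization:

```python
def conv2d(image, kernel):
    """Direct valid-mode 2-D convolution (kernel flipped, per the true
    convolution definition) over lists of lists. Output size is
    (ih - kh + 1) x (iw - kw + 1)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0.0] * (iw - kw + 1) for _ in range(ih - kh + 1)]
    for r in range(ih - kh + 1):
        for c in range(iw - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    # flip the kernel in both dimensions (convolution, not correlation)
                    acc += image[r + i][c + j] * kernel[kh - 1 - i][kw - 1 - j]
            out[r][c] = acc
    return out
```

Each output element reuses every input item it overlaps, which is why a systolic array that reads each input only once must buffer partial results internally.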
Chang, C.Y.
1986-01-01
New results on efficient forms of decoding convolutional codes based on the Viterbi and stack algorithms using systolic array architectures are presented. Some theoretical aspects of systolic arrays are also investigated. First, a systolic array implementation of the Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. Issues concerning composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computation overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise form so that it can be efficiently implemented by a special type of systolic array called a systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift registers, and ripple registers are proposed. Finally, a systematic approach is presented for designing systolic arrays for certain general classes of recursively formulated algorithms.
Shkolyar, Anat; Gefen, Amit; Benayahu, Dafna; Greenspan, Hayit
2015-08-01
We propose a semi-automated pipeline for the detection of possible cell divisions in live-imaging microscopy and the classification of these mitosis candidates using a Convolutional Neural Network (CNN). We use time-lapse images of NIH3T3 scratch assay cultures, extract patches around bright candidate regions that then undergo segmentation and binarization, followed by a classification of the binary patches into either containing or not containing cell division. The classification is performed by training a Convolutional Neural Network on a specially constructed database. We show strong results of AUC = 0.91 and F-score = 0.89, competitive with state-of-the-art methods in this field. PMID:26736369
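The candidate-extraction and binarization front end of this pipeline can be sketched as below; the plain intensity threshold for "bright candidate regions", the patch size, and the function name are simplifying assumptions, and the CNN classification stage that follows in the paper is omitted:

```python
import numpy as np

def extract_binary_patches(frame, threshold, patch=9):
    """Extract square binary patches centered on bright pixels of a
    time-lapse frame: threshold the frame, then cut a patch around each
    above-threshold pixel (skipping those too close to the border) and
    binarize it. Returns a list of uint8 arrays of shape (patch, patch)."""
    half = patch // 2
    patches = []
    ys, xs = np.nonzero(frame > threshold)
    for y, x in zip(ys, xs):
        y0, x0 = y - half, x - half
        if y0 < 0 or x0 < 0 or y0 + patch > frame.shape[0] or x0 + patch > frame.shape[1]:
            continue  # candidate too close to the image border
        patches.append((frame[y0:y0 + patch, x0:x0 + patch] > threshold).astype(np.uint8))
    return patches
```

In the paper's pipeline, patches like these (after segmentation) are what the Convolutional Neural Network classifies as containing or not containing a cell division.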
Brain tumor grading based on Neural Networks and Convolutional Neural Networks.
Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding
2015-08-01
This paper studies brain tumor grading using multiphase MRI images and compares the results across various configurations of a deep learning structure and baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between the multiphase MRIs. Compared to other studies, which involve additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by sensitivity and specificity. The results show a maximum improvement of 18% in the grading performance of Convolutional Neural Networks over Neural Networks, based on sensitivity and specificity. We also visualize the kernels trained in different layers and display some self-learned features obtained from the Convolutional Neural Networks. PMID:26736358
Video-based convolutional neural networks for activity recognition from robot-centric videos
NASA Astrophysics Data System (ADS)
Ryoo, M. S.; Matthies, Larry
2016-05-01
In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their application to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.
Coherent atom-molecule superpositions and other weird stuff in Rb 85 BEC
NASA Astrophysics Data System (ADS)
Wieman, Carl
2002-05-01
The Feshbach resonance in rubidium 85 has opened up a new area of BEC physics involving adjustable interactions and novel methods of manipulation. We have used this to study the collapse behavior ("Bosenova") as the interactions are made negative, and a variety of curious effects when the interactions are made large and repulsive. By using rapid magnetic field pulse sequences we have recently created coherent superpositions of atomic and molecular BECs ("molatoms"). These are observed as oscillations in the existence of the condensate as a function of time. The oscillation frequency exactly matches the molecular bound state energy. I will discuss these and other interesting behaviors observed in Rb 85 condensates.
Superposition of Solitons with Arbitrary Parameters for Higher-order Equations
NASA Astrophysics Data System (ADS)
Ankiewicz, A.; Chowdury, A.
2016-07-01
The way in which solitons propagate and collide is an important theme in various areas of physics. We present a systematic study of the superposition of solitons in systems governed by higher-order equations related to the nonlinear Schrödinger family. We allow for arbitrary amplitudes and relative velocities and include an infinite number of equations in our analysis of collisions and superposed solitons. The formulae we obtain can be useful in determining the influence of subtle effects like higher-order dispersion in optical fibres and small delays in the material responses to imposed impulses.
NASA Astrophysics Data System (ADS)
Liu, Shu-Guang; Fan, Hong-Yi
2009-12-01
We find that constructing the two mutually-conjugate tripartite entangled state representations naturally leads to the entangled Fourier transformation. We then derive the convolution theorem for the three-dimensional entangled fractional Fourier transformation in the context of quantum mechanics.
Fuzzy Logic Module of Convolutional Neural Network for Handwritten Digits Recognition
NASA Astrophysics Data System (ADS)
Popko, E. A.; Weinstein, I. A.
2016-08-01
Optical character recognition is one of the important issues in the field of pattern recognition. This paper presents a method for recognizing handwritten digits based on the modeling of a convolutional neural network. An integrated fuzzy logic module based on a structural approach was developed. The system architecture adjusted the output of the neural network to improve the quality of symbol identification. It was shown that the proposed algorithm was flexible and that a high recognition rate of 99.23% was achieved.
Hardware accelerator of convolution with exponential function for image processing applications
NASA Astrophysics Data System (ADS)
Panchenko, Ivan; Bucha, Victor
2015-12-01
In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with an exponential function. This filter can be used in many Image Processing (IP) applications, e.g. depth-dependent image blur, image enhancement and disparity estimation. We have adapted the RTL implementation of this filter to provide maximum throughput within the constraints of available memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
Convolution-based estimation of organ dose in tube current modulated CT
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Segars, W. P.; Dixon, R. L.; Samei, Ehsan
2015-03-01
Among the various metrics that quantify radiation dose in computed tomography (CT), organ dose is one of the most representative quantities reflecting patient-specific radiation burden.1 Accurate estimation of organ dose requires one to effectively model the patient anatomy and the irradiation field. As illustrated in previous studies, the patient anatomy factor can be modeled using a library of computational phantoms with representative body habitus.2 However, the modeling of irradiation field can be practically challenging, especially for CT exams performed with tube current modulation. The central challenge is to effectively quantify the scatter irradiation field created by the dynamic change of tube current. In this study, we present a convolution-based technique to effectively quantify the primary and scatter irradiation field for TCM examinations. The organ dose for a given clinical patient can then be rapidly determined using the convolution-based method, a patient-matching technique, and a library of computational phantoms. 58 adult patients were included in this study (age range: 18-70 y.o., weight range: 60-180 kg). One computational phantom was created based on the clinical images of each patient. Each patient was optimally matched against one of the remaining 57 computational phantoms using a leave-one-out strategy. For each computational phantom, the organ dose coefficients (CTDIvol-normalized organ dose) under fixed tube current were simulated using a validated Monte Carlo simulation program. Such organ dose coefficients were multiplied by a scaling factor, (CTDIvol)organ, convolution, that quantifies the regional irradiation field. The convolution-based organ dose was compared with the organ dose simulated from Monte Carlo program with TCM profiles explicitly modeled on the original phantom created based on patient images. The estimation error was within 10% across all organs and modulation profiles for abdominopelvic examination. This strategy
Quantum Fields Obtained from Convoluted Generalized White Noise Never Have Positive Metric
NASA Astrophysics Data System (ADS)
Albeverio, Sergio; Gottschalk, Hanno
2016-05-01
It is proven that the relativistic quantum fields obtained from analytic continuation of convoluted generalized (Lévy type) noise fields have positive metric, if and only if the noise is Gaussian. This follows as an easy observation from a criterion by Baumann, based on the Dell'Antonio-Robinson-Greenberg theorem, for a relativistic quantum field in positive metric to be a free field.
Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification
Pang, Shan; Yang, Xinyi
2016-01-01
In recent years, some deep learning methods have been developed and applied to image classification applications, such as convolutional neuron network (CNN) and deep belief network (DBN). However they are suffering from some problems like local minima, slow convergence rate, and intensive human intervention. In this paper, we propose a rapid learning method, namely, deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and fast training of ELM. It uses multiple alternate convolution layers and pooling layers to effectively abstract high level features from input images. Then the abstracted features are fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to reduce dimensionality of features greatly, thus saving much training time and computation resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods. PMID:27610128
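The "fast training" that DC-ELM inherits from the extreme learning machine can be sketched in a few lines (a minimal generic ELM classifier on a toy problem, assuming random tanh hidden units; the convolutional/pooling feature stages of DC-ELM are omitted):

```python
import numpy as np

# Minimal ELM classifier: random hidden weights are fixed (never trained),
# only the output weights are solved in closed form by least squares.

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=200, n_classes=2):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    T = np.eye(n_classes)[y]                      # one-hot targets
    beta = np.linalg.lstsq(H, T, rcond=None)[0]   # closed-form output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy two-class problem separable along the first coordinate.
X = rng.normal(size=(400, 10))
y = (X[:, 0] > 0).astype(int)
model = elm_train(X, y)
acc = (elm_predict(model, X) == y).mean()
assert acc > 0.9
```

The single least-squares solve replaces iterative backpropagation, which is why ELM-style output layers train orders of magnitude faster than fully trained deep networks on the same features.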
Convolution modeling of two-domain, nonlinear water-level responses in karst aquifers (Invited)
NASA Astrophysics Data System (ADS)
Long, A. J.
2009-12-01
Convolution modeling is a useful method for simulating the hydraulic response of water levels to sinking streamflow or precipitation infiltration at the macro scale. This approach is particularly useful in karst aquifers, where the complex geometry of the conduit and pore network is not well characterized but can be represented approximately by a parametric impulse-response function (IRF) with very few parameters. For many applications, one-dimensional convolution models can be equally effective as complex two- or three-dimensional models for analyzing water-level responses to recharge. Moreover, convolution models are well suited for identifying and characterizing the distinct domains of quick flow and slow flow (e.g., conduit flow and diffuse flow). Two superposed lognormal functions were used in the IRF to approximate the impulses of the two flow domains. Nonlinear response characteristics of the flow domains were assessed by observing temporal changes in the IRFs. Precipitation infiltration was simulated by filtering the daily rainfall record with a backward-in-time exponential function that weights each day’s rainfall with the rainfall of previous days and thus accounts for the effects of soil moisture on aquifer infiltration. The model was applied to the Edwards aquifer in Texas and the Madison aquifer in South Dakota. Simulations of both aquifers showed similar characteristics, including a separation on the order of years between the quick-flow and slow-flow IRF peaks and temporal changes in the IRF shapes when water levels increased and empty pore spaces became saturated.
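The two-domain convolution idea above can be sketched numerically (parameter values and the synthetic rainfall are illustrative assumptions, not the calibrated Edwards or Madison models): rainfall is pre-filtered with a backward-in-time exponential weighting, then convolved with the sum of two lognormal impulse-response functions.

```python
import numpy as np

# Two superposed lognormal IRFs: quick (conduit) flow peaking in weeks,
# slow (diffuse) flow peaking on the order of years.

def lognormal_irf(t, mu, sigma):
    irf = np.zeros_like(t, dtype=float)
    pos = t > 0
    irf[pos] = np.exp(-(np.log(t[pos]) - mu) ** 2 / (2 * sigma ** 2)) / (
        t[pos] * sigma * np.sqrt(2 * np.pi))
    return irf

t = np.arange(0.0, 2000.0)                         # days
irf = lognormal_irf(t, mu=3.0, sigma=0.6) \
    + 0.3 * lognormal_irf(t, mu=6.5, sigma=0.4)    # quick flow + slow flow

rng = np.random.default_rng(1)
rain = rng.exponential(2.0, size=2000) * (rng.random(2000) < 0.2)

# Backward-in-time exponential filter: today's effective infiltration also
# reflects recent antecedent rainfall (soil-moisture memory).
w = np.exp(-np.arange(30) / 10.0)
infil = np.convolve(rain, w / w.sum())[:2000]

head = np.convolve(infil, irf)[:2000]              # simulated water-level response
assert head.shape == (2000,) and np.all(head >= 0)
```

With only five or so IRF parameters, such a model can be refit over successive time windows, which is how temporal changes in IRF shape (the nonlinear response as pore space saturates) would be observed.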
SAS-Pro: Simultaneous Residue Assignment and Structure Superposition for Protein Structure Alignment
Shah, Shweta B.; Sahinidis, Nikolaos V.
2012-01-01
Protein structure alignment is the problem of determining an assignment between the amino-acid residues of two given proteins in a way that maximizes a measure of similarity between the two superimposed protein structures. By identifying geometric similarities, structure alignment algorithms provide critical insights into protein functional similarities. Existing structure alignment tools adopt a two-stage approach to structure alignment by decoupling and iterating between the assignment evaluation and structure superposition problems. We introduce a novel approach, SAS-Pro, which addresses the assignment evaluation and structure superposition simultaneously by formulating the alignment problem as a single bilevel optimization problem. The new formulation does not require the sequentiality constraints, thus generalizing the scope of the alignment methodology to include non-sequential protein alignments. We employ derivative-free optimization methodologies for searching for the global optimum of the highly nonlinear and non-differentiable RMSD function encountered in the proposed model. Alignments obtained with SAS-Pro have better RMSD values and larger lengths than those obtained from other alignment tools. For non-sequential alignment problems, SAS-Pro leads to alignments with high degree of similarity with known reference alignments. The source code of SAS-Pro is available for download at http://eudoxus.cheme.cmu.edu/saspro/SAS-Pro.html. PMID:22662161
NASA Astrophysics Data System (ADS)
Yamada, Hiroshi; Ikeda, Masayuki; Shimbo, Minoru; Miyano, Yasushi
In this paper, the effects of electron beam intensity, detergent and colorant on the creep rupture of polypropylene resin (PP), which is widely used in medicine containers, were investigated, and an evaluation method for the long-term prediction of creep rupture was examined. Concretely, PP resins with and without colorant were first prepared, and samples irradiated with various electron beam intensities were made. Creep rupture tests of those samples were carried out in detergents of various concentrations. The effects of those factors on creep rupture were considered, and long-term prediction was attempted using the time-temperature superposition principle for creep deformation. The following results were obtained. (1) Although the creep rupture of PP resin is affected by the presence of colorant, the intensity of electron beam irradiation and the detergent, the time-temperature dependence of creep rupture of PP resin including those affecting factors can be estimated by using the time-temperature superposition principle for creep deformation of the original PP resin. Based on this equivalency, it is possible to predict the long-term creep rupture of PP resin. (2) Creep rupture is affected by the presence of colorant, the intensity of electron beam irradiation and the detergent, and occurs earlier when the intensity of electron beam irradiation and the concentration of detergent are increased.
NASA Astrophysics Data System (ADS)
Wyss, Hans M.
2007-03-01
The rheological properties of soft materials such as concentrated suspensions, emulsions, or foams often exhibit surprisingly universal linear and nonlinear features. Here we show that their linear and nonlinear viscoelastic responses can be unified in a single picture by considering the effect of the strain-rate amplitude on the structural relaxation of the material. We present a new approach to oscillatory rheology, which keeps the strain rate amplitude fixed as the oscillation frequency is varied. This allows for a detailed study of the effects of strain rate on the structural relaxation of soft materials. Our data exhibits a characteristic scaling, which isolates the response due to structural relaxation, even when it occurs at frequencies too low to be accessible with standard techniques. Our approach is reminiscent of a technique called time-temperature superposition (TTS), where rheological curves measured at different temperatures are shifted onto a single master curve that reflects the viscoelastic behavior in a dramatically extended range of frequencies. By analogy, we call our approach strain-rate frequency superposition (SRFS). Our experimental results show that nonlinear viscoelastic measurements contain useful information on the slow relaxation dynamics of soft materials. The data indicates that the yielding behavior of soft materials directly probes the structural relaxation process itself, shifted towards higher frequencies by an applied strain rate. This suggests that SRFS will provide new insight into the physical mechanisms that govern the viscoelastic response of a wide range of soft materials.
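The master-curve construction underlying TTS and SRFS can be sketched with a toy model (a Maxwell-like loss modulus whose relaxation time scales inversely with strain-rate amplitude is an assumed illustration, not the paper's data): curves measured at different conditions collapse when frequency is rescaled by a shift factor.

```python
import numpy as np

# Toy single-mode loss modulus G''(omega) = omega*tau / (1 + (omega*tau)^2).
def loss_modulus(omega, tau):
    return omega * tau / (1 + (omega * tau) ** 2)

omega = np.logspace(-2, 2, 50)
rates = [0.1, 1.0, 10.0]

# SRFS-style assumption: the structural relaxation time shifts as tau ~ 1/rate.
curves = {g: loss_modulus(omega, tau=1.0 / g) for g in rates}

# Each curve is identical to the reference master curve evaluated at a
# shifted frequency omega/g: rescaling frequency collapses all curves.
for g in rates:
    shifted = loss_modulus(omega / g, tau=1.0)
    assert np.allclose(shifted, curves[g])
```

The collapse works because G'' depends only on the product omega*tau; any process that merely rescales tau (temperature in TTS, strain-rate amplitude in SRFS) slides the curve along the frequency axis without changing its shape.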
Lebyodkin, M A; Shashkov, I V; Lebedkina, T A; Mathis, K; Dobron, P; Chmelik, F
2013-10-01
Various dynamical systems with many degrees of freedom display avalanche dynamics, which is characterized by scale invariance reflected in power-law statistics. The superposition of avalanche processes in real systems driven at a finite velocity may influence the experimental determination of the underlying power law. The present paper reports results of an investigation of this effect using the example of acoustic emission (AE) accompanying plastic deformation of crystals. Indeed, recent studies of AE not only proved that the dynamics of crystal defects obeys power-law statistics, but also led to a hypothesis of universality of the scaling law. We examine the sensitivity of the apparent statistics of AE to the parameters applied to individualize AE events. Two different alloys, MgZr and AlMg, both displaying strong AE but characterized by different plasticity mechanisms, are investigated. It is shown that the power-law indices display good robustness over wide ranges of parameters even in conditions leading to very strong superposition of AE events, although some deviations from the persistent values are also detected. The totality of the results confirms the scale-invariant character of deformation processes on the scale relevant to AE, but uncovers essential differences between the power-law exponents found for the two kinds of alloys.
Three-dimensional air flow model for soil venting: Superposition of analytical functions
Cho, J.S.
1993-01-01
A three-dimensional computer model was developed for the simulation of the soil-air pressure distribution at steady state and specific discharge vectors during soil venting with multiple wells in unsaturated soil. The Kirchhoff transformation of dependent variables and coordinate transforms allowed the adoption of the superposition of analytical functions to satisfy the differential equations and boundary conditions. A venting well was represented with a line source of finite length in an infinite homogeneous medium. The boundary conditions at the soil surface and the water table were approximated by the superposition of a large number of mirror image wells on the opposite sides of the boundaries. The numerical accuracy of the model was checked by the evaluation of one of the boundary conditions and the comparison of a simulation result with an available analytical solution from the literature. Simulations of various layouts of operating systems with multiple wells required minimal computational expense. The model was very flexible and easy to use, and its numerical results proved to be sufficiently accurate.
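The image-well trick can be sketched in 2-D with point sinks rather than the model's finite line sources (an illustrative reduction, not the paper's formulation): an image well of equal strength mirrored across a no-flow boundary cancels the normal flux there by symmetry.

```python
import numpy as np

# Superposition of logarithmic point-well potentials in 2-D.
def potential(x, y, wells):
    """Sum of ln-r potentials for (x0, y0, strength) point wells."""
    phi = np.zeros_like(x, dtype=float)
    for x0, y0, q in wells:
        phi += q * np.log(np.hypot(x - x0, y - y0))
    return phi

# Real well below a no-flow boundary at y = 0, plus its mirror image
# of equal strength above the boundary.
wells = [(0.0, -2.0, 1.0), (0.0, +2.0, 1.0)]

# Verify d(phi)/dy ~ 0 on the boundary with a central difference.
x = np.linspace(-5, 5, 11)
eps = 1e-6
dphi_dy = (potential(x, np.full_like(x, +eps), wells)
           - potential(x, np.full_like(x, -eps), wells)) / (2 * eps)
assert np.max(np.abs(dphi_dy)) < 1e-6
```

A constant-potential boundary (like the soil surface held at atmospheric pressure) is handled the same way with an image of opposite sign; stacking two parallel boundaries, as in the model, requires an infinite series of reflected images that is truncated after enough terms.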
NASA Astrophysics Data System (ADS)
Garcia-March, Miguel-Angel; Carr, Lincoln D.
2011-03-01
We study the dynamics of ultracold bosons in three-dimensional double wells when they are allowed either to condense in single-particle ground states or to occupy excited states. On the one hand, the introduction of second level single-particle states opens a range of new dynamical regimes. On the other, since the second level eigenstates can carry angular momentum, NOON-like macroscopic superposition (MS) states of atoms with non-zero angular momentum can be obtained. This leads to the study of the dynamics of atoms carrying vorticity while tunneling between wells. We obtain new tunneling processes, like vortex hopping and vortex-antivortex pair superposition along with the sloshing of atoms between both wells. The resulting vortex MS states are much more robust against decoherence than the usual NOON states, as all atoms in the vortex core region must be resolved, not just a single atom. L.D.C acknowledges support from the National Science Foundation under Grant PHY-0547845 as part of the NSF CAREER program. M.A.G.M acknowledges support by the Fulbright Commission, MEC, and FECYT.
Phase sensitivity in deformed-state superposition considering nonlinear phase shifts
NASA Astrophysics Data System (ADS)
Berrada, K.
2016-07-01
We study the problem of phase estimation for the deformation-state superposition (DSS) under perfect and lossy (due to a dissipative interaction of the DSS with its environment) regimes. The study is also devoted to the phase enhancement of the quantum states resulting from a generalized non-linearity of the phase shifts, both without and with losses. We find that this kind of superposition can give the smallest variance in the phase parameter in comparison with usual Schrödinger cat states at different orders of non-linearity, even for a larger average number of photons. Given the significance of how a system is quantum correlated with its environment in the construction of a scalable quantum computer, the entanglement between the DSS and its environment is investigated during the dissipation. We show that partial entanglement trapping occurs during the dynamics, depending on the kind of deformation and the mean photon number. These features make the DSS with a larger average number of photons a good candidate for the implementation of quantum optics and information schemes with high precision.
Convolution-based estimation of organ dose in tube current modulated CT
Tian, Xiaoyu; Segars, W Paul; Dixon, Robert L; Samei, Ehsan
2016-01-01
Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460–7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18–70 years, weight range: 60–180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (hOrgan) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)organ, convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)organ, convolution with the organ dose coefficients (hOrgan). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and dose simulated using TCM Monte Carlo program was quantified. We further compared the
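The convolution-based estimate can be sketched numerically (all numbers below are illustrative assumptions; in the study, h_Organ and the dose-spread behavior come from validated Monte Carlo simulation): the TCM-driven tube-output profile along z is convolved with a dose-spread kernel, averaged over the organ extent, then scaled by the organ dose coefficient.

```python
import numpy as np

# Illustrative CTDIvol(z) profile produced by tube current modulation (mGy).
z = np.arange(200)                              # slice index along patient axis
ctdi_z = 10 + 5 * np.sin(z / 15.0)

# Normalized exponential spread kernel standing in for scatter along z.
kernel = np.exp(-np.abs(np.arange(-30, 31)) / 8.0)
kernel /= kernel.sum()
dose_field = np.convolve(ctdi_z, kernel, mode="same")

organ_mask = (z >= 80) & (z < 120)              # assumed organ location along z
ctdi_organ_conv = dose_field[organ_mask].mean() # (CTDIvol)_organ,convolution
h_organ = 1.2                                   # assumed CTDIvol-normalized coefficient

organ_dose = h_organ * ctdi_organ_conv          # final organ dose estimate
assert 0 < organ_dose < 30
```

Because the phantom-specific coefficients are precomputed, only the convolution and the organ average run per patient, which is what makes the real-time prospective estimate feasible.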
Leake, Stanley A.; Greer, William; Watt, Dennis; Weghorst, Paul
2008-01-01
According to the 'Law of the River', wells that draw water from the Colorado River by underground pumping need an entitlement for the diversion of water from the Colorado River. Consumptive use can occur through direct diversions of surface water, as well as through withdrawal of water from the river by underground pumping. To develop methods for evaluating the need for entitlements for Colorado River water, an assessment of possible depletion of water in the Colorado River by pumping wells is needed. Possible methods include simple analytical models and complex numerical ground-water flow models. For this study, an intermediate approach was taken that uses numerical superposition models with complex horizontal geometry, simple vertical geometry, and constant aquifer properties. The six areas modeled include larger extents of the previously defined river aquifer from the Lake Mead area to the Yuma area. For the modeled areas, a low estimate of transmissivity and an average estimate of transmissivity were derived from statistical analyses of transmissivity data. Aquifer storage coefficient, or specific yield, was selected on the basis of results of a previous study in the Yuma area. The USGS program MODFLOW-2000 (Harbaugh and others, 2000) was used with uniform 0.25-mile grid spacing along rows and columns. Calculations of depletion of river water by wells were made for a time of 100 years since the onset of pumping. A computer program was set up to run the models repeatedly, each time with a well in a different location. Maps were constructed for at least two transmissivity values for each of the modeled areas. The modeling results, based on the selected transmissivities, indicate that low values of depletion in 100 years occur mainly in parts of side valleys that are more than a few tens of miles from the Colorado River.
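A useful point of comparison for such numerical superposition models is the classic Glover-Balmer analytical estimate of the fraction of well pumping supplied by stream depletion; the parameter values below are illustrative, not from the study:

```python
import math

def depletion_fraction(d, T, S, t):
    """Glover-Balmer analytical estimate of the fraction of a well's pumping
    rate that is supplied by depletion of a nearby river.

    d : distance from well to river (m)
    T : transmissivity (m^2/day)
    S : storage coefficient or specific yield (dimensionless)
    t : time since onset of pumping (days)
    """
    return math.erfc(math.sqrt(S * d * d / (4.0 * T * t)))

# A well ~1 mile (1609 m) from the river, after 100 years of pumping:
frac = depletion_fraction(d=1609.0, T=500.0, S=0.2, t=100 * 365.25)
```

As the abstract notes, depletion stays low only for wells tens of miles from the river; the analytical fraction decreases monotonically with distance.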
Adaptive superposition of finite element meshes in linear and nonlinear dynamic analysis
NASA Astrophysics Data System (ADS)
Yue, Zhihua
2005-11-01
The numerical analysis of transient phenomena in solids, for instance, wave propagation and structural dynamics, is a very important and active area of study in engineering. Despite the current evolutionary state of modern computer hardware, practical analysis of large scale, nonlinear transient problems requires the use of adaptive methods where computational resources are locally allocated according to the interpolation requirements of the solution form. Adaptive analysis of transient problems involves obtaining solutions at many different time steps, each of which requires a sequence of adaptive meshes. Therefore, the execution speed of the adaptive algorithm is of paramount importance. In addition, transient problems require that the solution must be passed from one adaptive mesh to the next adaptive mesh with a bare minimum of solution-transfer error since this form of error compromises the initial conditions used for the next time step. A new adaptive finite element procedure (s-adaptive) is developed in this study for modeling transient phenomena in both linear elastic solids and nonlinear elastic solids caused by progressive damage. The adaptive procedure automatically updates the time step size and the spatial mesh discretization in transient analysis, achieving the accuracy and the efficiency requirements simultaneously. The novel feature of the s-adaptive procedure is the original use of finite element mesh superposition to produce spatial refinement in transient problems. The use of mesh superposition enables the s-adaptive procedure to completely avoid the need for cumbersome multipoint constraint algorithms and mesh generators, which makes the s-adaptive procedure extremely fast. Moreover, the use of mesh superposition enables the s-adaptive procedure to minimize the solution-transfer error. In a series of different solid mechanics problem types including 2-D and 3-D linear elastic quasi-static problems, 2-D material nonlinear quasi-static problems
NASA Astrophysics Data System (ADS)
Mitri, F. G.
2016-08-01
In this work, counterintuitive effects such as the generation of an axial (i.e., along the direction of wave motion) zero-energy flux density (i.e., axial Poynting singularity) and reverse (i.e., negative) propagation of nonparaxial quasi-Gaussian electromagnetic (EM) beams are examined. Generalized analytical expressions for the EM field's components of a coherent superposition of two high-order quasi-Gaussian vortex beams of opposite handedness and different amplitudes are derived based on the complex-source-point method, stemming from Maxwell's vector equations and the Lorenz gauge condition. The general solutions exhibiting unusual effects satisfy the Helmholtz and Maxwell's equations. The EM beam components are characterized by nonzero integer degree and order (n, m), respectively, an arbitrary waist w0, a diffraction convergence length known as the Rayleigh range zR, and a weighting (real) factor 0 ≤ α ≤ 1 that describes the transition of the beam from a purely vortex (α = 0) to a nonvortex (α = 1) type. An attractive feature of this superposition is the description of strongly focused (or strongly divergent) wave fields. Computations of the EM power density as well as the linear and angular momentum density fluxes illustrate the analysis with particular emphasis on the polarization states of the vector potentials forming the beams and the weight of the coherent beam superposition causing the transition from the vortex to the nonvortex type. Should some conditions determined by the polarization state of the vector potentials and the beam parameters be met, an axial zero-energy flux density is predicted in addition to a negative retrograde propagation effect. Moreover, rotation reversal of the angular momentum flux density with respect to the beam handedness is anticipated, suggesting the possible generation of negative (left-handed) torques. The results are particularly useful in applications involving the design of strongly focused optical laser
Prado, F. O.; Duzzioni, E. I.; Almeida, N. G. de; Moussa, M. H. Y.; Villas-Boas, C. J.
2011-07-15
In this paper we detail some results advanced in a recent letter [Prado et al., Phys. Rev. Lett. 102, 073008 (2009).] showing how to engineer reservoirs for two-level systems at absolute zero by means of a time-dependent master equation leading to a nonstationary superposition equilibrium state. We also present a general recipe showing how to build nonadiabatic coherent evolutions of a fermionic system interacting with a bosonic mode and investigate the influence of thermal reservoirs at finite temperature on the fidelity of the protected superposition state. Our analytical results are supported by numerical analysis of the full Hamiltonian model.
Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors
NASA Astrophysics Data System (ADS)
Gordon, J. J.; Siebers, J. V.
2007-04-01
The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Σ and σ. For clinically relevant combinations of σ, Σ and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion: σ[1 - γN/25] < 0.2, where γ = Σ/σ. They were found to be inaccurate for σ[1 - γN/25] > 0.2, because they failed to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when σ ⪆ σP, where σP = 0.32 cm is the standard deviation of the normal dose penumbra. (Qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if σP takes values other than 0.32 cm.) When σ ≪ σP, dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Σ and N. When σ ⪆ σP, consistent with the above criteria, it was found that the VHMF can underestimate margins for large σ, small Σ and small N. A potential consequence of this underestimate is that the CTV minimum dose can fall below its planned value in more than the prescribed 10% of
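The margin formula and the validity criterion quoted above can be sketched directly; the 2.5Σ + 1.64(√(σ² + σP²) − σP) form is the standard van Herk expression, and the example values below are hypothetical:

```python
import math

SIGMA_P = 0.32  # cm, standard deviation of the modelled dose penumbra

def van_herk_margin(Sigma, sigma):
    """Classic van Herk CTV-to-PTV margin (cm):
    2.5*Sigma + 1.64*(sqrt(sigma^2 + sigma_P^2) - sigma_P)."""
    return 2.5 * Sigma + 1.64 * (math.sqrt(sigma**2 + SIGMA_P**2) - SIGMA_P)

def vhmf_accurate(Sigma, sigma, N):
    """Approximate validity criterion quoted in the abstract:
    sigma * (1 - gamma*N/25) < 0.2, with gamma = Sigma/sigma."""
    gamma = Sigma / sigma
    return sigma * (1.0 - gamma * N / 25.0) < 0.2

margin = van_herk_margin(Sigma=0.3, sigma=0.3)  # cm
ok = vhmf_accurate(Sigma=0.3, sigma=0.3, N=30)
```

Outside the regime where `vhmf_accurate` holds, the abstract warns that the formula can underestimate margins.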
The external magnetic field created by the superposition of identical parallel finite solenoids
NASA Astrophysics Data System (ADS)
Lim, Melody Xuan; Greenside, Henry
2016-08-01
We use superposition and numerical methods to show that the external magnetic field generated by parallel identical solenoids can be nearly uniform and substantial, even when the solenoids have lengths that are large compared to their radii. We examine both a ring of solenoids and a large hexagonal array of solenoids. In both cases, we discuss how the magnitude and uniformity of the external field depend on the length of and the spacing between the solenoids. We also discuss some novel properties of a single solenoid, e.g., that even for short solenoids the energy stored in the internal magnetic field exceeds the energy stored in the spatially infinite external magnetic field. These results should be broadly interesting to undergraduates learning about electricity and magnetism.
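The single-solenoid properties underlying the superposition can be checked with the standard on-axis field formula; the array field then follows by summing single-solenoid contributions (linearity). A sketch with illustrative dimensions:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def solenoid_axial_field(z, R, L, n, I):
    """On-axis magnetic field (T) of a finite solenoid of radius R (m),
    length L (m), n turns per metre, current I (A); z measured from the
    solenoid centre."""
    a = (z + L / 2) / math.hypot(z + L / 2, R)
    b = (L / 2 - z) / math.hypot(L / 2 - z, R)
    return 0.5 * MU0 * n * I * (a + b)

# Long thin solenoid: interior field approaches mu0*n*I and the field at an
# end is very nearly half the central value.
B_centre = solenoid_axial_field(0.0, R=0.01, L=1.0, n=1000, I=1.0)
B_end = solenoid_axial_field(0.5, R=0.01, L=1.0, n=1000, I=1.0)
```

For the external field between solenoids in a ring or hexagonal array, each solenoid's full off-axis field would be needed (e.g. via numerical integration over current loops), but the superposition step is the same sum over sources.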
Nonlocal quantum macroscopic superposition in a high-thermal low-purity state.
Brezinski, Mark E; Liu, Bin
2008-12-16
Quantum state exchange between light and matter is an important ingredient for future quantum information networks as well as other applications. Photons are the fastest and simplest carriers of information for transmission but in general, it is difficult to localize and store photons, so usually one prefers choosing matter as quantum memory elements. Macroscopic superposition and nonlocal quantum interactions have received considerable interest for this purpose over recent years in fields ranging from quantum computers to cryptography, in addition to providing major insights into physical laws. However, these experiments are generally performed either with equipment or under conditions that are unrealistic for practical applications. Ideally, the two can be combined using conventional equipment and conditions to generate a "quantum teleportation"-like state, particularly with a very small amount of purity existing in an overall highly mixed thermal state (relatively low decoherence at high temperatures). In this study we used an experimental design to demonstrate these principles. We performed optical coherence tomography (OCT) using a thermal source at room temperature on a specifically designed target in the sample arm. Here, position uncertainty (i.e., dispersion) was induced in the reference arm. In the sample arm (target) we placed two glass plates separated by a different medium while altering position uncertainty in the reference arm. This resulted in a chirped signal between the glass plate reflective surfaces in the combined interferogram. The chirping frequency, as measured by the fast Fourier transform (FFT), varies with the medium between the plates, which is a nonclassical phenomenon. These results are statistically significant and occur from a superposition between the glass surface and the medium with increasing position uncertainty, a true quantum-mechanical phenomenon produced by photon pressure from two-photon interference. The differences in
Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B
2014-09-15
Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to be secure against various hacking attacks on practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. In this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound of the yield and the upper bound of the bit error rate. We exploit standard statistical analysis and the Chernoff bound to perform the parameter estimation. The Chernoff bound can provide good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case. PMID:26466295
Numerical model for macroscopic quantum superpositions based on phase-covariant quantum cloning
NASA Astrophysics Data System (ADS)
Buraczewski, A.; Stobińska, M.
2012-10-01
Macroscopically populated quantum superpositions pose a question to what extent the macroscopic world obeys quantum mechanical laws. Recently, such superpositions for light, generated by an optimal quantum cloner, have been demonstrated. They are of fundamental and technological interest. We present numerical methods useful for modeling of these states. Their properties are governed by a Gaussian hypergeometric function, which cannot be reduced to either elementary or easily tractable functions. We discuss the method of efficient computation of this function for half-integer parameters and a moderate value of its argument. We show how to dynamically estimate a cutoff for infinite sums involving this function performed over its parameters. Our algorithm exceeds double precision and is parallelizable. Depending on the experimental parameters it chooses one of several ways of summation to achieve the best efficiency. The methods presented here can be adjusted for analysis of similar experimental schemes. Program summary: Program title: MQSVIS. Catalogue identifier: AEMR_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMR_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 1643. No. of bytes in distributed program, including test data, etc.: 13212. Distribution format: tar.gz. Programming language: C with OpenMP extensions (main numerical program), Python (helper scripts). Computer: Modern PC (tested on AMD and Intel processors), HP BL2x220. Operating system: Unix/Linux. Has the code been vectorized or parallelized?: Yes (OpenMP). RAM: 200 MB for a single run for a 1000×1000 tile. Classification: 4.15, 18. External routines: OpenMP. Nature of problem: Recently, macroscopically populated quantum superpositions for light, generated by an optimal quantum cloner, have
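The dynamic-cutoff idea for sums involving the Gauss hypergeometric function can be illustrated with the ordinary 2F1 series; this is a generic sketch, not the MQSVIS algorithm, and the parameter values are arbitrary:

```python
from scipy.special import hyp2f1

def hyp2f1_series(a, b, c, z, tol=1e-12, max_terms=10000):
    """Truncated Gauss series for 2F1(a, b; c; z), |z| < 1, with a dynamic
    cutoff: stop once the running term falls below tol relative to the
    accumulated sum."""
    term, total = 1.0, 1.0
    for n in range(max_terms):
        # Ratio of consecutive Gauss-series terms.
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
        if abs(term) <= tol * abs(total):
            break
    return total

# Half-integer parameters and a moderate argument, the regime the paper treats:
approx = hyp2f1_series(0.5, 1.5, 2.5, 0.3)
exact = hyp2f1(0.5, 1.5, 2.5, 0.3)
```

The paper's code goes further (beyond double precision, parallel summation over parameters), but the stopping rule shown is the basic cutoff mechanism.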
Superposition and entanglement of mesoscopic squeezed vacuum states in cavity QED
Chen Changyong; Feng Mang; Gao Kelin
2006-03-15
We propose a scheme to generate superposition and entanglement between mesoscopic squeezed vacuum states by considering the two-photon interaction of N two-level atoms in a cavity with a high quality factor, assisted by a strong driving field. By virtue of specific choices of the cavity detuning, a number of multiparty entangled states can be prepared, including entanglement between the atomic and the squeezed vacuum cavity states and between the squeezed vacuum states and the coherent states of the cavities. We also show how to prepare entangled states and 'Schrödinger cat' states involving the squeezed vacuum states of the cavity modes. The possible extension and application of our scheme are discussed. Our scheme is within reach of current cavity QED techniques.
Tsuchiya, K.; Shioya, T.
2015-04-15
We have developed a practical method for determining an excellent initial arrangement of magnetic arrays for a pure-magnet Halbach-type undulator. In this method, the longitudinal magnetic field distribution of each magnet is measured using a moving Hall probe system along the beam axis with a high positional resolution. The initial arrangement of magnetic arrays is optimized and selected by analyzing the superposition of all distribution data in order to achieve adequate spectral quality for the undulator. We applied this method to two elliptically polarizing undulators (EPUs), called U#16-2 and U#02-2, at the Photon Factory storage ring (PF ring) in the High Energy Accelerator Research Organization (KEK). The measured field distribution of the undulator was demonstrated to be excellent for the initial arrangement of the magnet array, and this method saved a great deal of effort in adjusting the magnetic fields of EPUs.
Lie-Hamilton systems on the plane: applications and superposition rules
NASA Astrophysics Data System (ADS)
Blasco, Alfonso; Herranz, Francisco J.; de Lucas, Javier; Sardón, Cristina
2015-08-01
A Lie-Hamilton (LH) system is a nonautonomous system of first-order ordinary differential equations describing the integral curves of a t-dependent vector field taking values in a finite-dimensional real Lie algebra of Hamiltonian vector fields with respect to a Poisson structure. We provide new algebraic/geometric techniques to easily determine the properties of such Lie algebras on the plane, e.g., their associated Poisson bivectors. We study new and known LH systems on ℝ² with physical, biological and mathematical applications. New results cover Cayley-Klein Riccati equations, the here defined planar diffusion Riccati systems, complex Bernoulli differential equations and projective Schrödinger equations. Constants of motion for planar LH systems are explicitly obtained which, in turn, allow us to derive superposition rules through a coalgebra approach.
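The notion of a superposition rule invoked above can be illustrated by the textbook case (not specific to this paper) of the Riccati equation, whose general solution is an algebraic function of any three particular solutions via the constancy of the cross-ratio:

```latex
% Riccati equation: \dot{x} = a_0(t) + a_1(t)\,x + a_2(t)\,x^2.
% Given particular solutions x_1, x_2, x_3, every solution satisfies
\frac{(x - x_1)(x_2 - x_3)}{(x - x_3)(x_2 - x_1)} = k \quad (k \text{ constant}),
% which, solved for x, gives the superposition rule
x = \frac{x_1 (x_2 - x_3) - k\,x_3 (x_2 - x_1)}{(x_2 - x_3) - k\,(x_2 - x_1)}.
```

Lie-Hamilton systems generalize this: the constants of motion obtained by the coalgebra approach play the role of the cross-ratio constant k.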
Free in-plane vibration analysis of rectangular plates by the method of superposition
NASA Astrophysics Data System (ADS)
Gorman, D. J.
2004-05-01
The superposition method is introduced as a means for obtaining analytical-type solutions for free in-plane vibration of rectangular plates. The governing differential equations and boundary conditions are expressed in dimensionless form. The problem of free in-plane vibration of the completely free rectangular plate is resolved for illustrative purposes. Convergence is found to be rapid and excellent agreement between computed results and those obtained by previous authors utilizing the Rayleigh-Ritz energy method is obtained. It is pointed out that following procedures analogous to those utilized in resolving lateral plate vibration problems, in-plane free vibration problems related to point supported plates, plates with in-plane elastic boundary support, etc., are now amenable to solution by this method.
Superposition of Cohesive Elements to Account for R-Curve Toughening in the Fracture of Composites
NASA Technical Reports Server (NTRS)
Davila, Carlos G.; Rose, Cheryl A.; Song, Kyongchan
2008-01-01
The relationships between a resistance curve (R-curve), the corresponding fracture process zone length, the shape of the traction/displacement softening law, and the propagation of fracture are examined in the context of the through-the-thickness fracture of composite laminates. A procedure that accounts for R-curve toughening mechanisms by superposing bilinear cohesive elements is proposed. Simple equations are developed for determining the separation of the critical energy release rates and the strengths that define the independent contributions of each bilinear softening law in the superposition. It is shown that the R-curve measured with a Compact Tension specimen test can be reproduced by superposing two bilinear softening laws. It is also shown that an accurate representation of the R-curve is essential for predicting the initiation and propagation of fracture in composite laminates.
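The superposition idea can be sketched by summing two bilinear (triangular) traction-separation laws, each defined by a strength and a fracture energy; the law shape and numbers below are a generic illustration, not the paper's calibrated values:

```python
def bilinear_traction(delta, sigma_c, G_c, K=1e6):
    """Bilinear (triangular) cohesive law: linear rise with stiffness K to
    peak strength sigma_c, then linear softening chosen so the enclosed
    area equals the critical energy release rate G_c."""
    delta0 = sigma_c / K           # separation at peak traction
    deltaf = 2.0 * G_c / sigma_c   # final separation (triangle area = G_c)
    if delta <= 0.0 or delta >= deltaf:
        return 0.0
    if delta < delta0:
        return K * delta
    return sigma_c * (deltaf - delta) / (deltaf - delta0)

def superposed_traction(delta, laws):
    """Superposition of bilinear laws (each a (sigma_c, G_c) pair); summing
    two bilinear laws yields a trilinear law whose longer process zone
    reproduces R-curve toughening."""
    return sum(bilinear_traction(delta, s, g) for s, g in laws)

# Hypothetical split: short-range strong law plus long-range weak bridging law.
laws = [(60.0, 0.2), (10.0, 0.6)]  # (MPa, N/mm), illustrative only
t = superposed_traction(0.005, laws)
```

The paper's contribution is the closed-form split of the total strength and critical energy release rate between the two laws so that a measured R-curve is reproduced; the sketch shows only the superposition mechanics.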
Linear Superposition and Prediction of Bacterial Promoter Activity Dynamics in Complex Conditions
Rothschild, Daphna; Dekel, Erez; Hausser, Jean; Bren, Anat; Aidelberg, Guy; Szekely, Pablo; Alon, Uri
2014-01-01
Bacteria often face complex environments. We asked how gene expression in complex conditions relates to expression in simpler conditions. To address this, we obtained accurate promoter activity dynamical measurements on 94 genes in E. coli in environments made up of all possible combinations of four nutrients and stresses. We find that the dynamics across conditions is well described by two principal component curves specific to each promoter. As a result, the promoter activity dynamics in a combination of conditions is a weighted average of the dynamics in each condition alone. The weights tend to sum up to approximately one. This weighted-average property, called linear superposition, allows predicting the promoter activity dynamics in a combination of conditions based on measurements of pairs of conditions. If these findings apply more generally, they can vastly reduce the number of experiments needed to understand how E. coli responds to the combinatorially huge space of possible environments. PMID:24809350
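Under the linear-superposition property described above, the weights can be recovered by least squares from the single-condition and combined-condition time courses. A sketch on synthetic data (the curves below are fabricated for illustration, not measurements from the study):

```python
import numpy as np

# Synthetic promoter-activity time courses (arbitrary units) in two
# single-condition environments and in their combination.
t = np.linspace(0.0, 8.0, 50)
cond_a = np.exp(-t / 3.0)               # dynamics in condition A alone
cond_b = 1.0 - np.exp(-t / 2.0)         # dynamics in condition B alone
combined = 0.6 * cond_a + 0.4 * cond_b  # "measured" combined dynamics

# Fit the superposition weights by least squares; for real data the fit
# would be approximate and the weights would only tend to sum to ~1.
X = np.column_stack([cond_a, cond_b])
weights, *_ = np.linalg.lstsq(X, combined, rcond=None)
```

With measurements of pairs of conditions, the same fit predicts dynamics in higher-order combinations, which is the data-reduction argument the abstract makes.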
Lund, A.P.; Ralph, T.C.
2005-03-01
In this paper we explore the possibility of fundamental tests for coherent-state optical quantum computing gates [T. C. Ralph et al., Phys. Rev. A 68, 042319 (2003)] using sophisticated but not unrealistic quantum states. The major resource required in these gates is a state diagonal to the basis states. We use the recent observation that a squeezed single-photon state [S(r)|1⟩] approximates well an odd superposition of coherent states (|α⟩ − |−α⟩) to address the diagonal resource problem. The approximation only holds for relatively small α, and hence these gates cannot be used in a scalable scheme. We explore the effects on fidelities and probabilities in teleportation and a rotated Hadamard gate.
Inferring superposition and entanglement in evolving systems from measurements in a single basis
Schelpe, Bella; Kent, Adrian; Munro, William; Spiller, Tim
2003-05-01
We discuss what can be inferred from measurements on evolving one- and two-qubit systems using a single measurement basis at various times. We show that, given reasonable physical assumptions, carrying out such measurements at quarter-period intervals is enough to demonstrate coherent oscillations of one or two qubits between the relevant measurement basis states. One can thus infer from such measurements alone that an approximately equal superposition of two measurement basis states has been created during a coherent oscillation experiment. Similarly, one can infer that a near-maximally entangled state of two qubits has been created part way through an experiment involving a putative SWAP gate. These results apply even if the relevant quantum systems are only approximate qubits. We discuss applications to fundamental quantum physics experiments and quantum-information processing investigations.
Composite vortex beams by coaxial superposition of Laguerre-Gaussian beams
NASA Astrophysics Data System (ADS)
Huang, Sujuan; Miao, Zhuang; He, Chao; Pang, Fufei; Li, Yingchun; Wang, Tingyun
2016-03-01
We propose the generation of novel composite vortex beams by coaxial superposition of Laguerre-Gaussian (LG) beams with a common waist position and waist parameter. Computer-generated holography by conjugate-symmetric extension is applied to produce the holograms of several composite vortex beams. Utilizing the holograms, light modes including optical ring lattices and double dark-ring and double bright-ring composite vortex beams are numerically reconstructed. The generated composite vortex beams show diffraction broadening, with some of them rotating dynamically around the beam center while propagating. Optical experiments based on a computer-controlled spatial light modulator (SLM) verify the numerical results. These composite vortex beams possess more complicated distributions and more controllable parameters than conventional optical ring lattices, which makes them attractive for potential applications.
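Coaxial superposition of LG modes at a common waist can be sketched numerically: two opposite-handed vortices of azimuthal order l produce a 2|l|-lobed optical ring lattice. The sketch below uses waist-plane fields only and omits normalization:

```python
import numpy as np
from scipy.special import genlaguerre

def lg_mode(r, phi, p, l, w0):
    """Unnormalized Laguerre-Gaussian amplitude at the waist (z = 0)."""
    x = np.sqrt(2.0) * r / w0
    radial = x**abs(l) * genlaguerre(p, abs(l))(x**2) * np.exp(-r**2 / w0**2)
    return radial * np.exp(1j * l * phi)  # exp(i*l*phi) carries the vortex

# Coaxial superposition of l = +3 and l = -3 modes, common waist w0:
w0 = 1.0
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
r = np.full_like(phi, w0)  # sample on the circle r = w0
field = lg_mode(r, phi, 0, 3, w0) + lg_mode(r, phi, 0, -3, w0)
intensity = np.abs(field) ** 2  # ring lattice: 2*|l| = 6 azimuthal lobes
```

Unequal orders |l1| ≠ |l2| give a pattern that rotates on propagation via the modes' different Gouy phases, consistent with the dynamic rotation reported in the abstract.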
Helmich, Benjamin; Sierka, Marek
2012-01-15
An algorithm for similarity recognition of molecules and molecular clusters is presented which also establishes the optimum matching among atoms of different structures. In the first step of the algorithm, a set of molecules are coarsely superimposed by transforming them into a common reference coordinate system. The optimum atomic matching among structures is then found with the help of the Hungarian algorithm. For this, pairs of structures are represented as complete bipartite graphs with a weight function that uses intermolecular atomic distances. In the final step, a rotational superposition method is applied using the optimum atomic matching found. This yields the minimum root mean square deviation of intermolecular atomic distances with respect to arbitrary rotation and translation of the molecules. Combined with an effective similarity prescreening method, our algorithm shows robustness and an effective quadratic scaling of computational time with the number of atoms.
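The two-step procedure (Hungarian matching on interatomic distances, then rotational superposition) can be sketched with standard SciPy/NumPy tools; this is a generic reimplementation under simplifying assumptions (single element type, small rotation so the coarse centring suffices), not the authors' code:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_and_superpose(A, B):
    """(1) Optimum atomic matching via the Hungarian algorithm on the
    interatomic distance matrix, (2) rotational superposition (Kabsch/SVD)
    using that matching. A, B: (N, 3) coordinate arrays. Returns RMSD."""
    A = A - A.mean(axis=0)            # coarse superposition: common origin
    B = B - B.mean(axis=0)
    # Complete bipartite matching weighted by interatomic distances.
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    _, cols = linear_sum_assignment(cost)
    B = B[cols]                       # reorder B atoms to match A
    # Kabsch: rotation minimizing RMSD for the found matching.
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(np.mean(np.sum((A - B @ R.T) ** 2, axis=1))))

# Same structure with shuffled atom order, a small rotation and a
# translation: the pipeline should recover RMSD ~ 0.
A = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0],
              [2.0, 2.0, 0.0], [0.0, 3.0, 3.0], [4.0, 0.0, 1.0]])
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
B = (A @ Rz.T)[::-1] + np.array([5.0, -2.0, 1.0])
rmsd = match_and_superpose(A, B)
```

The paper's coarse superposition into a reference coordinate system matters for large rotations, where distance-based matching on raw coordinates (as here) could pair atoms incorrectly.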
Multi-level manual and autonomous control superposition for intelligent telerobot
NASA Technical Reports Server (NTRS)
Hirai, Shigeoki; Sato, T.
1989-01-01
Space telerobots are recognized to require cooperation with human operators in various ways. Multi-level manual and autonomous control superposition in telerobot task execution is described. The object model, the structured master-slave manipulation system, and the motion understanding system are proposed to realize the concept. The object model offers interfaces for task-level and object-level human intervention. The structured master-slave manipulation system offers interfaces for motion-level human intervention. The motion understanding system maintains the consistency of knowledge through all levels, which supports robot autonomy while accepting human intervention. Superposed execution of teleoperational tasks at multiple levels realizes intuitive and robust task execution for a wide variety of objects and in changing environments. The performance of several examples of operating chemical apparatus is shown.
Lee, Su-Yong; Kim, Ho-Joon; Ji, Se-Wan; Nha, Hyunchul
2011-07-15
We investigate how the entanglement properties of a two-mode state can be improved by performing a coherent superposition operation ta + ra† of photon subtraction and addition, proposed by Lee and Nha [Phys. Rev. A 82, 053812 (2010)], on each mode. We show that the degree of entanglement, the Einstein-Podolsky-Rosen-type correlation, and the performance of quantum teleportation can all be enhanced for the output state when the coherent operation is applied to a two-mode squeezed state. The effects of the coherent operation are more prominent than those of the mere photon subtraction a and the addition a†, particularly in the small-squeezing regime, whereas the optimal operation becomes the photon subtraction (case of r = 0) in the large-squeezing regime.
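The effect of the operation can be illustrated numerically in a truncated Fock space; the truncation size, the values of t and r, and the squeezing parameter below are illustrative choices, not values from the paper:

```python
import numpy as np

N = 12                                      # Fock-space truncation (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
op = 0.8 * a + 0.6 * a.T                    # t*a + r*a†, with t^2 + r^2 = 1

# two-mode squeezed vacuum |psi> ∝ sum_n tanh(s)^n |n,n>, stored as the
# coefficient matrix of the two-mode state
s = 0.3
psi = np.diag(np.tanh(s) ** np.arange(N))
psi /= np.linalg.norm(psi)

# apply the operation to each mode: (op ⊗ op)|psi>  <=>  op @ M @ op^T
out = op @ psi @ op.T
out /= np.linalg.norm(out)

def ent_entropy(m):
    """Entanglement entropy from the Schmidt (singular) values of the
    coefficient matrix."""
    p = np.linalg.svd(m, compute_uv=False) ** 2
    p = p[p > 1e-15] / p.sum()
    return -np.sum(p * np.log2(p))

e_in, e_out = ent_entropy(psi), ent_entropy(out)
print(e_in, e_out)  # entanglement grows in this small-squeezing example
```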
A test of the equivalence principle(s) for quantum superpositions
NASA Astrophysics Data System (ADS)
Orlando, Patrick J.; Mann, Robert B.; Modi, Kavan; Pollock, Felix A.
2016-10-01
We propose an experimental test of the quantum equivalence principle introduced by Zych and Brukner (arXiv:1502.00971), which generalises the Einstein equivalence principle to superpositions of internal energy states. We consider a harmonically trapped spin-1/2 atom in the presence of both gravity and an external magnetic field, and show that when the external magnetic field is suddenly switched off, various violations of the equivalence principle would manifest as otherwise forbidden transitions. Performing such an experiment would put bounds on the various phenomenological violating parameters. We further demonstrate that the classical weak equivalence principle can be tested by suddenly putting the apparatus into free fall, effectively ‘switching off’ gravity.
Low rank approximation in G0W0 calculations
NASA Astrophysics Data System (ADS)
Shao, MeiYue; Lin, Lin; Yang, Chao; Liu, Fang; Da Jornada, Felipe H.; Deslippe, Jack; Louie, Steven G.
2016-08-01
The single-particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photoemission spectroscopy. The correction to these energies can be obtained from the poles of a single-particle Green's function derived from many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self-energy term that properly accounts for dynamic screening of electrons is approximated. The G0W0 approximation is a widely used technique in which the self-energy is expressed as the convolution of a non-interacting Green's function (G0) and a screened Coulomb interaction (W0) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W0 at multiple frequencies. In this paper, we discuss how the cost of a G0W0 calculation can be reduced by constructing a low rank approximation to the frequency-dependent part of W0. In particular, we examine the effect of such a low rank approximation on the accuracy of the G0W0 approximation. We also discuss how the numerical convolution of G0 and W0 can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.
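The basic idea of a low rank approximation can be illustrated with a truncated SVD; this is a generic sketch, and the paper constructs its low-rank factors for the frequency-dependent part of W0 differently:

```python
import numpy as np

rng = np.random.default_rng(0)

# A matrix with rapidly decaying singular values, standing in for the
# frequency-dependent part of W0 (purely illustrative data).
n, k = 200, 10
A = (rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
     + 1e-8 * rng.standard_normal((n, n)))

# Rank-k approximation from the truncated SVD (optimal in the Frobenius
# norm by the Eckart-Young theorem).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(rel_err)  # tiny: the spectrum beyond rank k is ~1e-8 noise
```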
Dynamic properties of human tympanic membrane based on frequency-temperature superposition.
Zhang, Xiangming; Gan, Rong Z
2013-01-01
The human tympanic membrane (TM) transfers sound in the ear canal into the mechanical vibration of the ossicles in the middle ear. The dynamic properties of the TM directly affect the middle ear transfer function. The static or quasi-static mechanical properties of the TM have been reported in the literature, but data on the dynamic properties of the TM over the auditory frequency range are very limited. In this paper, a new method was developed to measure the dynamic properties of the human TM using a Dynamic Mechanical Analyzer (DMA). The test was conducted over the frequency range of 1-40 Hz at three different temperatures: 5, 25, and 37 °C. Frequency-temperature superposition was applied to extend the testing frequency range to at least 3800 Hz. The generalized linear solid model was employed to describe the constitutive relation of the TM. The storage modulus E' and the loss modulus E″ were obtained from 11 specimens. The mean storage modulus was 15.1 MPa at 1 Hz and 27.6 MPa at 3800 Hz. The mean loss modulus was 0.28 MPa at 1 Hz and 4.1 MPa at 3800 Hz. The results show that frequency-temperature superposition is a feasible approach to study the dynamic properties of ear soft tissues. The dynamic properties of the human TM obtained in this study provide a better description of the damping behavior of ear tissues. The properties can be transferred into the finite element (FE) model of the human ear to replace the Rayleigh-type damping. The data reported here contribute to the biomechanics of the middle ear and improve the accuracy of the FE model of the human ear. PMID:22820983
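The superposition step can be sketched as follows, assuming a Williams-Landel-Ferry (WLF) form for the shift factors; the constants C1 and C2 below are hypothetical placeholders, not values fitted in the study:

```python
# Frequency-temperature superposition: isothermal frequency sweeps are
# shifted along the log-frequency axis by a factor aT to build a master
# curve at a reference temperature.

def wlf_shift(T, T_ref=37.0, C1=10.0, C2=120.0):
    """log10(aT) from the Williams-Landel-Ferry equation (constants are
    hypothetical)."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

def shift_sweep(freqs_hz, T):
    aT = 10.0 ** wlf_shift(T)
    return [f * aT for f in freqs_hz]  # reduced frequency at T_ref

sweep = [1.0, 10.0, 40.0]          # sweep measured at 5 °C
shifted = shift_sweep(sweep, 5.0)  # lands at much higher reduced frequencies
print(shifted)
```

Data measured at the lowest temperature map to the highest reduced frequencies, which is how a 1-40 Hz instrument range can cover the kilohertz regime at body temperature.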
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2016-06-01
In recent years, with the development of high resolution data acquisition technologies, many approaches and algorithms have been presented to extract accurate and timely updated 3D models of buildings, a key element of city structures, for numerous urban mapping applications. In this paper, a novel model-based approach is proposed for automatic recognition of building roof models such as flat, gable, hip, and pyramid hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR data and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that allow the convolutional neural network to localize the boundary of each individual roof. A CNN is a feed-forward neural network built on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time of learning can be decreased significantly using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach in detecting and extracting building roof patterns automatically, exploiting the complementary nature of height and RGB information.
NASA Astrophysics Data System (ADS)
Varlamov, Vladimir
2007-03-01
Rayleigh functions σ_l(ν) are defined as series in inverse powers of the Bessel function zeros λ_{ν,n} ≠ 0, σ_l(ν) = Σ_{n=1}^∞ λ_{ν,n}^{-2l}, where ν is the index of the Bessel function J_ν(x) and n = 1, 2, ... is the number of the zero. Convolutions of Rayleigh functions with respect to the Bessel index, R_l(m), are needed for constructing global-in-time solutions of semi-linear evolution equations in circular domains [V. Varlamov, On the spatially two-dimensional Boussinesq equation in a circular domain, Nonlinear Anal. 46 (2001) 699-725; V. Varlamov, Convolution of Rayleigh functions with respect to the Bessel index, J. Math. Anal. Appl. 306 (2005) 413-424]. The study of this new family of special functions was initiated in [V. Varlamov, Convolution of Rayleigh functions with respect to the Bessel index, J. Math. Anal. Appl. 306 (2005) 413-424], where the properties of R_1(m) were investigated. In the present work a general representation of R_l(m) in terms of σ_l(ν) is deduced. On the basis of this, a representation for the function R_2(m) is obtained in terms of the ψ-function. An asymptotic expansion is computed for R_2(m) as m → ∞. Such asymptotics are needed for establishing function spaces for solutions of semi-linear equations in bounded domains with periodicity conditions in one coordinate. As an example of the application of R_l(m), a forced Boussinesq equation u_tt − 2bΔu_t = −αΔ²u + Δu + βΔ(u²) + f with α, b = const > 0 and β = const ∈ ℝ is considered in a unit disc with homogeneous boundary and initial data. Construction of its global-in-time solutions involves the use of the functions R_1(m) and R_2(m), which are responsible for the nonlinear smoothing effect.
Campbell, David L.; Watts, Raymond D.
1978-01-01
Program listings, instructions, and example problems are given for 12 programs for the interpretation of geophysical data, for use on Hewlett-Packard models 67 and 97 programmable hand-held calculators. These are (1) gravity anomaly over 2D prism with ≤ 9 vertices--Talwani method; (2) magnetic anomaly (ΔT, ΔV, or ΔH) over 2D prism with ≤ 8 vertices--Talwani method; (3) total-field magnetic anomaly profile over thick sheet/thin dike; (4) single dipping seismic refractor--interpretation and design; (5) ≤ 4 dipping seismic refractors--interpretation; (6) ≤ 4 dipping seismic refractors--design; (7) vertical electrical sounding over ≤ 10 horizontal layers--Schlumberger or Wenner forward calculation; (8) vertical electrical sounding: Dar Zarrouk calculations; (9) magnetotelluric plane-wave apparent conductivity and phase angle over ≤ 9 horizontal layers--forward calculation; (10) petrophysics: a.c. electrical parameters; (11) petrophysics: elastic constants; (12) digital convolution with a filter of length ≤ 10.
NASA Astrophysics Data System (ADS)
Fonseca, Pablo; Mendoza, Julio; Wainer, Jacques; Ferrer, Jose; Pinto, Joseph; Guerrero, Jorge; Castaneda, Benjamin
2015-03-01
Breast parenchymal density is considered a strong indicator of breast cancer risk and therefore useful for preventive tasks. Measurement of breast density is often qualitative and requires the subjective judgment of radiologists. Here we explore an automatic breast composition classification workflow based on convolutional neural networks for feature extraction in combination with a support vector machines classifier. This is compared to the assessments of seven experienced radiologists. The experiments yielded an average kappa value of 0.58 when using the mode of the radiologists' classifications as ground truth. Individual radiologist performance against this ground truth yielded kappa values between 0.56 and 0.79.
NASA Astrophysics Data System (ADS)
Jordan, Tyler S.
2016-05-01
This paper presents the findings of using convolutional neural networks (CNNs) to classify human activity from micro-Doppler features. An emphasis on activities involving potential security threats such as holding a gun are explored. An automotive 24 GHz radar on chip was used to collect the data and a CNN (normally applied to image classification) was trained on the resulting spectrograms. The CNN achieves an error rate of 1.65 % on classifying running vs. walking, 17.3 % error on armed walking vs. unarmed walking, and 22 % on classifying six different actions.
Robust and accurate transient light transport decomposition via convolutional sparse coding.
Hu, Xuemei; Deng, Yue; Lin, Xing; Suo, Jinli; Dai, Qionghai; Barsi, Christopher; Raskar, Ramesh
2014-06-01
Ultrafast sources and detectors have been used to record the time-resolved scattering of light propagating through macroscopic scenes. In the context of computational imaging, decomposition of this transient light transport (TLT) is useful for applications, such as characterizing materials, imaging through diffuser layers, and relighting scenes dynamically. Here, we demonstrate a method of convolutional sparse coding to decompose TLT into direct reflections, inter-reflections, and subsurface scattering. The method relies on the sparsity composition of the time-resolved kernel. We show that it is robust and accurate to noise during the acquisition process.
Diffuse dispersive delay and the time convolution/attenuation of transients
NASA Technical Reports Server (NTRS)
Bittner, Burt J.
1991-01-01
Test data and analytic evaluations are presented to show that relatively poor 100 KHz shielding of 12 Db can effectively provide an electromagnetic pulse transient reduction of 100 Db. More importantly, several techniques are shown for lightning surge attenuation as an alternative to crowbar, spark gap, or power zener type clipping which simply reflects the surge. A time delay test method is shown which allows CW testing, along with a convolution program to define transient shielding effectivity where the Fourier phase characteristics of the transient are known or can be broadly estimated.
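The CW-testing-plus-convolution idea can be sketched as follows: a shield's transfer function measured at discrete frequencies is applied to a transient by multiplication in the frequency domain (equivalently, time-domain convolution). The single-pole transfer function and pulse shape below are hypothetical stand-ins for measured shield data:

```python
import numpy as np

n, dt = 4096, 1e-9                      # 1 ns sampling (assumption)
f = np.fft.rfftfreq(n, dt)

fc = 100e3                              # hypothetical 100 kHz corner frequency
H = 1.0 / (1.0 + 1j * f / fc)           # low-pass stand-in for shield response

# double-exponential pulse as a generic EMP-like transient
t = np.arange(n) * dt
x = np.exp(-t / 250e-9) - np.exp(-t / 5e-9)

# shielded transient: convolution implemented in the frequency domain
y = np.fft.irfft(np.fft.rfft(x) * H, n)

attenuation_db = 20 * np.log10(np.max(np.abs(x)) / np.max(np.abs(y)))
print(attenuation_db)  # peak attenuation well beyond the CW figure at fc
```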
Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator
Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; et al
2014-12-08
Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed
Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator
NASA Astrophysics Data System (ADS)
Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; Vesey, R. A.; Jones, B.; Ampleford, D. J.; Lemke, R. W.; Martin, M. R.; Schrafel, P. C.; Lewis, S. A.; Moore, J. K.; Savage, M. E.; Stygar, W. A.
2014-12-01
Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator's vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator's vacuum-insulator stack (at a radius of 1.6 m) by using standard D -dot and B -dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator's magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z . These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed efficient
Cygrid: Cython-powered convolution-based gridding module for Python
NASA Astrophysics Data System (ADS)
Winkel, B.; Lenz, D.; Flöer, L.
2016-06-01
The Python module Cygrid grids (resamples) data to any collection of spherical target coordinates, although its typical application involves FITS maps or data cubes. The module supports the FITS world coordinate system (WCS) standard; its underlying algorithm is based on the convolution of the original samples with a 2D Gaussian kernel. A lookup table scheme allows parallelization of the code and is combined with the HEALPix tessellation of the sphere for fast neighbor searches. Cygrid's runtime scales between O(n) and O(n log n), with n being the number of input samples.
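The core of convolution-based gridding can be sketched in a few lines of NumPy; Cygrid adds the WCS handling, HEALPix-based neighbor lookup, and parallelization on top of this idea:

```python
import numpy as np

def grid(xs, ys, vals, gx, gy, sigma):
    """Accumulate scattered samples onto a regular grid with 2D Gaussian
    weights, then normalize by the summed weights."""
    num = np.zeros((gy.size, gx.size))
    den = np.zeros_like(num)
    for x, y, v in zip(xs, ys, vals):
        w = np.exp(-((gx[None, :] - x) ** 2 + (gy[:, None] - y) ** 2)
                   / (2 * sigma ** 2))
        num += w * v
        den += w
    return num / np.maximum(den, 1e-300)  # guard against empty cells

gx = gy = np.linspace(0, 1, 32)
rng = np.random.default_rng(1)
xs, ys = rng.random(500), rng.random(500)
vals = np.ones(500)                       # a constant field
img = grid(xs, ys, vals, gx, gy, 0.05)
print(img.mean())  # ~1.0: gridding a constant field returns the constant
```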
A convolutional recursive modified Self Organizing Map for handwritten digits recognition.
Mohebi, Ehsan; Bagirov, Adil
2014-12-01
It is well known that handwritten digit recognition is a challenging problem. Different classification algorithms have been applied to solve it. Among them, Self Organizing Maps (SOMs) have produced promising results. In this paper, we first introduce a Modified SOM for the vector quantization problem with an improved initialization process and topology preservation. Then we develop a Convolutional Recursive Modified SOM and apply it to the problem of handwritten digit recognition. The computational results obtained using the well-known MNIST dataset demonstrate the superiority of the proposed algorithm over existing SOM-based algorithms.
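For reference, a bare-bones SOM update step looks like the following sketch; the paper's modified SOM adds an improved initialization and a convolutional, recursive architecture on top of this basic rule:

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 4
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def train_step(x, lr=0.5, radius=2.0):
    # best matching unit: node whose weight vector is closest to the input
    d = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # Gaussian neighborhood around the BMU on the map lattice
    g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
               / (2 * radius ** 2))
    weights[:] = weights + lr * g[..., None] * (x - weights)
    return bmu

x = rng.random(dim)
before = np.linalg.norm(weights - x, axis=-1).min()
train_step(x)
after = np.linalg.norm(weights - x, axis=-1).min()
print(before, after)  # the best matching unit moves toward the input
```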
Parity retransmission hybrid ARQ using rate 1/2 convolutional codes on a nonstationary channel
NASA Technical Reports Server (NTRS)
Lugand, Laurent R.; Costello, Daniel J., Jr.; Deng, Robert H.
1989-01-01
A parity retransmission hybrid automatic repeat request (ARQ) scheme is proposed which uses rate 1/2 convolutional codes and Viterbi decoding. A protocol is described which is capable of achieving higher throughputs than previously proposed parity retransmission schemes. The performance analysis is based on a two-state Markov model of a nonstationary channel. This model constitutes a first approximation to a nonstationary channel. The two-state channel model is used to analyze the throughput and undetected error probability of the protocol presented when the receiver has both an infinite and a finite buffer size. It is shown that the throughput improves as the channel becomes more bursty.
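A minimal rate-1/2 convolutional encoder can be sketched as follows, using the textbook constraint-length-3 code with octal generators (7, 5); this is an illustration, not necessarily the code used in the paper:

```python
def encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder: each input bit produces two output
    bits, the parities of the shift-register taps selected by g1 and g2."""
    state = 0
    out = []
    for b in bits + [0] * (k - 1):                  # flush with zero tail
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g1).count("1") % 2)  # parity of taps g1
        out.append(bin(state & g2).count("1") % 2)  # parity of taps g2
    return out

msg = [1, 0, 1, 1]
code = encode(msg)
print(code)  # -> [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
```

The redundancy introduced here is what the Viterbi decoder exploits, and the parity stream is what a parity retransmission hybrid ARQ scheme resends on a negative acknowledgment.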
Processing circuit with asymmetry corrector and convolutional encoder for digital data
NASA Technical Reports Server (NTRS)
Pfiffner, Harold J. (Inventor)
1987-01-01
A processing circuit is provided for correcting for input parameter variations, such as data and clock signal symmetry, phase offset and jitter, noise and signal amplitude, in incoming data signals. An asymmetry corrector circuit performs the correcting function and furnishes the corrected data signals to a convolutional encoder circuit. The corrector circuit further forms a regenerated clock signal from clock pulses in the incoming data signals and another clock signal at a multiple of the incoming clock signal. These clock signals are furnished to the encoder circuit so that encoded data may be furnished to a modulator at a high data rate for transmission.
Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding
Johnson, Rie; Zhang, Tong
2016-01-01
This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks. PMID:27087766
Performance of DPSK with convolutional encoding on time-varying fading channels
NASA Technical Reports Server (NTRS)
Mui, S. Y.; Modestino, J. W.
1977-01-01
The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.
Cevidanes, Lucia H.S.; Styner, Martin; Proffit, William R.; Ngom, Traduit par Papa Ibrahima
2010-01-01
To evaluate changes related to growth or treatment, successive cephalograms must be superimposed on a stable structure. In two-dimensional (2-D) cephalometrics, the cranial base is often used for superimposition because it undergoes only minor changes after brain development. However, on lateral and frontal cephalograms, cranial-base landmarks are not very reliable. In this article, we present a new three-dimensional (3-D) superimposition method based on fully automated registration of voxel intensities over the surface of the cranial base. The software package used allows quantitative assessment of changes over time by computing Euclidean distances between surfaces of the three-dimensional models. It also allows visual assessment of the location and magnitude of changes in the jaws via a graphic overlay. Changes are visualized by comparison against color look-up tables. This permits a detailed study of adaptation patterns in patients whose growth and/or treatment produced clinically significant skeletal changes. PMID:19954732
NASA Astrophysics Data System (ADS)
Jakovidis, Greg; McLeod, Ian D.; Morgan, Michael J.
1990-05-01
The use of simple ideas applied to 'real-world' situations is of considerable pedagogical value in teaching introductory physics. The principle of wave superposition is applied to understanding the physics of two very different devices: a quantum well laser and a motor-bike exhaust system. Reasonable agreement is found between the predictions of simple models, and the measured parameters of actual devices.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
Strahl, Stefan B; Ramekers, Dyan; Nagelkerke, Marjolijn M B; Schwarz, Konrad E; Spitzer, Philipp; Klis, Sjaak F L; Grolman, Wilko; Versnel, Huib
2016-01-01
The electrically evoked compound action potential (eCAP) is a routinely performed measure of the auditory nerve in cochlear implant users. Using a convolution model of the eCAP, additional information about the neural firing properties can be obtained, which may provide relevant information about the health of the auditory nerve. In this study, guinea pigs with various degrees of nerve degeneration were used to directly relate firing properties to nerve histology. The same convolution model was applied on human eCAPs to examine similarities and ultimately to examine its clinical applicability. For most eCAPs, the estimated nerve firing probability was bimodal and could be parameterised by two Gaussian distributions with an average latency difference of 0.4 ms. The ratio of the scaling factors of the late and early component increased with neural degeneration in the guinea pig. This ratio decreased with stimulation intensity in humans. The latency of the early component decreased with neural degeneration in the guinea pig. Indirectly, this was observed in humans as well, assuming that the cochlear base exhibits more neural degeneration than the apex. Differences between guinea pigs and humans were observed, among other parameters, in the width of the early component: very robust in guinea pig, and dependent on stimulation intensity and cochlear region in humans. We conclude that the deconvolution of the eCAP is a valuable addition to existing analyses, in particular as it reveals two separate firing components in the auditory nerve. PMID:27080655
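The convolution model can be sketched as follows: the eCAP is modeled as the convolution of a (here bimodal) firing-probability density with a unitary response. All waveform parameters below are illustrative, not the study's fitted values:

```python
import numpy as np

dt = 0.01                                 # ms
t = np.arange(0, 4, dt)

def gauss(t, mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# two firing components separated by 0.4 ms, as reported in the abstract
firing = 1.0 * gauss(t, 0.6, 0.1) + 0.5 * gauss(t, 1.0, 0.15)
firing /= firing.sum() * dt               # normalize to a density

# biphasic stand-in for the unitary response of a single fiber
unit = gauss(t, 0.2, 0.05) - gauss(t, 0.35, 0.08)

# modeled eCAP: convolution of firing density and unitary response
ecap = np.convolve(firing, unit)[: t.size] * dt
print(t[np.argmax(np.abs(ecap))])  # latency of the dominant deflection
```

Deconvolution runs this model in reverse: given a measured eCAP and an assumed unitary response, it recovers the firing density, whose two components carry the degeneration-sensitive parameters discussed above.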
NASA Astrophysics Data System (ADS)
Minnehan, Breton; Savakis, Andreas
2016-05-01
As Unmanned Aerial Systems grow in numbers, pedestrian detection from aerial platforms is becoming a topic of increasing importance. By providing greater contextual information and a reduced potential for occlusion, the aerial vantage point provided by Unmanned Aerial Systems is highly advantageous for many surveillance applications, such as target detection, tracking, and action recognition. However, due to the greater distance between the camera and scene, targets of interest in aerial imagery are generally smaller and have less detail. Deep Convolutional Neural Networks (CNNs) have demonstrated excellent object classification performance, and in this paper we adapt them to the problem of pedestrian detection from aerial platforms. We train a CNN with five layers consisting of three convolution-pooling layers and two fully connected layers. We also address the computational inefficiencies of the sliding window method for object detection. In the sliding window configuration, a very large number of candidate patches are generated from each frame, while only a small number of them contain pedestrians. We utilize the Edge Box object proposal generation method to screen candidate patches based on an "objectness" criterion, so that only regions that are likely to contain objects are processed. This method significantly reduces the number of image patches processed by the neural network and makes our classification method very efficient. The resulting two-stage system is a good candidate for real-time implementation onboard modern aerial vehicles. Furthermore, testing on three datasets confirmed that our system offers high detection accuracy for terrestrial pedestrian detection in aerial imagery.
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.
Ordóñez, Francisco Javier; Roggen, Daniel
2016-01-18
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation. PMID:26797612
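A minimal numpy sketch of the convolutional-then-recurrent idea (not the authors' actual implementation): a temporal convolution extracts local motion features from a multichannel sensor window, and an LSTM summarizes their temporal dynamics into one vector that a linear layer maps to class scores. All sizes (64-sample window, 6 channels, 16 filters, 32 hidden units, 5 classes) are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    """Temporal convolution over a sensor window: x (T, C_in), w (K, C_in, C_out)."""
    T = x.shape[0]
    K, _, C_out = w.shape
    out = np.empty((T - K + 1, C_out))
    for t in range(T - K + 1):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)          # ReLU

def lstm_last_hidden(x, Wx, Wh):
    """Run a single LSTM layer over x (T, D); return the final hidden state."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for xt in x:
        i, f, o, g = np.split(xt @ Wx + h @ Wh, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

# Toy pipeline: 64-sample window from 6 sensor channels -> 5 class scores.
T, C_in, K, C_feat, H, n_classes = 64, 6, 5, 16, 32, 5
window = rng.normal(size=(T, C_in))
feats = conv1d_relu(window, rng.normal(0, 0.1, (K, C_in, C_feat)),
                    np.zeros(C_feat))
hidden = lstm_last_hidden(feats, rng.normal(0, 0.1, (C_feat, 4 * H)),
                          rng.normal(0, 0.1, (H, 4 * H)))
scores = hidden @ rng.normal(0, 0.1, (H, n_classes))
```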
Serang, Oliver
2014-01-01
Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called “causal independence”). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree, on which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustrative example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime and space requirements to subquadratic in the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234
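The core data structure can be sketched directly: to obtain the distribution of a sum of independent count variables, convolve the input distributions pairwise up a balanced tree, so each input participates in only logarithmically many convolutions (each of which could itself use an FFT for long vectors; `np.convolve` keeps the sketch short).

```python
import numpy as np

def convolve_tree(dists):
    """Distribution of the sum of independent count variables, computed by
    pairwise convolution up a balanced (log-depth) tree, as in a
    probabilistic convolution tree."""
    layer = [np.asarray(d, dtype=float) for d in dists]
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            nxt.append(np.convolve(layer[i], layer[i + 1]))
        if len(layer) % 2:          # odd leftover passes up unchanged
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]
```

For example, summing four fair Bernoulli variables reproduces the binomial distribution, exactly as a direct (but slower, left-to-right) chain of convolutions would.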
Partial Fourier reconstruction through data fitting and convolution in k-space.
Huang, Feng; Lin, Wei; Li, Yu
2009-11-01
A partial Fourier acquisition scheme has been widely adopted for fast imaging. There are two problems associated with the existing techniques. First, the majority of the existing techniques demodulate the phase information and cannot provide improved phase information over zero-padding. Second, serious artifacts can be observed in reconstruction when the phase changes rapidly, because the low-resolution phase estimate in the image space is prone to error. To tackle these two problems, a novel and robust method is introduced for partial Fourier reconstruction using k-space convolution. In this method, the phase information is implicitly estimated in k-space through data fitting; the approximated phase information is applied to recover the unacquired k-space data through a Hermitian operation and convolution in k-space. In both spin echo and gradient echo imaging experiments, the proposed method consistently produced images with the lowest error level when compared to Cuppen's algorithm, the projection onto convex sets (POCS) based iterative algorithm, and the homodyne algorithm. Significant improvements are observed in images with rapid phase change. Besides the improvement in magnitude, the phase maps of the images reconstructed by the proposed method also have a significantly lower error level than those of conventional methods.
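A stripped-down 1D illustration of the Hermitian-symmetry step (the paper's k-space data fitting for phase estimation is omitted, which amounts to assuming a real-valued object): the unacquired negative-frequency samples are synthesized as conjugates of the acquired positive-frequency ones, and the inverse FFT then recovers the signal.

```python
import numpy as np

def partial_fourier_1d(kspace_half, n):
    """Fill unacquired k-space samples of a real-valued signal using
    Hermitian symmetry S[-k] = conj(S[k]).  `kspace_half` holds the
    acquired samples k = 0 .. n//2; negative frequencies are synthesized
    by conjugation, then the signal is recovered by inverse FFT."""
    full = np.zeros(n, dtype=complex)
    half = n // 2
    full[:half + 1] = kspace_half
    for k in range(1, n - half):
        full[n - k] = np.conj(full[k])   # Hermitian ("conjugate") fill
    return np.fft.ifft(full).real
```

For a truly real-valued signal this recovery is exact; with a nonzero phase (the realistic case the paper addresses), the phase must first be estimated, which is where the k-space data fitting comes in.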
NASA Astrophysics Data System (ADS)
Zhong, Yanfei; Fei, Feng; Zhang, Liangpei
2016-04-01
The increase of the spatial resolution of remote-sensing sensors helps to capture the abundant details related to the semantics of surface objects. However, it is difficult for the popular object-oriented classification approaches to acquire higher level semantics from high spatial resolution remote-sensing (HSR-RS) images, a difficulty often referred to as the "semantic gap." Instead of designing sophisticated operators, convolutional neural networks (CNNs), a typical deep learning method, can automatically discover intrinsic feature descriptors from a large number of input images to bridge the semantic gap. Because the data volume of the available HSR-RS scene datasets is far smaller than that of natural scene datasets, there have been few reports of CNN approaches for HSR-RS image scene classification. We propose a practical CNN architecture for HSR-RS scene classification, named the large patch convolutional neural network (LPCNN). Large patch sampling is used to generate hundreds of possible scene patches for the feature learning, and a global average pooling layer is used to replace the fully connected network as the classifier, which can greatly reduce the total parameters. The experiments confirm that the proposed LPCNN can learn effective local features to form an effective representation for different land-use scenes, and can achieve a performance that is comparable to the state-of-the-art on public HSR-RS scene datasets.
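The parameter saving from replacing the fully connected classifier with global average pooling is easy to make concrete. The sizes below are illustrative, not LPCNN's actual dimensions:

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse each spatial feature map to one number: (H, W, C) -> (C,),
    so the classifier head needs only a small linear layer."""
    return feature_maps.mean(axis=(0, 1))

# Illustrative head sizes for 8x8 spatial maps, 64 channels, 10 classes:
H, W, C, n_classes = 8, 8, 64, 10
fc_head_params  = H * W * C * n_classes   # flatten -> fully connected
gap_head_params = C * n_classes           # pool -> small linear layer
```

With these numbers the fully connected head carries 40960 weights against 640 for the pooled head, a 64-fold reduction that also removes any dependence of the classifier on the spatial input size.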
Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong
2016-01-01
We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including the size, number and convolutional stride of the local receptive fields, the dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods. PMID:26864172
Accelerating protein docking in ZDOCK using an advanced 3D convolution library.
Pierce, Brian G; Hourai, Yuichiro; Weng, Zhiping
2011-01-01
Computational prediction of the 3D structures of molecular interactions is a challenging area, often requiring significant computational resources to produce structural predictions with atomic-level accuracy. This can be particularly burdensome when modeling large sets of interactions, macromolecular assemblies, or interactions between flexible proteins. We previously developed a protein docking program, ZDOCK, which uses a fast Fourier transform to perform a 3D search of the spatial degrees of freedom between two molecules. By utilizing a pairwise statistical potential in the ZDOCK scoring function, there were notable gains in docking accuracy over previous versions, but this improvement in accuracy came at a substantial computational cost. In this study, we incorporated a recently developed 3D convolution library into ZDOCK, and additionally modified ZDOCK to dynamically orient the input proteins for more efficient convolution. These modifications resulted in an average of over 8.5-fold improvement in running time when tested on 176 cases in a newly released protein docking benchmark, as well as substantially less memory usage, with no loss in docking accuracy. We also applied these improvements to a previous version of ZDOCK that uses a simpler non-pairwise atomic potential, yielding an average speed improvement of over 5-fold on the docking benchmark, while maintaining predictive success. This permits the use of ZDOCK for more intensive tasks such as docking flexible molecules and modeling of interactomes, and allows it to be run more readily by those with limited computational resources. PMID:21949741
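The FFT search at the heart of such docking codes can be sketched in a few lines: scoring every rigid translation of a ligand grid against a receptor grid is a 3D cross-correlation, so one forward/inverse FFT pair replaces the quadratic-cost direct sum over all shifts. The grids here are random stand-ins for discretized molecular scoring grids.

```python
import numpy as np

def fft_correlate_3d(receptor, ligand):
    """All circular translational cross-correlation scores of two 3D grids
    at once, via the convolution theorem: corr[s] = sum_x r[x+s] * l[x]."""
    R = np.fft.fftn(receptor)
    L = np.fft.fftn(ligand)
    return np.fft.ifftn(R * np.conj(L)).real
```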
Noise-induced bias for convolution-based interpolation in digital image correlation.
Su, Yong; Zhang, Qingchuan; Gao, Zeren; Xu, Xiaohai
2016-01-25
In digital image correlation (DIC), the noise-induced bias is significant when the noise level is high or the image contrast is low. However, existing methods for estimating the noise-induced bias apply only to traditional interpolation methods such as linear and cubic interpolation, not to generalized interpolation methods such as B-spline and O-MOMS interpolation. Both traditional interpolation and generalized interpolation belong to the class of convolution-based interpolation. Given the widespread use of generalized interpolation, this paper presents a theoretical analysis of the noise-induced bias for convolution-based interpolation. A sinusoidal approximate formula for the noise-induced bias is derived; this formula motivates an estimation strategy that is fast, simple, and accurate. Furthermore, based on this formula, the mechanism by which sophisticated interpolation methods generally reduce the noise-induced bias is revealed. The validity of the theoretical analysis is established by both numerical simulations and an actual subpixel translation experiment. Compared to existing methods, the formulae provided in this paper are simpler, briefer, and more general. In addition, a more intuitive explanation of the cause of the noise-induced bias is provided by quantitatively characterizing the position dependence of the noise variability in the spatial domain.
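For concreteness, one member of the convolution-based family that such an analysis covers is Keys' cubic convolution (a = -1/2): the interpolated value is a weighted sum of the four nearest samples, with weights given by a fixed kernel evaluated at the subpixel offsets.

```python
import numpy as np

def keys_kernel(s, a=-0.5):
    """Keys' cubic convolution kernel (support [-2, 2])."""
    s = np.abs(s)
    return np.where(s <= 1.0, (a + 2) * s**3 - (a + 3) * s**2 + 1.0,
           np.where(s < 2.0, a * (s**3 - 5 * s**2 + 8 * s - 4), 0.0))

def cubic_interp(samples, x):
    """Convolution-based interpolation of `samples` at subpixel position x."""
    i = int(np.floor(x))
    idx = np.arange(i - 1, i + 3)            # four nearest sample positions
    return float(np.sum(samples[idx] * keys_kernel(x - idx)))
```

Note that because interpolation itself is linear in the samples, zero-mean noise does not bias the interpolated values; the bias analyzed in the paper enters through the nonlinear correlation matching that consumes them.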