Cho, Nathan; Tsiamas, Panagiotis; Velarde, Esteban; Tryggestad, Erik; Jacques, Robert; Berbeco, Ross; McNutt, Todd; Kazanzides, Peter; Wong, John
2018-05-01
The Small Animal Radiation Research Platform (SARRP) has been developed for conformal microirradiation with on-board cone beam CT (CBCT) guidance. The graphics processing unit (GPU)-accelerated Superposition-Convolution (SC) method for dose computation has been integrated into the treatment planning system (TPS) for the SARRP. This paper describes the validation of the SC method at kilovoltage energies by comparison with EBT2 film measurements and Monte Carlo (MC) simulations. MC data were simulated with the EGSnrc code using 3 × 10⁸ to 1.5 × 10⁹ histories, while 21 photon energy bins were used to model the 220 kVp x-rays in the SC method. Various phantoms, including plastic water, cork, graphite, and aluminum, were used to encompass the range of densities of mouse organs. For the comparison, percentage depth doses (PDD) from SC, MC, and film measurements were analyzed. Cross-beam (x,y) dosimetric profiles of SC and film measurements are also presented. Correction factors (CFz) that convert SC dose to MC dose-to-medium are derived from SC and MC simulations in homogeneous aluminum and graphite phantoms to improve the estimation. The SC method produces dose values that are within 5% of film measurements and MC simulations in the flat regions of the profile. The dose is less accurate at the profile edges, due to factors such as geometric uncertainties of film placement and differences in dose calculation grids. The GPU-accelerated Superposition-Convolution dose computation method was successfully validated against EBT2 film measurements and MC calculations. The SC method offers much faster computation than MC and provides calculations of both dose-to-water in medium and dose-to-medium in medium.
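As a rough sketch of how such depth-dependent correction factors could be applied in practice (the abstract gives the idea but not the implementation, so the function, tabulated values, and toy depth dose below are hypothetical):

```python
import numpy as np

def apply_cfz(sc_dose, depths, cfz_depths, cfz_values):
    """Convert SC dose toward an MC-like dose-to-medium estimate using a
    depth-dependent correction factor CF(z) = D_MC(z) / D_SC(z),
    interpolated from a hypothetical tabulation derived in a homogeneous
    phantom (e.g., graphite or aluminum)."""
    cfz = np.interp(depths, cfz_depths, cfz_values)
    return sc_dose * cfz

# Toy example: correct an SC percentage depth dose in graphite
depths = np.linspace(0.0, 5.0, 11)               # cm
sc_pdd = 100.0 * np.exp(-0.25 * depths)          # toy SC depth dose
cfz_tab_z = np.array([0.0, 1.0, 3.0, 5.0])       # tabulated depths (cm)
cfz_tab_v = np.array([1.00, 1.03, 1.05, 1.06])   # illustrative CF(z) values
mc_like_pdd = apply_cfz(sc_pdd, depths, cfz_tab_z, cfz_tab_v)
```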
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
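A minimal sketch of the conventional (stationary-kernel) scatter deconvolution that ASKS/fASKS improve upon, assuming a single symmetric Gaussian kernel and iterative subtraction in Fourier space; the adaptive variants additionally let the kernel shape track local thickness in the projection:

```python
import numpy as np
from numpy.fft import fft2, ifft2, ifftshift

def sks_correct(projection, kernel, n_iter=5):
    """Conventional SKS: iteratively estimate scatter as the convolution of
    the current primary estimate with a fixed, symmetric scatter kernel,
    then subtract it from the measured projection."""
    K = fft2(ifftshift(kernel))
    primary = projection.copy()
    for _ in range(n_iter):
        scatter = np.real(ifft2(fft2(primary) * K))
        primary = np.clip(projection - scatter, 0.0, None)
    return primary, scatter

# Toy example: 64x64 projection with a broad Gaussian scatter kernel
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
kernel = np.exp(-(x**2 + y**2) / (2.0 * 12.0**2))
kernel *= 0.3 / kernel.sum()                       # scatter-to-primary ~ 0.3
proj = np.ones((n, n)); proj[16:48, 16:48] = 2.0   # simple object
corrected, scatter_est = sks_correct(proj, kernel)
```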
Real-time dose computation: GPU-accelerated source modeling and superposition/convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques, Robert; Wong, John; Taylor, Russell
Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ~24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³ times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine in order to provide a substantial performance gain over CPU-based implementations. Real-time dose computation is feasible with the accuracy levels of the superposition/convolution algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devpura, S; Li, H; Liu, C
Purpose: To correlate dose distributions computed using six algorithms for recurrent early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT), with outcome (local failure). Methods: Of 270 NSCLC patients treated with 12 Gy × 4, 20 were found to have local recurrence prior to the 2-year time point. These patients were originally planned with a 1-D pencil beam (1-D PB) algorithm. 4D imaging was performed to manage tumor motion. Regions of local failure were determined from follow-up PET-CT scans. Follow-up CT images were rigidly fused to the planning CT (pCT), and recurrent tumor volumes (Vrecur) were mapped to the pCT. Dose was recomputed, retrospectively, using five algorithms: 3-D PB, collapsed cone convolution (CCC), anisotropic analytical algorithm (AAA), AcurosXB, and Monte Carlo (MC). Tumor control probability (TCP) was computed using the Marsden model (1,2). Patterns of failure were classified as central, in-field, marginal, and distant for Vrecur ≥95% of prescribed dose, 95–80%, 80–20%, and ≤20%, respectively (3). Results: Average PTV D95 (dose covering 95% of the PTV) for 3-D PB, CCC, AAA, AcurosXB, and MC relative to 1-D PB were 95.3±2.1%, 84.1±7.5%, 84.9±5.7%, 86.3±6.0%, and 85.1±7.0%, respectively. TCP values for 1-D PB, 3-D PB, CCC, AAA, AcurosXB, and MC were 98.5±1.2%, 95.7±3.0%, 79.6±16.1%, 79.7±16.5%, 81.1±17.5%, and 78.1±20%, respectively. Patterns of local failure were similar for 1-D and 3-D PB plans, which predicted that the majority of failures occur in central/distal regions, with only ~15% occurring distantly. However, with convolution/superposition and MC type algorithms, the majority of failures (65%) were predicted to be distant, consistent with the literature. Conclusion: Based on MC and convolution/superposition type algorithms, average PTV D95 and TCP were ~15% lower than the planned 1-D PB dose calculation. The patterns-of-failure results suggest that MC and convolution/superposition type algorithms predict different outcomes for patterns of failure relative to PB algorithms. Work supported in part by Varian Medical Systems, Palo Alto, CA.
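The Marsden TCP model cited above belongs to the family of population-averaged Poisson linear-quadratic models; a generic member of that family can be sketched as follows (parameter values and the averaging scheme are illustrative assumptions, not the study's implementation):

```python
import numpy as np

def poisson_tcp(dose_per_fx, n_fx, alpha_mean=0.30, alpha_sd=0.07,
                alpha_beta=10.0, n_clonogens=1e7, n_samples=2000, seed=0):
    """Population-averaged Poisson TCP under the LQ model: sample the
    radiosensitivity alpha over the population, compute cell survival for
    the biologically effective dose, and average exp(-N * SF)."""
    rng = np.random.default_rng(seed)
    alphas = rng.normal(alpha_mean, alpha_sd, n_samples)
    alphas = alphas[alphas > 0]                       # keep physical values
    bed = n_fx * dose_per_fx * (1.0 + dose_per_fx / alpha_beta)
    surviving_fraction = np.exp(-alphas * bed)
    return float(np.mean(np.exp(-n_clonogens * surviving_fraction)))

print(f"toy TCP for 12 Gy x 4: {poisson_tcp(12.0, 4):.3f}")
```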
NASA Astrophysics Data System (ADS)
Devpura, S.; Siddiqui, M. S.; Chen, D.; Liu, D.; Li, H.; Kumar, S.; Gordon, J.; Ajlouni, M.; Movsas, B.; Chetty, I. J.
2014-03-01
The purpose of this study was to systematically evaluate dose distributions computed with 5 different dose algorithms for patients with lung cancers treated using stereotactic ablative body radiotherapy (SABR). Treatment plans for 133 lung cancer patients, initially computed with a 1D-pencil beam (equivalent-path-length, EPL-1D) algorithm, were recalculated with 4 other algorithms commissioned for treatment planning, including 3-D pencil-beam (EPL-3D), anisotropic analytical algorithm (AAA), collapsed cone convolution superposition (CCC), and Monte Carlo (MC). The plan prescription dose was 48 Gy in 4 fractions normalized to the 95% isodose line. Tumors were classified according to location: peripheral tumors surrounded by lung (lung-island, N=39), peripheral tumors attached to the rib-cage or chest wall (lung-wall, N=44), and centrally-located tumors (lung-central, N=50). Relative to the EPL-1D algorithm, PTV D95 and mean dose values computed with the other 4 algorithms were lowest for "lung-island" tumors with the smallest field sizes (3-5 cm). On the other hand, the smallest differences were noted for lung-central tumors treated with the largest field widths (7-10 cm). Amongst all locations, dose distribution differences were most strongly correlated with tumor size for lung-island tumors. For most cases, convolution/superposition and MC algorithms were in good agreement. Mean lung dose (MLD) values computed with the EPL-1D algorithm were highly correlated with those of the other algorithms (correlation coefficient = 0.99). The MLD values were found to be ~10% lower for small lung-island tumors with the model-based (convolution/superposition and MC) vs. the correction-based (pencil-beam) algorithms, with the model-based algorithms predicting greater low-dose spread within the lungs. This study suggests that pencil beam algorithms should be avoided for lung SABR planning. For the most challenging cases, small tumors surrounded entirely by lung tissue (lung-island type), a Monte-Carlo-based algorithm may be warranted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tajaldeen, A; Ramachandran, P; Geso, M
2015-06-15
Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent stereotactic ablative body radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume (the "same dose" method) and (ii) the same monitor units in all algorithms (the "same monitor units" method), were used for studying the performance of seven different dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven dose calculation algorithms were the Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytic Algorithm (AAA), Acuros XB, and pencil beam (PB) algorithms. Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity, and dose fall-off index. In addition, the doses to critical structures such as the lungs, heart, oesophagus, and spinal cord were also studied. Statistical analysis was performed using Prism software. Results: The mean±SD conformity index for the Superposition, Fast Superposition, Clarkson, and FFT Convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7, and 2.17±0.59, respectively, whereas for AAA, pencil beam, and Acuros XB it was 1.4±0.27, 1.66±0.27, and 1.35±0.24, respectively. Conclusion: Our study showed significant variations among the seven different algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters. The Clarkson, FFT Convolution, and pencil beam algorithms showed large differences compared to the Superposition algorithm. Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice of algorithms in lung cancer radiotherapy involving small fields. However, further investigation by Monte Carlo simulation is required to confirm our results.
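For reference, one common (RTOG-style) conformity index is the ratio of the volume enclosed by the prescription isodose to the target volume; the abstract does not state which CI variant was used, so the sketch below is illustrative:

```python
import numpy as np

def conformity_index(dose, target_mask, rx_dose, voxel_volume=1.0):
    """RTOG-style CI: volume receiving at least the prescription dose
    divided by the target volume (values near 1 indicate conformal plans)."""
    rx_volume = np.count_nonzero(dose >= rx_dose) * voxel_volume
    target_volume = np.count_nonzero(target_mask) * voxel_volume
    return rx_volume / target_volume

# Toy example: radially decaying dose around a spherical target
z, y, x = np.mgrid[-20:20, -20:20, -20:20].astype(float)
r = np.sqrt(x**2 + y**2 + z**2)
dose = 60.0 * np.exp(-r / 15.0)                # toy dose fall-off
target = r <= 8.0
print(f"CI = {conformity_index(dose, target, rx_dose=34.0):.2f}")
```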
Evaluation of six TPS algorithms in computing entrance and exit doses.
Tan, Yun I; Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun; Elliott, Alex
2014-05-08
Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%-3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison.
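A brute-force sketch of the global 2D gamma evaluation (e.g., 3%/3 mm) used for the comparisons above, assuming both distributions share one grid spacing; clinical tools add interpolation and faster search strategies:

```python
import numpy as np

def gamma_2d(ref, evl, spacing, dose_crit=0.03, dist_crit=3.0):
    """Global gamma: for each reference point, minimise the combined
    dose-difference/distance metric over a local search window."""
    dmax = ref.max()
    w = int(np.ceil(2 * dist_crit / spacing))       # search half-width
    pad = np.pad(evl, w, mode='edge')
    dy, dx = np.mgrid[-w:w + 1, -w:w + 1]
    dist2 = ((dy * spacing)**2 + (dx * spacing)**2) / dist_crit**2
    gamma = np.empty_like(ref, dtype=float)
    ny, nx = ref.shape
    for j in range(ny):
        for i in range(nx):
            window = pad[j:j + 2 * w + 1, i:i + 2 * w + 1]
            dd2 = ((window - ref[j, i]) / (dose_crit * dmax))**2
            gamma[j, i] = np.sqrt((dd2 + dist2).min())
    return gamma

ref = np.random.default_rng(1).random((40, 40)) + 1.0
g = gamma_2d(ref, ref * 1.01, spacing=1.0)          # 1% rescaled copy
print(f"pass rate (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")
```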
Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.
2013-01-01
Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
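A 1D sketch of the density-scaling approximation that the material-specific kernels are meant to replace: the water kernel is evaluated at the radiological (density-weighted) distance along each collapsed-cone ray. Normalization prefactors (e.g., the ρ² factor in O'Connor's theorem for a uniform medium) are omitted here:

```python
import numpy as np

def kernel_along_ray(edk_of_r, r_geom, density):
    """Evaluate a water kernel at the radiological distance along one ray,
    the standard heterogeneity handling in C/S implementations."""
    dr = np.diff(np.concatenate([[0.0], r_geom]))
    r_rad = np.cumsum(density * dr)                 # radiological distance
    return edk_of_r(r_rad)

edk = lambda r: np.exp(-0.5 * r)                    # toy water kernel fall-off
r = np.linspace(0.05, 10.0, 200)                    # geometric distance (cm)
rho = np.where((r > 3) & (r < 6), 0.26, 1.0)        # lung-like slab at 3-6 cm
k_het = kernel_along_ray(edk, r, rho)
```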
Zhu, Jinhan; Chen, Lixin; Chen, Along; Luo, Guangwen; Deng, Xiaowu; Liu, Xiaowei
2015-04-11
To use a graphics processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations, and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans, based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of the 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.
Study of Nonclassical Fields in Phase-Sensitive Reservoirs
NASA Technical Reports Server (NTRS)
Kim, Myung Shik; Imoto, Nobuyuki
1996-01-01
We show that the reservoir influence can be modeled by an infinite array of beam splitters. The superposition of the input fields in the beam splitter is discussed using the convolution laws for their quasiprobabilities. We derive the Fokker-Planck equation for the cavity field coupled with a phase-sensitive reservoir using the convolution law. We also analyze amplification in the phase-sensitive reservoir using the modified beam-splitter model. We show the similarities and differences between the dissipation and amplification models. We show that a super-Poissonian input field cannot become sub-Poissonian through phase-sensitive amplification.
GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation.
Zhou, Bo; Yu, Cedric X; Chen, Danny Z; Hu, X Sharon
2010-11-01
Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans as emerging advanced radiation therapy technologies are exerting ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation so as to make the method practical for day-to-day usage. Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm was modified to match the GPU hardware architecture. The performance of the authors' GPU-based implementation on an Nvidia GTX260 card is compared to a multithreaded software implementation on a quad-core system. A speedup in the range of 6.7-11.4x is observed for the clinical cases used. The statistical fluctuation of less than 2% also indicates that the accuracy of the authors' GPU-based implementation is in good agreement with the results from the quad-core CPU implementation. This work shows that the GPU is a feasible and cost-efficient solution compared to other alternatives such as cluster machines or field-programmable gate arrays for satisfying the increasing demands on computation speed and accuracy of dose calculation. There are, however, inherent limitations to using GPUs for accelerating MC-type applications, which are analyzed in detail in this article.
Tooth labeling in cone-beam CT using deep convolutional neural network for forensic identification
NASA Astrophysics Data System (ADS)
Miki, Yuma; Muramatsu, Chisako; Hayashi, Tatsuro; Zhou, Xiangrong; Hara, Takeshi; Katsumata, Akitoshi; Fujita, Hiroshi
2017-03-01
In large disasters, dental records play an important role in forensic identification. However, filing dental charts for corpses is not an easy task for general dentists, and it is laborious and time-consuming work in large scale disasters. We have been investigating a tooth labeling method on dental cone-beam CT images for the purpose of automatic filing of dental charts. In our method, individual teeth in CT images are detected and classified into seven tooth types using a deep convolutional neural network. We employed a fully convolutional network using the AlexNet architecture for detecting each tooth and applied our previous method using regular AlexNet for classifying the detected teeth into 7 tooth types. From 52 CT volumes obtained by two imaging systems, five images each were randomly selected as test data, and the remaining 42 cases were used as training data. The result showed a tooth detection accuracy of 77.4% with an average of 5.8 false detections per image. This result indicates the potential utility of the proposed method for automatic recording of dental information.
NASA Astrophysics Data System (ADS)
Qi, Youzheng; Huang, Ling; Wu, Xin; Zhu, Wanhua; Fang, Guangyou; Yu, Gang
2017-07-01
Quantitative modeling of the transient electromagnetic (TEM) response requires consideration of the full transmitter waveform, i.e., not only the specific current waveform in a half cycle but also the bipolar repetition. In this paper, we present a novel temporal interpolation and convolution (TIC) method to facilitate accurate TEM modeling. We first calculate the temporal basis response on a logarithmic scale using fast digital-filter-based methods. Then, we introduce a function named hamlogsinc in the framework of discrete signal processing theory to reconstruct the basis function and to perform the convolution with the positive half of the waveform. Finally, a superposition procedure is used to account for the effect of previous bipolar waveforms. Comparisons with the established fast Fourier transform method demonstrate that our TIC method achieves the same accuracy in a shorter computing time.
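A simplified, uniform-grid stand-in for the bipolar superposition step (the paper works on a logarithmic grid with a hamlogsinc reconstruction; the waveform, impulse response, and sizes below are toy assumptions):

```python
import numpy as np

def tem_bipolar_response(impulse_response, waveform, dt,
                         half_period_samples, n_half_cycles=4):
    """Convolve the earth impulse response with one half-cycle current
    waveform, then superpose previous half-cycles with alternating
    polarity: the k-th earlier half-cycle contributes its response
    sampled k*T/2 later in time, with sign (-1)**k."""
    single = np.convolve(impulse_response, waveform)[:len(impulse_response)] * dt
    n = len(single) - (n_half_cycles - 1) * half_period_samples
    idx = np.arange(n)
    total = np.zeros(n)
    for k in range(n_half_cycles):
        total += (-1) ** k * single[idx + k * half_period_samples]
    return total

dt = 1e-5                                 # s, uniform grid for simplicity
t = np.arange(1, 20001) * dt
impulse = t ** -1.5                       # toy earth impulse response
wave = np.ones(50)                        # toy rectangular half-cycle current
resp = tem_bipolar_response(impulse, wave, dt, half_period_samples=2000)
```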
NASA Astrophysics Data System (ADS)
Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud
2015-10-01
Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for each of the electron, positron and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and 18F, 99mTc, 131I and 177Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, corresponding to the comparison of 80 absorbed dose (AD) points per phantom between Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results with SCC were still good but slightly worse, since at least 97% of AD values verified the γ (5%, 5 mm) criterion, except for a value of 57% for 99mTc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computation complexity.
NASA Astrophysics Data System (ADS)
Xie, Shi-Peng; Luo, Li-Min
2012-06-01
The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies. However, this method differs in that a scatter detecting blocker (SDB) was used between the X-ray source and the tested object to model a self-adaptive scatter kernel. This study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on the SKS. Image quality can be improved by removing the scatter distribution. The results show that the method can effectively reduce scatter artifacts and improve image quality. Our approach increases the image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique can be significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single scan acquisition.
NASA Astrophysics Data System (ADS)
Al-Hallaq, H. A.; Reft, C. S.; Roeske, J. C.
2006-03-01
The dosimetric effects of bone and air heterogeneities in head and neck IMRT treatments were quantified. An anthropomorphic RANDO phantom was CT-scanned with 16 thermoluminescent dosimeter (TLD) chips placed in and around the target volume. A standard IMRT plan generated with CORVUS was used to irradiate the phantom five times. On average, measured dose was 5.1% higher than calculated dose. Measurements were higher by 7.1% near the heterogeneities and by 2.6% in tissue. The dose difference between measurement and calculation was outside the 95% measurement confidence interval for six TLDs. Using CORVUS' heterogeneity correction algorithm, the average difference between measured and calculated doses decreased by 1.8% near the heterogeneities and by 0.7% in tissue. Furthermore, dose differences lying outside the 95% confidence interval were eliminated for five of the six TLDs. TLD doses recalculated by Pinnacle³'s convolution/superposition algorithm were consistently higher than CORVUS doses, a trend that matched our measured results. These results indicate that the dosimetric effects of air cavities are larger than those of bone heterogeneities, thereby leading to a higher delivered dose compared to CORVUS calculations. More sophisticated algorithms such as convolution/superposition or Monte Carlo should be used for accurate tailoring of IMRT dose in head and neck tumours.
NASA Astrophysics Data System (ADS)
Geng, Lin; Zhang, Xiao-Zheng; Bi, Chuan-Xing
2015-05-01
The time domain plane wave superposition method is extended to reconstruct the transient pressure field radiated by an impacted plate and the normal acceleration of the plate. In the extended method, the pressure measured on the hologram plane is expressed as a superposition of time convolutions between the time-wavenumber normal acceleration spectrum on a virtual source plane and the time domain propagation kernel relating the pressure on the hologram plane to the normal acceleration spectrum on the virtual source plane. By performing an inverse operation, the normal acceleration spectrum on the virtual source plane can be obtained by an iterative solving process, and then taken as the input to reconstruct the whole pressure field and the normal acceleration of the plate. An experiment on a clamped rectangular steel plate impacted by a steel ball is presented. The experimental results demonstrate that the extended method is effective in visualizing the transient vibration and sound radiation of an impacted plate in both the time and space domains, thus providing important information for an overall understanding of the vibration and sound radiation of the plate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S; Rangaraj, D
2016-06-15
Purpose: Although cone-beam CT (CBCT) imaging has become popular in radiation oncology, its imaging dose estimation is still challenging. The goal of this study is to assess kilovoltage CBCT doses using GMctdospp, an EGSnrc-based Monte Carlo (MC) framework. Methods: Two Varian OBI x-ray tube models were implemented in the GMctdospp framework of the EGSnrc MC system. The x-ray spectrum of the 125 kVp CBCT beam was acquired from an EGSnrc/BEAMnrc simulation and validated against IPEM report 78. The spectrum was then utilized as an input spectrum in the GMctdospp dose calculations. Both full and half bowtie pre-filters of the OBI system were created using the egs-prism module. The x-ray tube MC models were verified by comparing calculated dosimetric profiles (lateral and depth) to ion chamber measurements for a static x-ray beam irradiating a cuboid water phantom. Abdominal CBCT imaging doses were simulated in the GMctdospp framework using a 5-year-old anthropomorphic phantom. The organ doses and effective dose (ED) from the framework were assessed and compared to MOSFET measurements and convolution/superposition (CS) dose calculations. Results: The lateral and depth dose profiles in the cuboid water phantom matched within 6%, except in a few areas: the left shoulder of the half-bowtie lateral profile and the surface of the water phantom. The organ doses and ED from the MC framework were found to be close to the MOSFET measurements and CS calculations, within 2 cGy and 5 mSv, respectively. Conclusion: This study implemented and validated Varian OBI x-ray tube models in the GMctdospp MC framework using a cuboid water phantom, and CBCT imaging doses were evaluated in a 5-year-old anthropomorphic phantom. In a future study, various CBCT imaging protocols will be implemented and validated, and patient CT images will subsequently be used to estimate CBCT imaging doses in patients.
Information Theoretic Characterization of Physical Theories with Projective State Space
NASA Astrophysics Data System (ADS)
Zaopo, Marco
2015-08-01
Probabilistic theories are a natural framework to investigate the foundations of quantum theory and possible alternative or deeper theories. In a generic probabilistic theory, states of a physical system are represented as vectors of outcome probabilities and state spaces are convex cones. In this picture the physics of a given theory is related to the geometric shape of the cone of states. In quantum theory, for instance, the shape of the cone of states corresponds to a projective space over the complex numbers. In this paper we investigate geometric constraints on the state space of a generic theory imposed by the following information theoretic requirements: every non-completely-mixed state of a system is perfectly distinguishable from some other state in a single-shot measurement; the information capacity of physical systems is conserved under making mixtures of states. These assumptions guarantee that a generic physical system satisfies a natural principle asserting that the more mixed a state of the system is, the less information can be stored in the system using that state as a logical value. We show that all theories satisfying the above assumptions are such that the shape of their cones of states is that of a projective space over a generic field of numbers. Remarkably, these theories constitute generalizations of quantum theory in which the superposition principle holds with coefficients pertaining to a generic field of numbers in place of the complex numbers. If the field of numbers is trivial and contains only one element, we obtain classical theory. This result shows that the superposition principle is quite common among probabilistic theories, while its absence indicates either classical theory or an implausible theory.
An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm
NASA Astrophysics Data System (ADS)
Jacques, Robert; McNutt, Todd
2014-03-01
Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte Carlo accuracy benchmark, 23 similar accuracy benchmarks, and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near-Monte-Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing the average mean error from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e., < 0.1 g/cm³) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU-based algorithm, reaching the accuracy levels of Monte Carlo-based methods with performance in a few tenths of a second per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
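A 1D illustration of the HCS idea, assuming a first-order recursive filter with separate downstream and upstream coefficients; the published filter is multivariate and its coefficients are not given in the abstract, so the values here are placeholders:

```python
import numpy as np

def effective_density(rho, alpha_fwd=0.15, alpha_bwd=0.05):
    """Direction-sensitive recursive filtering of density along a ray: the
    effective density lags behind the true density, mimicking the gradual
    loss and re-establishment of electron equilibrium near interfaces."""
    fwd = np.empty_like(rho)
    fwd[0] = rho[0]
    for i in range(1, len(rho)):               # downstream sweep
        fwd[i] = alpha_fwd * rho[i] + (1 - alpha_fwd) * fwd[i - 1]
    bwd = np.empty_like(rho)
    bwd[-1] = rho[-1]
    for i in range(len(rho) - 2, -1, -1):      # upstream sweep
        bwd[i] = alpha_bwd * rho[i] + (1 - alpha_bwd) * bwd[i + 1]
    return 0.5 * (fwd + bwd)

rho = np.ones(100)
rho[40:70] = 0.25                              # lung-like slab
rho_eff = effective_density(rho)               # smoothed density seen by C/S
```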
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parenica, H; Ford, J; Mavroidis, P
Purpose: To quantify and compare the effect of metallic dental implants (MDI) on dose distributions calculated using the Collapsed Cone Convolution Superposition (CCCS) algorithm or a Monte Carlo algorithm (with and without correcting for the density of the MDI). Methods: Seven patients previously treated in the head and neck region were included in this study. The MDI and the streaking artifacts on the CT images were carefully contoured. For each patient a plan was optimized and calculated using the Pinnacle³ treatment planning system (TPS). For each patient two dose calculations were performed: a) with the densities of the MDI and CT artifacts overridden (12 g/cc and 1 g/cc, respectively) and b) without density overrides. The plans were then exported to the Monaco TPS and recalculated using a Monte Carlo dose calculation algorithm. The changes in dose to PTVs and surrounding regions of interest (ROIs) were examined between all plans. Results: The Monte Carlo dose calculation indicated that PTVs received 6% lower dose than the CCCS algorithm predicted. In some cases, the Monte Carlo algorithm indicated that surrounding ROIs received higher dose (up to a factor of 2). Conclusion: Not properly accounting for dental implants can impact both the high dose regions (PTV) and the low dose regions (OAR). This study implies that if MDI and the artifacts are not appropriately contoured and given the correct density, there is a potentially significant impact on PTV coverage and OAR maximum doses.
FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.
2016-09-01
We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
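The core trick, expressing a log-uniformly sampled power spectrum as a superposition of complex power laws via an FFT in log k, can be sketched as follows (a stripped-down version; the public code adds windowing and zero-padding):

```python
import numpy as np

def power_law_decomposition(k, P, nu=-2.0):
    """Decompose P(k) on a log-uniform grid as
    P(k) = k**nu * sum_m c_m * k**(1j*eta_m), with nu a tuning bias."""
    N = len(k)
    delta = np.log(k[1] / k[0])                    # log-k spacing
    c_m = np.fft.rfft(P * k ** (-nu)) / N
    eta_m = 2 * np.pi * np.arange(len(c_m)) / (N * delta)
    c_m *= np.exp(-1j * eta_m * np.log(k[0]))      # absorb the k[0] phase
    return c_m, eta_m

def reconstruct(k, c_m, eta_m, nu=-2.0):
    weights = np.full(len(c_m), 2.0)               # rfft keeps half the modes
    weights[0] = 1.0
    weights[-1] = 1.0                              # Nyquist term (even N)
    series = (weights[:, None] * c_m[:, None]
              * np.exp(1j * eta_m[:, None] * np.log(k)[None, :]))
    return np.real(series.sum(axis=0)) * k ** nu

k = np.logspace(-3, 1, 256)
P = k / (1 + (k / 0.1) ** 3)                       # toy power spectrum
c_m, eta_m = power_law_decomposition(k, P)
err = np.max(np.abs(reconstruct(k, c_m, eta_m) / P - 1))
print(f"max relative reconstruction error: {err:.1e}")
```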
Interaction of Aquifer and River-Canal Network near Well Field.
Ghosh, Narayan C; Mishra, Govinda C; Sandhu, Cornelius S S; Grischek, Thomas; Singh, Vikrant V
2015-01-01
The article presents semi-analytical mathematical models to assess (1) enhancement of seepage from a canal and (2) induced flow from a partially penetrating river in an unconfined aquifer consequent to groundwater withdrawal in a well field in the vicinity of the river and canal. The nonlinear exponential relation between seepage from a canal reach and the hydraulic head in the aquifer beneath the canal reach is used for quantifying seepage from the canal reach. Hantush's (1967) basic solution for water table rise due to recharge from a rectangular spreading basin in the absence of a pumping well is used for generating unit pulse response function coefficients for water table rise in the aquifer. Duhamel's convolution theory and the method of superposition are applied to obtain the water table position due to pumping and recharge from different canal reaches. Hunt's (1999) basic solution for river depletion due to constant pumping from a well in the vicinity of a partially penetrating river is used to generate unit pulse response function coefficients. Applying the convolution technique and superposition, and treating the recharge from canal reaches as recharge through conceptual injection wells, river depletion consequent to variable pumping and recharge is quantified. The integrated model is applied to a case study in Haridwar (India). The well field consists of 22 pumping wells located in the vicinity of a perennial river and a canal network. The river bank filtrate portion consequent to pumping is quantified.
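The discrete Duhamel step underlying the model, superposing unit-pulse responses over the pumping history, reduces to a convolution (the response coefficients below are arbitrary placeholders, not Hantush's or Hunt's solutions):

```python
import numpy as np

def drawdown_history(pumping_rates, unit_response):
    """Duhamel superposition: drawdown at each time step is the discrete
    convolution of the pumping-rate history with unit-pulse response
    function coefficients."""
    n = len(pumping_rates)
    return np.convolve(pumping_rates, unit_response)[:n]

Q = np.array([100.0, 100.0, 150.0, 150.0, 80.0, 80.0, 120.0, 120.0])  # m3/day
delta = 0.02 / np.sqrt(np.arange(1, 9))    # toy unit-pulse coefficients
s = drawdown_history(Q, delta)             # drawdown time series (m)
```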
Cone-shaped source characteristics and inductance effect of transient electromagnetic method
NASA Astrophysics Data System (ADS)
Yang, Hai-Yan; Li, Feng-Ping; Yue, Jian-Hua; Guo, Fu-Sheng; Liu, Xu-Hua; Zhang, Hua
2017-03-01
Small multi-turn coil devices are used with the transient electromagnetic method (TEM) in areas with limited space, particularly in underground environments such as coal mine roadways and engineering tunnels, and for detecting shallow geological targets in environmental and engineering fields. However, the equipment involved has strong mutual inductance coupling, which causes a lengthy turn-off time and a deep "blind zone". This study proposes a new transmitter device with a conical source and derives the radius formula of each coil and the mutual inductance coefficient of the cone. Theoretical analysis of the primary field of the conical source in a uniform medium, together with a comparison of the inductance of the new device with that of the multi-turn coil, shows that the inductance of the multi-turn coil is nine times greater than that of the conical source at the same equivalent magnetic moment of 926.1 A·m². This indicates that the new source leads to a much shallower "blind zone". Furthermore, increasing the bottom radius and number of turns of the cone creates a larger mutual inductance, but increasing the cone height results in a lower mutual inductance. Using the superposition principle, the primary and secondary magnetic fields for a conical source in a homogeneous medium are calculated; the results indicate that the magnetic behavior of the cone is the same as that of the multi-turn coils, but the transient responses of the secondary field and the total field are stronger than those of the multi-turn coils. To study the transient response characteristics of a cone-shaped source in a layered earth, a numerical filtering algorithm is then developed using the fast Hankel transform and the improved cosine transform, again using the superposition principle. An average apparent resistivity, inverted from the induced electromotive force in each coil, is defined to represent the comprehensive resistivity of the conical source. To verify the forward calculation method, the transient responses of H-type and KH-type models are calculated, and the data are inverted using a "smoke ring" inversion. The inversion results agree well with the original models and show that the forward calculation method is effective. The results of this study provide an option for solving the problem of a deep "blind zone" and also provide a theoretical reference for further research.
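The pairwise mutual-inductance bookkeeping for a conical stack of loops can be sketched with Maxwell's classical formula for coaxial circular loops; this is a generic textbook formula (requiring SciPy), not the paper's derivation, and the geometry values are illustrative:

```python
import numpy as np
from scipy.special import ellipe, ellipk

MU0 = 4e-7 * np.pi

def mutual_inductance(r1, r2, z):
    """Maxwell's formula for two coaxial circular loops of radii r1, r2 at
    axial separation z. Note SciPy's ellipk/ellipe take the parameter
    m = k**2, not the modulus k."""
    k2 = 4 * r1 * r2 / ((r1 + r2) ** 2 + z ** 2)
    k = np.sqrt(k2)
    return MU0 * np.sqrt(r1 * r2) * ((2 / k - k) * ellipk(k2)
                                     - (2 / k) * ellipe(k2))

def total_mutual(radii, heights):
    """Sum of pairwise mutual inductances for a series winding; the factor
    2 counts each pair's flux linkage in both directions."""
    total = 0.0
    for i in range(len(radii)):
        for j in range(i + 1, len(radii)):
            total += 2 * mutual_inductance(radii[i], radii[j],
                                           abs(heights[i] - heights[j]))
    return total

# Toy cone: 10 turns tapering from 0.5 m to 0.1 m radius over 0.3 m height
radii = np.linspace(0.5, 0.1, 10)
heights = np.linspace(0.0, 0.3, 10)
print(f"pairwise mutual inductance: {total_mutual(radii, heights):.3e} H")
```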
Compressional and Shear Wakes in a 2D Dusty Plasma Crystal
NASA Astrophysics Data System (ADS)
Nosenko, V.; Goree, J.; Ma, Z. W.; Dubin, D. H. E.
2001-10-01
A 2D crystalline lattice can vibrate with two kinds of sound waves, compressional and shear (transverse), where the latter has a much slower sound speed. When these waves are excited by a moving supersonic disturbance, the superposition of the waves creates a Mach cone, i.e., a V-shaped wake. In our experiments, the supersonic disturbance was a moving spot of argon laser light, and this laser light applied a force, due to radiation pressure, on the particles. The beam was swept across the lattice in a controlled and repeatable manner. The particles were levitated in an argon rf discharge. By moving the laser spot faster than the shear sound speed c_t, but slower than the compressional sound speed c_l, we excited a shear wave Mach cone. Alternatively, by moving the laser spot faster than c_l, we excited both cones. In addition to Mach cones, we also observed a wake structure that arises from the compressional wave’s dispersion. We compare our results to Dubin’s theory (Phys. Plasmas 2000) and to molecular dynamics (MD) simulations.
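The cone opening angles follow from the standard Mach relation; with two sound speeds the lattice supports two nested cones:

```latex
% Mach-cone half-angle for a disturbance moving at speed v through a
% medium with sound speed c:
\sin\theta = \frac{c}{v}, \qquad v > c .
% With c_t < v < c_l only the shear cone is excited
% (\sin\theta_t = c_t / v); for v > c_l both cones appear, the
% compressional cone being the narrower one since c_l > c_t.
```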
Automated detection of geological landforms on Mars using Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Palafox, Leon F.; Hamilton, Christopher W.; Scheidt, Stephen P.; Alvarez, Alexander M.
2017-04-01
The large volume of high-resolution images acquired by the Mars Reconnaissance Orbiter has opened a new frontier for developing automated approaches to detecting landforms on the surface of Mars. However, most landform classifiers focus on crater detection, which represents only one of many geological landforms of scientific interest. In this work, we use Convolutional Neural Networks (ConvNets) to detect both volcanic rootless cones and transverse aeolian ridges. Our system, named MarsNet, consists of five networks, each of which is trained to detect landforms of different sizes. We compare our detection algorithm with a widely used method for image recognition, Support Vector Machines (SVMs) using Histogram of Oriented Gradients (HOG) features. We show that ConvNets can detect a wide range of landforms and have better accuracy and recall on testing data than traditional classifiers based on SVMs.
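A minimal sketch of the HOG + SVM baseline that MarsNet is compared against, using scikit-image's hog and scikit-learn's SVC on toy data (real inputs would be labeled HiRISE image tiles; window sizes and SVM settings here are assumptions):

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# Toy stand-ins for 64x64 grayscale image tiles of two landform classes
rng = np.random.default_rng(0)
X_img = rng.random((40, 64, 64))
y = rng.integers(0, 2, 40)

# Extract HOG features, then train/evaluate a linear SVM
X_feat = np.array([hog(im, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                   for im in X_img])
clf = SVC(kernel='linear').fit(X_feat[:30], y[:30])
print("toy hold-out accuracy:", clf.score(X_feat[30:], y[30:]))
```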
View-interpolation of sparsely sampled sinogram using convolutional neural network
NASA Astrophysics Data System (ADS)
Lee, Hoyeon; Lee, Jongha; Cho, Suengryong
2017-02-01
Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. Sparse-view CT is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, with advanced iterative image reconstruction yielding varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is streaking in the reconstructed images. An alternative approach to sparse-view CT imaging uses interpolation methods to fill in the missing view data and reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the widely used deep-learning methods, to find missing projection data, and compared its performance with other interpolation techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, P.; Tambasco, M.; LaFontaine, R.
2014-08-15
Our goal is to compare the dosimetric accuracy of the Pinnacle³ 9.2 Collapsed Cone Convolution Superposition (CCCS) and the iPlan 4.1 Monte Carlo (MC) and Pencil Beam (PB) algorithms in an anthropomorphic lung phantom using measurement as the gold standard. Ion chamber measurements were taken for 6, 10, and 18 MV beams in a CIRS E2E SBRT Anthropomorphic Lung Phantom, which mimics lung, spine, ribs, and tissue. The plan implemented six beams with a 5×5 cm² field size, delivering a total dose of 48 Gy. Data from the planning systems were computed at the treatment isocenter in the left lung, and at two off-axis points, the spinal cord and the right lung. The measurements were taken using a pinpoint chamber. The best agreement between data from the algorithms and our measurements occurs at the treatment isocenter. For the 6, 10, and 18 MV beams, the iPlan 4.1 MC software performs best, with 0.3%, 0.2%, and 4.2% absolute percent difference from measurement, respectively. Differences between our measurements and algorithm data are much greater for the off-axis points. The best agreement seen for the right lung and spinal cord is 11.4% absolute percent difference with 6 MV iPlan 4.1 PB and 18 MV iPlan 4.1 MC, respectively. As energy increases, the absolute percent difference from measured data increases, up to 54.8% for the 18 MV CCCS algorithm. This study suggests that iPlan 4.1 MC computes peripheral dose and target dose in the lung more accurately than the iPlan 4.1 PB and Pinnacle CCCS algorithms.
Classification of teeth in cone-beam CT using deep convolutional neural network.
Miki, Yuma; Muramatsu, Chisako; Hayashi, Tatsuro; Zhou, Xiangrong; Hara, Takeshi; Katsumata, Akitoshi; Fujita, Hiroshi
2017-01-01
Dental records play an important role in forensic identification. To this end, postmortem dental findings and teeth conditions are recorded in a dental chart and compared with those of antemortem records. However, most dentists are inexperienced at recording the dental chart for corpses, and it is a physically and mentally laborious task, especially in large scale disasters. Our goal is to automate the dental filing process by using dental x-ray images. In this study, we investigated the application of a deep convolutional neural network (DCNN) for classifying tooth types on dental cone-beam computed tomography (CT) images. Regions of interest (ROIs) including single teeth were extracted from CT slices. Fifty-two CT volumes were randomly divided into 42 training and 10 test cases, and the ROIs obtained from the training cases were used for training the DCNN. For examining the sampling effect, random sampling was performed 3 times, and training and testing were repeated. We used the AlexNet network architecture provided in the Caffe framework, which consists of 5 convolution layers, 3 pooling layers, and 2 fully connected layers. To reduce the overtraining effect, we augmented the data by image rotation and intensity transformation. The test ROIs were classified into 7 tooth types by the trained network. The average classification accuracy using the training data augmented by image rotation and intensity transformation was 88.8%. Compared with the result without data augmentation, data augmentation resulted in an approximately 5% improvement in classification accuracy. This indicates that further improvement can be expected by expanding the CT dataset. Unlike conventional methods, the proposed method is advantageous in obtaining high classification accuracy without the need for precise tooth segmentation. The proposed tooth classification method can be useful for automatic filing of dental charts for forensic identification.
Nielsen, Tine B; Wieslander, Elinore; Fogliata, Antonella; Nielsen, Morten; Hansen, Olfred; Brink, Carsten
2011-05-01
To investigate differences in calculated doses and normal tissue complication probability (NTCP) values between different dose algorithms. Six dose algorithms from four different treatment planning systems were investigated: Eclipse AAA, Oncentra MasterPlan Collapsed Cone and Pencil Beam, Pinnacle Collapsed Cone, and XiO Multigrid Superposition and Fast Fourier Transform Convolution. Twenty NSCLC patients treated in the period 2001-2006 on the same accelerator were included, and the accelerator used for the treatments was modeled in the different systems. The treatment plans were recalculated with the same number of monitor units and beam arrangements across the dose algorithms. Dose volume histograms of the GTV, PTV, combined lungs (excluding the GTV), and heart were exported and evaluated. NTCP values for the heart and lungs were calculated using the relative seriality model and the LKB model, respectively. Furthermore, NTCP for the lungs was calculated from two different model parameter sets. Calculations and evaluations were performed both including and excluding density corrections. Statistically significant differences were found between the calculated doses to the heart, lung, and targets across the algorithms. Mean lung dose and V20 are not very sensitive to changes between the investigated dose calculation algorithms. However, the PTV dose levels averaged over the patient population vary by up to 11%. The predicted NTCP values for pneumonitis vary between 0.20 and 0.24 or 0.35 and 0.48 across the investigated dose algorithms, depending on the chosen model parameter set. The influence of the use of density correction in the dose calculation on the predicted NTCP values depends on the specific dose calculation algorithm and the model parameter set. For fixed values of these, the changes in NTCP can be up to 45%. Calculated NTCP values for pneumonitis are more sensitive to the choice of algorithm than mean lung dose and V20, which are also commonly used for plan evaluation. The NTCP values for heart complication are, in this study, not very sensitive to the choice of algorithm. Dose calculations based on density corrections result in quite different NTCP values than calculations without density corrections. It is therefore important when working with NTCP planning to use NTCP parameter values based on calculations and treatments similar to those for which the NTCP is of interest.
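For reference, the LKB lung NTCP named above maps a generalized EUD from a differential DVH through a probit function; the parameter values below are illustrative placeholders, not the study's two model parameter sets:

```python
import numpy as np
from math import erf, sqrt

def lkb_ntcp(doses, volumes, n, m, td50):
    """LKB NTCP: gEUD = (sum v_i * D_i**(1/n))**n reduced from the DVH,
    then NTCP = Phi((gEUD - TD50) / (m * TD50))."""
    v = np.asarray(volumes, dtype=float)
    v /= v.sum()                                   # fractional volumes
    geud = np.sum(v * np.asarray(doses) ** (1.0 / n)) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Toy differential lung DVH: bin doses (Gy) and relative volumes
doses = np.array([2.0, 6.0, 10.0, 16.0, 24.0])
vols = np.array([0.45, 0.25, 0.15, 0.10, 0.05])
print(f"toy pneumonitis NTCP: {lkb_ntcp(doses, vols, n=1.0, m=0.45, td50=30.0):.3f}")
```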
Scintillator performance considerations for dedicated breast computed tomography
NASA Astrophysics Data System (ADS)
Vedantham, Srinivasan; Shi, Linxi; Karellas, Andrew
2017-09-01
Dedicated breast computed tomography (BCT) is an emerging clinical modality that can eliminate tissue superposition and has the potential for improved sensitivity and specificity for breast cancer detection and diagnosis. It is performed without physical compression of the breast. Most of the dedicated BCT systems use large-area detectors operating in cone-beam geometry and are referred to as cone-beam breast CT (CBBCT) systems. The large-area detectors in CBBCT systems are energy-integrating, indirect-type detectors employing a scintillator that converts x-ray photons to light, followed by detection of optical photons. A key consideration that determines the image quality achieved by such CBBCT systems is the choice of scintillator and its performance characteristics. In this work, a framework for analyzing the impact of the scintillator on CBBCT performance and its use for task-specific optimization of CBBCT imaging performance is described.
Correction of scatter in megavoltage cone-beam CT
NASA Astrophysics Data System (ADS)
Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.
2001-03-01
The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.
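A minimal sketch of the kernel-superposition idea follows, assuming a single Gaussian kernel whose width adapts to the mean object thickness; the Monte Carlo-generated kernels and the EPID spectral response of the actual method are not modeled here.

```python
# Minimal sketch of kernel-superposition scatter correction for one projection.
# Kernel shape and parameters are illustrative assumptions, not the paper's
# Monte Carlo kernels.
import numpy as np

def gaussian_kernel(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    k = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def correct_scatter(measured, thickness, spr=0.3, sigma_per_cm=2.0):
    """Estimate scatter as primary convolved with a thickness-adapted kernel."""
    sigma = sigma_per_cm * max(thickness.mean(), 1e-6)
    kernel = gaussian_kernel(measured.shape, sigma)
    primary = measured.copy()
    for _ in range(3):                      # fixed-point iteration
        scatter = spr * np.real(np.fft.ifft2(
            np.fft.fft2(primary) * np.fft.fft2(np.fft.ifftshift(kernel))))
        primary = np.clip(measured - scatter, 0.0, None)
    return primary
```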
SU-E-T-510: Calculation of High Resolution and Material-Specific Photon Energy Deposition Kernels.
Huang, J; Childress, N; Kry, S
2012-06-01
To calculate photon energy deposition kernels (EDKs) used for convolution/superposition dose calculation at a higher resolution than the original Mackie et al. 1988 kernels and to calculate material-specific kernels that describe how energy is transported and deposited by secondary particles when the incident photon interacts in a material other than water. The high resolution EDKs for various incident photon energies were generated using the EGSnrc user-code EDKnrc, which forces incident photons to interact at the center of a 60 cm radius sphere of water. The simulation geometry is essentially the same as the original Mackie calculation but with a greater number of scoring voxels (48 radial, 144 angular bins). For the material-specific EDKs, incident photons were forced to interact at the center of a 1 mm radius sphere of material (lung, cortical bone, silver, or titanium) surrounded by a 60 cm radius water sphere, using the original scoring voxel geometry implemented by Mackie et al. 1988 (24 radial, 48 angular bins). Our Monte Carlo-calculated high resolution EDKs showed excellent agreement with the Mackie kernels, with our kernels providing more information about energy deposition close to the interaction site. Furthermore, our EDKs resulted in smoother dose deposition functions due to the finer resolution and greater number of simulation histories. The material-specific EDK results show that the angular distribution of energy deposition is different for incident photons interacting in different materials. Calculated from the angular dose distribution for 300 keV incident photons, the expected polar angle for dose deposition (
Graded-Index Optics are Matched to Optical Geometry in the Superposition Eyes of Scarab Beetles
NASA Astrophysics Data System (ADS)
McIntyre, P.; Caveney, S.
1985-11-01
Detailed measurements were made of the gradients of refractive index (g.r.i.) and relevant optical properties of the lens components in the ventral superposition eyes of three crepuscular species of the dung-beetle genus Onitis (Scarabaeinae). Each ommatidial lens has two components, a corneal facet and a crystalline cone; in both of these, the gradients provide a significant proportion of the refractive power. The spatial relationship between the lenses and the retina (optical geometry) was also determined. A computer ray-trace model based on these data was used to analyse the optical properties of the lenses and of the eye as a whole. Ray traces were done in two and three dimensions. The ommatidial lenses in all three species are afocal g.r.i. telescopes of low angular magnification. Parallel incident rays emerge approximately parallel for all angles of incidence up to the maximum. The superposition image of a distant point source is a small patch of light about the size of a rhabdom. There are obvious differences in the lens properties of the three species, most significantly in the shape of the refractive-index gradients in the crystalline cone, in the extent of the g.r.i. region in the two lens components and in the front-surface curvature of the corneal facet lens. These give rise to different angular magnifications M of the ommatidial lenses, the values for the three species being 1.7, 1.3, 1.0. This variation in M is matched by a variation in optical geometry, most evident in the different clear-zone widths. As a result, the level of the best superposition image lies close to the retina in the model eyes of all three species. The angular magnification also sets the maximum aperture or pupil of the eye and hence the brightness of the image on the retina. The smaller M, the larger the aperture and the brighter the image. By adopting a suitable value for M and the appropriate eye geometry, an eye can set image brightness and hence sensitivity within a certain range. Differences in the eye design are related to when the beetles fly at dusk. Flight experiments comparing two of the species show that the species with the higher value for M and corresponding lower sensitivity, initiates and terminates its flight earlier in the dusk than the other species with 2.8 times the sensitivity.
SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M; Jiang, S; Lu, W
Purpose: To propose a hybrid method that combines the advantages of the model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam properties accurately but lacks the capability of modeling dose deposition in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator, here a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: (1) calculate D-model using CCCS; (2) calculate D-ΔDRT using ΔDRT; (3) combine the two: D = D-model + D-ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and an IMRT plan. The results were compared to dose calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume for phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple commissioning criteria for an independent dose calculator.
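The combination step reduces to a voxel-wise sum; below is a minimal sketch in which `cccs_calc` and `delta_drt_calc` are hypothetical stand-in callables, not the authors' implementations.

```python
# Sketch of the hybrid combination D = D-model + D-dDRT.
def hybrid_dose(ct, plan, cccs_calc, delta_drt_calc):
    d_model = cccs_calc(ct, plan)        # standard model-based CCCS dose
    d_delta = delta_drt_calc(ct, plan)   # ray-traced correction, commissioned
                                         # from (measurement - CCCS) tables
    return d_model + d_delta             # voxel-wise sum on a common grid
```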
Portal scatter to primary dose ratio of 4 to 18 MV photon spectra incident on heterogeneous phantoms
NASA Astrophysics Data System (ADS)
Ozard, Siobhan R.
Electronic portal imagers designed and used to verify the positioning of a cancer patient undergoing radiation treatment can also be employed to measure the in vivo dose received by the patient. This thesis investigates the ratio of the dose from patient-scattered particles to the dose from primary (unscattered) photons at the imaging plane, called the scatter to primary dose ratio (SPR). The composition of the SPR according to the origin of scatter is analyzed more thoroughly than in previous studies. A new analytical method for calculating the SPR is developed and experimentally verified for heterogeneous phantoms. A novel technique that applies the analytical SPR method for in vivo dosimetry with a portal imager is evaluated. Monte Carlo simulation was used to determine the imager dose from patient-generated electrons and photons that scatter one or more times within the object. The database of SPRs reported from this investigation is new, since the contribution from patient-generated electrons was neglected by previous Monte Carlo studies. The SPR from patient-generated electrons was found here to be as large as 0.03. The analytical SPR method relies on the established result that the scatter dose is uniform for an air gap between the patient and the imager that is greater than 50 cm. This method also applies the hypothesis that first-order Compton scatter alone is sufficient for scatter estimation. A comparison of analytical and measured SPRs for neck, thorax, and pelvis phantoms showed that the maximum difference was within +/-0.03, and the mean difference was less than +/-0.01 for most cases. This accuracy was comparable to similar analytical approaches that are limited to homogeneous phantoms. The analytical SPR method could replace lookup tables of measured scatter doses that can require significant time to measure. In vivo doses were calculated by combining our analytical SPR method and the convolution/superposition algorithm. Our calculated in vivo doses agreed within +/-3% with the doses measured in the phantom. The present in vivo method was faster compared to other techniques that use convolution/superposition. Our method is a feasible and satisfactory approach that contributes to on-line patient dose monitoring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spadea, Maria Francesca, E-mail: mfspadea@unicz.it; Verburg, Joost Mathias; Seco, Joao
2014-01-15
Purpose: The aim of the study was to evaluate the dosimetric impact of low-Z and high-Z metallic implants on IMRT plans. Methods: Computed tomography (CT) scans of three patients were analyzed to study effects due to the presence of Titanium (low-Z), Platinum and Gold (high-Z) inserts. To eliminate artifacts in CT images, a sinogram-based metal artifact reduction algorithm was applied. IMRT dose calculations were performed on both the uncorrected and corrected images using a commercial planning system (convolution/superposition algorithm) and an in-house Monte Carlo platform. Dose differences between uncorrected and corrected datasets were computed and analyzed using the gamma index (Pγ<1), setting 2 mm and 2% as distance-to-agreement and dose-difference criteria, respectively. Beam-specific depth dose profiles across the metal were also examined. Results: Dose discrepancies between corrected and uncorrected datasets were not significant for the low-Z material. High-Z materials caused underdosage of 20%-25% in the region surrounding the metal and overdosage of 10%-15% downstream of the hardware. The gamma index test yielded Pγ<1 > 99% for all low-Z cases, while for high-Z cases it returned 91% < Pγ<1 < 99%. Analysis of the depth dose curve of a single beam for the low-Z cases revealed that, although the dose attenuation is altered inside the metal, it does not differ downstream of the insert. However, for high-Z metal implants the dose is increased up to 10%-12% around the insert. In addition, the Monte Carlo method was more sensitive to the presence of metal inserts than the superposition/convolution algorithm. Conclusions: The reduction of metal artifacts in CT images is dosimetrically relevant for high-Z implants. In this case, dose distributions should be calculated using Monte Carlo algorithms, given their superior accuracy in dose modeling in and around the metal. In addition, knowledge of the composition of metal inserts improves the accuracy of the Monte Carlo dose calculation significantly.
Adaptive intensity modulated radiotherapy for advanced prostate cancer
NASA Astrophysics Data System (ADS)
Ludlum, Erica Marie
The purpose of this research is to develop and evaluate improvements in intensity modulated radiotherapy (IMRT) for concurrent treatment of prostate and pelvic lymph nodes. The first objective is to decrease delivery time while maintaining treatment quality, and evaluate the effectiveness and efficiency of novel one-step optimization compared to conventional two-step optimization. Both planning methods are examined at multiple levels of complexity by comparing the number of beam apertures, or segments, the amount of radiation delivered as measured by monitor units (MUs), and delivery time. One-step optimization is demonstrated to simplify IMRT planning and reduce segments (from 160 to 40), MUs (from 911 to 746), and delivery time (from 22 to 7 min) with comparable plan quality. The second objective is to examine the capability of three commercial dose calculation engines employing different levels of accuracy and efficiency to handle high-Z materials, such as metallic hip prostheses, included in the treatment field. Pencil beam, convolution superposition, and Monte Carlo dose calculation engines are compared by examining the dose differences for patient plans with unilateral and bilateral hip prostheses, and for phantom plans with a metal insert for comparison with film measurements. Convolution superposition and Monte Carlo methods calculate doses that are 1.3% and 34.5% less than the pencil beam method, respectively. Film results demonstrate that Monte Carlo most closely represents actual radiation delivery, but none of the three engines accurately predict the dose distribution when high-Z heterogeneities exist in the treatment fields. The final objective is to improve the accuracy of IMRT delivery by accounting for independent organ motion during concurrent treatment of the prostate and pelvic lymph nodes. A leaf-shifting algorithm is developed to track daily prostate position without requiring online dose calculation. Compared to conventional methods of adjusting patient position, adjusting the multileaf collimator (MLC) leaves associated with the prostate in each segment significantly improves lymph node dose coverage (maintains 45 Gy compared to 42.7, 38.3, and 34.0 Gy for iso-shifts of 0.5, 1 and 1.5 cm). Altering the MLC portal shape is demonstrated as a new and effective solution to independent prostate movement during concurrent treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, K; Leung, R; Law, G
Background: The commercial treatment planning system Pinnacle3 (Philips, Fitchburg, WI, USA) employs a convolution-superposition (CS) algorithm for volumetric-modulated arc radiotherapy (VMAT) optimization and dose calculation. Study of Monte Carlo (MC) dose recalculation of VMAT plans for advanced-stage nasopharyngeal cancers (NPC) is currently limited. Methods: Twenty-nine VMAT plans prescribing 70 Gy, 60 Gy, and 54 Gy to the planning target volumes (PTVs) were included. These clinical plans, achieved with the CS dose engine in Pinnacle3 v9.0, were recalculated by the Monaco TPS v5.0 (Elekta, Maryland Heights, MO, USA) with an XVMC-based MC dose engine. The MC virtual source model was built using the same measured beam dataset as the Pinnacle beam model. All MC recalculations were based on absorbed dose to medium in medium (Dm,m). Differences in dose constraint parameters per our institutional protocol (Supplementary Table 1) were analyzed. Results: Only the differences in maximum dose to the left brachial plexus, left temporal lobe, and PTV54Gy were found to be statistically insignificant (p > 0.05). Dosimetric differences for the other tumor targets and normal organs are given in Supplementary Table 1. Generally, doses outside the PTV in the normal organs are lower with MC than with CS. This is also true for the PTV54-70Gy doses, but a higher dose in the nasal cavity near the bone interfaces is consistently predicted by MC, possibly due to increased backscattering of short-range scattered photons and secondary electrons that is not properly modeled by the CS. The straight shoulders of the PTV dose-volume histograms (DVHs) initially obtained from the CS optimization are merely preserved after MC recalculation. Conclusion: Significant dosimetric differences in VMAT NPC plans were observed between CS and MC calculations. Adjustments of the planning dose constraints to incorporate the physics differences from the conventional CS algorithm should be made when VMAT optimization is carried out directly with an MC dose engine.
Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen
2012-01-01
In this study, the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two methods available in the treatment planning system (TPS), collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose, an inhomogeneous phantom (IMRT thorax phantom) was used and dose curves obtained by the TPS were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom. The dose difference (DD) between the experimental results and the two calculation methods was obtained. Results indicate a maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between the two methods, with the CCC algorithm showing more accurate depth dose curves in tissue heterogeneities. Simulation results show accurate dose estimation by MCNP4C in the soft tissue region of the phantom, and also better results than the ETAR method in bone and lung tissues. PMID:22973081
Use of cone beam computed tomography in periodontology
Acar, Buket; Kamburoğlu, Kıvanç
2014-01-01
Diagnosis of periodontal disease mainly depends on clinical signs and symptoms. However, in the case of bone destruction, radiographs are valuable diagnostic tools as an adjunct to the clinical examination. Two-dimensional periapical and panoramic radiographs are routinely used for diagnosing periodontal bone levels. In two-dimensional imaging, evaluation of bone craters, lamina dura, and periodontal bone level is limited by projection geometry and superposition of adjacent anatomical structures. These limitations of 2D radiographs can be eliminated by three-dimensional imaging techniques such as computed tomography. Cone beam computed tomography (CBCT) generates 3D volumetric images and is also commonly used in dentistry. All CBCT units provide axial, coronal, and sagittal multi-planar reconstructed images without magnification. Also, panoramic images without distortion and magnification can be generated with curved planar reformation. CBCT displays 3D images that are necessary for the diagnosis of intrabony defects, furcation involvements, and buccal/lingual bone destruction. CBCT applications provide obvious benefits in periodontics; however, CBCT should be used only when correctly indicated, considering the necessity and the potential hazards of the examination. PMID:24876918
Hydrograph separation for karst watersheds using a two-domain rainfall-discharge model
Long, Andrew J.
2009-01-01
Highly parameterized, physically based models may be no more effective at simulating the relations between rainfall and outflow from karst watersheds than are simpler models. Here an antecedent rainfall and convolution model was used to separate a karst watershed hydrograph into two outflow components: one originating from focused recharge in conduits and one originating from slow flow in a porous annex system. In convolution, parameters of a complex system are lumped together in the impulse-response function (IRF), which describes the response of the system to an impulse of effective precipitation. Two parametric functions in superposition approximate the two-domain IRF. The outflow hydrograph can be separated into flow components by forward modeling with isolated IRF components, which provides an objective criterion for separation. As an example, the model was applied to a karst watershed in the Madison aquifer, South Dakota, USA. Simulation results indicate that this watershed is characterized by a flashy response to storms, with a peak response time of 1 day, but that 89% of the flow results from the slow-flow domain, with a peak response time of more than 1 year. This long response time may be the result of perched areas that store water above the main water table. Simulation results indicated that some aspects of the system are stationary but that nonlinearities also exist.
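A minimal sketch of the two-domain superposition follows, assuming gamma-shaped IRFs and a single effective-rainfall pulse; all parameter values are illustrative, not fitted to the Madison aquifer data.

```python
# Minimal sketch of two-domain hydrograph separation by forward modeling each
# impulse-response function (IRF) separately; shapes/values are assumptions.
import numpy as np
from scipy.stats import gamma

t = np.arange(0.0, 1000.0)                   # days
irf_quick = gamma.pdf(t, a=2.0, scale=0.5)   # conduit (quick-flow) IRF
irf_slow = gamma.pdf(t, a=2.0, scale=200.0)  # annex-system (slow-flow) IRF

rain = np.zeros_like(t)
rain[10] = 50.0                              # one effective-rainfall pulse

q_quick = np.convolve(rain, irf_quick)[: t.size]
q_slow = 8.0 * np.convolve(rain, irf_slow)[: t.size]  # weight mimicking the
q_total = q_quick + q_slow                   # dominant slow-flow fraction
```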
Wizard CD Plus and ProTaper Universal: analysis of apical transportation using new software
GIANNASTASIO, Daiana; da ROSA, Ricardo Abreu; PERES, Bernardo Urbanetto; BARRETO, Mirela Sangoi; DOTTO, Gustavo Nogara; KUGA, Milton Carlos; PEREIRA, Jefferson Ricardo; SÓ, Marcus Vinícius Reis
2013-01-01
Objective This study has two aims: 1) to evaluate the apical transportation of the Wizard CD Plus and ProTaper Universal after preparation of simulated root canals; 2) to compare, with Adobe Photoshop, the ability of a new software (Regeemy) in superposing and subtracting images. Material and Methods Twenty five simulated root canals in acrylic-resin blocks (with 20º curvature) underwent cone beam computed tomography before and after preparation with the rotary systems (70 kVp, 4 mA, 10 s and with the 8×8 cm FoV selection). Canals were prepared up to F2 (ProTaper) and 24.04 (Wizard CD Plus) instruments and the working length was established to 15 mm. The tomographic images were imported into iCAT Vision software and CorelDraw for standardization. The superposition of pre- and post-instrumentation images from both systems was performed using Regeemy and Adobe Photoshop. The apical transportation was measured in millimetres using Image J. Five acrylic resin blocks were used to validate the superposition achieved by the software. Student's t-test for independent samples was used to evaluate the apical transportation achieved by the rotary systems using each software individually. Student's t-test for paired samples was used to compare the ability of each software in superposing and subtracting images from one rotary system per time. Results The values obtained with Regeemy and Adobe Photoshop were similar to rotary systems (P>0.05). ProTaper Universal and Wizard CD Plus promoted similar apical transportation regardless of the software used for image's superposition and subtraction (P>0.05). Conclusion Wizard CD Plus and ProTaper Universal promoted little apical transportation. Regeemy consists in a feasible software to superpose and subtract images and appears to be an alternative to Adobe Photoshop. PMID:24212994
Discrete surface roughness effects on a blunt hypersonic cone in a quiet tunnel
NASA Astrophysics Data System (ADS)
Sharp, Nicole; White, Edward
2013-11-01
The mechanisms by which surface roughness creates boundary-layer disturbances in hypersonic flow are little understood. Work by Reshotko (AIAA 2008-4294) and others suggests that transient growth, resulting from the superposition of decaying non-orthogonal modes, may be responsible. The present study examines transient growth experimentally using a smooth 5-degree half-angle conic frustum paired with blunted nosetips with and without an azimuthal array of discrete roughness elements. A combination of hotwire anemometry and Pitot measurements in the low-disturbance Mach 6 Quiet Tunnel are used for boundary layer profiles downstream of the ring of roughness elements as well as azimuthal measurements to examine the high- and low-speed streaks characteristic of transient growth of stationary roughness-induced disturbances.
Yu, Jian-Hong; Lo, Lun-Jou; Hsu, Pin-Hsin
2017-01-01
This study integrates cone-beam computed tomography (CBCT)/laser scan image superposition, computer-aided design (CAD), and 3D printing (3DP) to develop a technology for producing customized dental (orthodontic) miniscrew surgical templates using polymer material. Maxillary bone solid models, with the bone and teeth reconstructed from CBCT images and the teeth and mucosa outer profiles acquired by laser scanning, were superimposed to allow visual miniscrew insertion planning and to permit surgical template fabrication. The customized surgical template CAD model was fabricated by offsetting the teeth/mucosa/bracket contour profiles in the superimposition model and exported to produce the plastic template using the 3DP technique and polymer material. An anterior retraction and intrusion clinical test for the maxillary canines/incisors showed that two miniscrews were placed safely and did not produce inflammation or other discomfort symptoms one week after surgery. The fit between the mucosa and template showed average gap sizes smaller than 0.5 mm, confirming that the surgical template provided good holding power and well-fitting adaptation. This study showed that integrating CBCT/laser scan image superposition with CAD and 3DP techniques can be applied to fabricate an accurate customized surgical template for dental orthodontic miniscrews. PMID:28280726
Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny
2011-01-01
Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the XiO planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
TU-AB-303-08: GPU-Based Software Platform for Efficient Image-Guided Adaptive Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, S; Robinson, A; McNutt, T
2015-06-15
Purpose: In this study, we develop an integrated software platform for adaptive radiation therapy (ART) that combines fast and accurate image registration, segmentation, and dose computation/accumulation methods. Methods: The proposed system consists of three key components: 1) deformable image registration (DIR), 2) automatic segmentation, and 3) dose computation/accumulation. The computationally intensive modules, including DIR and dose computation, have been implemented on a graphics processing unit (GPU). All required patient-specific data, including the planning CT (pCT) with contours, daily cone-beam CTs, and the treatment plan, are automatically queried and retrieved from their own databases. To improve the accuracy of DIR between the pCT and CBCTs, we use the double force demons DIR algorithm in combination with iterative CBCT intensity correction by local intensity histogram matching. Segmentation of the daily CBCT is then obtained by propagating contours from the pCT. The daily dose delivered to the patient is computed on the registered pCT by a GPU-accelerated superposition/convolution algorithm. Finally, computed daily doses are accumulated to show the total delivered dose to date. Results: Since the accuracy of DIR critically affects the quality of the other processes, we first evaluated our DIR method on eight head-and-neck cancer cases and compared its performance. Normalized mutual information (NMI) and normalized cross-correlation (NCC) were computed as similarity measures, and our method produced an overall NMI of 0.663 and NCC of 0.987, outperforming conventional methods by 3.8% and 1.9%, respectively. Experimental results show that our registration method is more consistent and robust than existing algorithms, and also computationally efficient. Computation time at each fraction was around one minute (30-50 seconds for registration and 15-25 seconds for dose computation). Conclusion: We developed an integrated GPU-accelerated software platform that enables accurate and efficient DIR, auto-segmentation, and dose computation, thus supporting an efficient ART workflow. This work was supported by NIH/NCI under grant R42CA137886.
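For reference, the two similarity measures quoted above can be computed as follows; this is a toy numpy sketch, not the platform's GPU implementation.

```python
# Toy implementations of the NCC and NMI similarity measures used above.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def nmi(a, b, bins=64):
    """Normalized mutual information, (H(A)+H(B))/H(A,B)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (hx + hy) / hxy
```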
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J; Followill, D; Howell, R
2015-06-15
Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and three metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, P; Ma, L
Purpose: To study the feasibility of treating multiple brain tumors with a large number of noncoplanar IMRT beams. Methods: Thirty beams are selected from 390 deliverable beams separated by six degrees in 4pi space. Beam selection optimization is based on a column generation algorithm. The MLC leaf size is 2 mm. Dose matrices are calculated with a collapsed cone convolution and superposition method on a 2 mm by 2 mm by 2 mm grid. Twelve brain tumors of various shapes, sizes, and locations are used to generate four plans treating 3, 6, 9, and 12 tumors. The radiation dose was 20 Gy prescribed to the 100% isodose line. Dose volume histograms for tumor and brain were compared. Results: All results are based on a 2 mm by 2 mm by 2 mm CT grid. For the 3, 6, 9, and 12 tumor plans, minimum tumor doses are all 20 Gy. Mean tumor doses are 20.0, 20.1, 20.1 and 20.1 Gy. Maximum tumor doses are 23.3, 23.6, 25.4 and 25.4 Gy. Mean ventricle doses are 0.7, 1.7, 2.4 and 3.1 Gy. Mean subventricular zone doses are 0.8, 1.3, 2.2 and 3.2 Gy. Average equivalent uniform dose (gEUD) values for tumor are 20.1, 20.1, 20.2 and 20.2 Gy. The conformity index (CI) values are close to 1 for all 4 plans. The gradient index (GI) values are 2.50, 2.05, 2.09 and 2.19. Conclusion: Compared with published Gamma Knife treatment studies, the noncoplanar IMRT treatment plan is superior in terms of dose conformity. Due to the maximum limit of beams per plan, Gamma Knife has to treat multiple tumors separately in different plans. Noncoplanar IMRT plans can theoretically be delivered in a single plan on any modern linac with an automated couch and image guidance. This warrants further study of noncoplanar IMRT as a viable treatment solution for multiple brain tumors.
SU-E-T-117: Analysis of the ArcCHECK Dosimetry Gamma Failure Using the 3DVH System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, S; Choi, W; Lee, H
2015-06-15
Purpose: To evaluate gamma analysis failures in VMAT patient-specific QA using the ArcCHECK cylindrical phantom. The 3DVH system (Sun Nuclear, FL) was used to analyze the dose difference statistics between the measured dose and the treatment planning system calculated dose. Methods: Four cases of gamma analysis failure were selected retrospectively. Our institution's gamma analysis indexes were absolute dose, 3%/3mm, and a 90% pass rate for ArcCHECK dosimetry. The collapsed cone convolution superposition (CCCS) dose calculation algorithm for VMAT was used. Dose delivery was performed with an Elekta Agility. The A1SL chamber (Standard Imaging, WI) and cavity plug were used for point dose measurement. Delivery QA plans and images were used as 3DVH reference data instead of the patient plan and image. The measured '.txt' file was used for comparison at the diodes to acquire a global dose level. The '.acml' file was used for AC-PDP and to calculate the point dose. Results: The global dose of 3DVH was calculated as 1.10 Gy, 1.13 Gy, 1.01 Gy, and 0.2 Gy, respectively. The 0.2 Gy global dose case was caused by a distance discrepancy. The TPS calculated point dose was 2.33 Gy to 2.77 Gy and the 3DVH calculated dose was 2.33 Gy to 2.68 Gy. The maximum dose differences were -2.83% and -3.1% for TPS vs. measured dose and TPS vs. 3DVH calculated dose, respectively, in the same case. The difference between measured and 3DVH was 0.1% in that case. The 3DVH gamma pass rate was 98% to 99.7%. Conclusion: We found the TPS calculation error by 3DVH calculation using the ArcCHECK measured dose. It seems that our CCCS algorithm RTP system overestimated dose in the central region and underestimated scattering at the peripheral diode detector points. Relative gamma analysis and point dose measurement are recommended for VMAT DQA in gamma failure cases of ArcCHECK dosimetry.
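To make the 3%/3mm pass-rate criterion concrete, a toy one-dimensional gamma-index evaluation is sketched below; production QA tools operate on 3D dose grids and measured diode positions, and all profile shapes here are assumptions.

```python
# Toy 1D gamma-index evaluation (3%/3 mm, global normalization).
import numpy as np

def gamma_1d(ref, meas, x, dd=0.03, dta=3.0):
    """ref, meas: doses at positions x (mm); returns per-point gamma."""
    d_norm = dd * ref.max()                  # global dose criterion
    g = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, meas)):
        term = ((x - xi) / dta) ** 2 + ((ref - di) / d_norm) ** 2
        g[i] = np.sqrt(term.min())           # best match over the reference
    return g

x = np.linspace(-50, 50, 201)
ref = np.exp(-(x / 30.0) ** 2)               # stand-in reference profile
meas = 1.01 * np.exp(-((x - 0.5) / 30.0) ** 2)
pass_rate = float((gamma_1d(ref, meas, x) <= 1.0).mean())  # compare to 0.90
```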
Development of a fast and feasible spectrum modeling technique for flattening filter free beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Woong; Bush, Karl; Mok, Ed
Purpose: To develop a fast and robust technique for the determination of optimized photon spectra for flattening filter free (FFF) beams to be applied in convolution/superposition dose calculations. Methods: A two-step optimization method was developed to derive optimal photon spectra for FFF beams. In the first step, a simple functional form of the photon spectra proposed by Ali ['Functional forms for photon spectra of clinical linacs,' Phys. Med. Biol. 57, 31-50 (2011)] is used to determine generalized shapes of the photon spectra. In this method, the photon spectra were defined over a range of field sizes to account for the variation of the scattered-photon contribution with field size. Percent depth doses (PDDs) for each field size were measured and calculated to define a cost function, and a collapsed cone convolution (CCC) algorithm was used to calculate the PDDs. In the second step, the generalized functional form of the photon spectra was fine-tuned in a process whereby the weights of the photon fluence became the free optimization parameters. A line search method was used for the optimization, and first order derivatives with respect to the optimization parameters were derived from the CCC algorithm to enhance the speed of the optimization. The derived photon spectra were evaluated, and the dose distributions using the optimized spectra were validated. Results: The optimal spectra demonstrate small variations with field size for the 6 MV FFF beam and relatively large variations for the 10 MV FFF beam. The mean energies of the optimized 6 MV FFF spectra decreased from 1.31 MeV for a 3 × 3 cm² field to 1.21 MeV for a 40 × 40 cm² field, and from 2.33 MeV at 3 × 3 cm² to 2.18 MeV at 40 × 40 cm² for the 10 MV FFF beam. The developed method significantly improved the agreement between the calculated and measured PDDs. Root mean square differences in the optimized PDDs ranged from 0.41% (3 × 3 cm²) down to 0.21% (40 × 40 cm²) for the 6 MV FFF beam, and from 0.35% (3 × 3 cm²) down to 0.29% (40 × 40 cm²) for the 10 MV FFF beam. The first order derivatives from the functional form were found to improve the computational speed by up to 20 times compared to the other techniques. Conclusions: The derived photon spectra resulted in good agreement with measured PDDs over the range of field sizes investigated. The suggested method is easily applicable to commercial radiation treatment planning systems since it only requires measured PDDs as input.
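A minimal sketch of the second (fluence-weight) optimization step follows, assuming precomputed monoenergetic PDDs; non-negative least squares is used here in place of the paper's derivative-based line search, and all arrays are stand-ins.

```python
# Sketch: fit per-bin fluence weights so the weighted sum of monoenergetic
# PDDs matches a measured PDD. pdd_mono would come from a CCC dose engine;
# here it is random stand-in data.
import numpy as np
from scipy.optimize import nnls

depths, bins = 100, 21
pdd_mono = np.abs(np.random.rand(bins, depths))  # stand-in monoenergetic PDDs
w_true = np.random.rand(bins)
pdd_meas = w_true @ pdd_mono                     # stand-in "measurement"

w_fit, residual = nnls(pdd_mono.T, pdd_meas)     # non-negative fluence weights
spectrum = w_fit / w_fit.sum()                   # normalized photon spectrum
```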
Strong-field ionization with twisted laser pulses
NASA Astrophysics Data System (ADS)
Paufler, Willi; Böning, Birger; Fritzsche, Stephan
2018-04-01
We apply quantum trajectory Monte Carlo computations in order to model strong-field ionization of atoms by twisted Bessel pulses and calculate photoelectron momentum distributions (PEMD). Since Bessel beams can be considered as an infinite superposition of circularly polarized plane waves with the same helicity, whose wave vectors lie on a cone, we compared the PEMD of such Bessel pulses to those of a circularly polarized pulse. We focus on the momentum distributions in the propagation direction of the pulse and show how these momentum distributions are affected by experimentally accessible parameters, such as the opening angle of the beam or the impact parameter of the atom with regard to the beam axis. In particular, we show that we can find higher momenta of the photoelectrons if the opening angle is increased.
Wavespace-Based Coherent Deconvolution
NASA Technical Reports Server (NTRS)
Bahr, Christopher J.; Cattafesta, Louis N., III
2012-01-01
Array deconvolution is commonly used in aeroacoustic analysis to remove the influence of a microphone array's point spread function from a conventional beamforming map. Unfortunately, the majority of deconvolution algorithms assume that the acoustic sources in a measurement are incoherent, which can be problematic for some aeroacoustic phenomena with coherent, spatially-distributed characteristics. While several algorithms have been proposed to handle coherent sources, some are computationally intractable for many problems while others require restrictive assumptions about the source field. Newer generalized inverse techniques hold promise, but are still under investigation for general use. An alternate coherent deconvolution method is proposed based on a wavespace transformation of the array data. Wavespace analysis offers advantages over curved-wave array processing, such as providing an explicit shift-invariance in the convolution of the array sampling function with the acoustic wave field. However, usage of the wavespace transformation assumes the acoustic wave field is accurately approximated as a superposition of plane wave fields, regardless of true wavefront curvature. The wavespace technique leverages Fourier transforms to quickly evaluate a shift-invariant convolution. The method is derived for and applied to ideal incoherent and coherent plane wave fields to demonstrate its ability to determine magnitude and relative phase of multiple coherent sources. Multi-scale processing is explored as a means of accelerating solution convergence. A case with a spherical wave front is evaluated. Finally, a trailing edge noise experiment case is considered. Results show the method successfully deconvolves incoherent, partially-coherent, and coherent plane wave fields to a degree necessary for quantitative evaluation. Curved wave front cases warrant further investigation. A potential extension to nearfield beamforming is proposed.
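The shift-invariance the method exploits permits fast FFT evaluation of the convolution; below is a minimal sketch, assuming both fields are sampled on a common 2D grid (function names are illustrative).

```python
# Sketch of a shift-invariant wavespace convolution: the plane-wave source
# field convolved with the array sampling function via FFT.
import numpy as np

def wavespace_response(source_field, sampling_function):
    """Both inputs sampled on the same 2D wavenumber grid."""
    s = np.fft.fft2(source_field)
    h = np.fft.fft2(sampling_function)
    return np.real(np.fft.ifft2(s * h))  # fast evaluation of the convolution
```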
NASA Astrophysics Data System (ADS)
Matheus, B. R. N.; Centurion, B. S.; Rubira-Bullen, I. R. F.; Schiabel, H.
2017-03-01
Cone beam computed tomography (CBCT), a kind of face and neck exam, can be an opportunity to identify, as an incidental finding, calcifications of the carotid artery (CACA). Given the similarity of CACA to calcifications found in several x-ray exams, this work suggests that a technique designed to detect breast calcifications in mammography images could be adapted to detect such calcifications in CBCT. The method used a 3D version of the calcification detection technique [1], based on signal enhancement using a convolution with a 3D Laplacian of Gaussian (LoG) function, followed by removing the high-contrast bone structure from the image. Initial promising results show a 71% sensitivity with 0.48 false positives per exam.
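A minimal sketch of the enhancement step follows, assuming scipy's `gaussian_laplace`; the sigma, HU, and detection thresholds are illustrative, not the authors' calibration.

```python
# Sketch of 3D Laplacian-of-Gaussian enhancement with bone removal.
import numpy as np
from scipy.ndimage import gaussian_laplace

def enhance_calcifications(volume, sigma=1.5, bone_threshold=700.0):
    """Bright-blob response; high-intensity bone voxels are zeroed out."""
    response = -gaussian_laplace(volume.astype(float), sigma=sigma)
    response[volume > bone_threshold] = 0.0  # remove high-contrast bone
    return response

vol = np.random.rand(64, 64, 64) * 1000.0    # stand-in CBCT volume
candidates = enhance_calcifications(vol) > 50.0  # assumed detection threshold
```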
Naqvi, Shahid A; D'Souza, Warren D
2005-04-01
Current methods to calculate dose distributions with organ motion can be broadly classified as "dose convolution" and "fluence convolution" methods. In the former, a static dose distribution is convolved with the probability distribution function (PDF) that characterizes the motion. However, artifacts are produced near the surface and around inhomogeneities because the method assumes shift invariance. Fluence convolution avoids these artifacts by convolving the PDF with the incident fluence instead of the patient dose. In this paper we present an alternative method that improves the accuracy, generality as well as the speed of dose calculation with organ motion. The algorithm starts by sampling an isocenter point from a parametrically defined space curve corresponding to the patient-specific motion trajectory. Then a photon is sampled in the linac head and propagated through the three-dimensional (3-D) collimator structure corresponding to a particular MLC segment chosen randomly from the planned IMRT leaf sequence. The photon is then made to interact at a point in the CT-based simulation phantom. Randomly sampled monoenergetic kernel rays issued from this point are then made to deposit energy in the voxels. Our method explicitly accounts for MLC-specific effects (spectral hardening, tongue-and-groove, head scatter) as well as changes in SSD with isocentric displacement, assuming that the body moves rigidly with the isocenter. Since the positions are randomly sampled from a continuum, there is no motion discretization, and the computation takes no more time than a static calculation. To validate our method, we obtained ten separate film measurements of an IMRT plan delivered on a phantom moving sinusoidally, with each fraction starting with a random phase. For 2 cm motion amplitude, we found that a ten-fraction average of the film measurements gave an agreement with the calculated infinite fraction average to within 2 mm in the isodose curves. The results also corroborate the existing notion that the interfraction dose variability due to the interplay between the MLC motion and breathing motion averages out over typical multifraction treatments. Simulation with motion waveforms more representative of real breathing indicate that the motion can produce penumbral spreading asymmetric about the static dose distributions. Such calculations can help a clinician decide to use, for example, a larger margin in the superior direction than in the inferior direction. In the paper we demonstrate that a 15 min run on a single CPU can readily illustrate the effect of a patient-specific breathing waveform, and can guide the physician in making informed decisions about margin expansion and dose escalation.
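A minimal sketch of the continuum motion sampling described above, assuming a sinusoidal trajectory; `deposit_history`, which stands in for the transport of one photon history with the geometry rigidly offset, is hypothetical.

```python
# Sketch: each photon history draws a time point on the motion trajectory, so
# the motion PDF is never discretized and no extra cost accrues per history.
import numpy as np

def sample_isocenter_shift(amplitude_cm=2.0, period_s=4.0):
    t = np.random.uniform(0.0, period_s)     # random phase per history
    return amplitude_cm * np.sin(2.0 * np.pi * t / period_s)

def run_histories(n, deposit_history):
    for _ in range(n):
        shift = sample_isocenter_shift()
        deposit_history(shift)               # transport one photon, offset

run_histories(1000, lambda shift: None)      # placeholder transport kernel
```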
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez-Castano, D. M.; Gonzalez, L. Brualla; Gago-Arias, M. A.
2012-01-15
Purpose: This work presents an alternative methodology for obtaining correction factors for ionization chamber (IC) dosimetry of small fields and composite fields such as IMRT. The method is based on the convolution/superposition (C/S) of an IC response function (RF) with the dose distribution in a certain plane which includes the chamber position. This method is an alternative to the full Monte Carlo (MC) approach that has been used previously by many authors for the same objective. Methods: The readout of an IC at a point inside a phantom irradiated by a certain beam can be obtained as the convolution of the dose spatial distribution caused by the beam and the IC two-dimensional RF. The proposed methodology has been applied successfully to predict the response of a PTW 30013 IC when measuring different nonreference fields, namely: output factors of 6 MV small fields, beam profiles of cobalt-60 narrow fields, and 6 MV radiosurgery segments. The two-dimensional RF of a PTW 30013 IC was obtained by MC simulation of the absorbed dose to cavity air when the IC was scanned by a 0.6 x 0.6 mm² cross section parallel pencil beam at low depth in a water phantom. For each of the cases studied, the results of the IC direct measurement were compared with those obtained by the C/S method. Results: For all of the cases studied, the agreement between the IC direct measurement and the IC calculated response was excellent (better than 1.5%). Conclusions: This method could be implemented in TPSs in order to calculate dosimetric correction factors when an experimental IMRT treatment verification with an in-phantom ionization chamber is performed. The mis-response of the IC due to the nonreference conditions could be quickly corrected by this method rather than employing MC derived correction factors. This method can be considered as an alternative to the plan-class associated correction factors proposed recently as part of an IAEA work group on nonstandard field dosimetry.
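A minimal sketch of the C/S prediction follows, assuming the dose plane and RF are sampled on a common grid; the RF shape here is a toy stand-in, not the Monte Carlo-derived PTW 30013 response.

```python
# Sketch: predicted chamber reading = 2D convolution of the planar dose
# distribution with the chamber response function, read at the chamber point.
import numpy as np
from scipy.signal import fftconvolve

def predicted_reading(dose_plane, response_function, chamber_index):
    conv = fftconvolve(dose_plane, response_function, mode='same')
    return conv[chamber_index]            # reading at the chamber location

dose = np.random.rand(201, 201)           # e.g. a 6 MV segment dose plane
rf = np.ones((25, 5)); rf /= rf.sum()     # toy elongated RF, normalized
reading = predicted_reading(dose, rf, (100, 100))
```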
Aquifer response to stream-stage and recharge variations. I. Analytical step-response functions
Moench, A.F.; Barlow, P.M.
2000-01-01
Laplace transform step-response functions are presented for various homogeneous confined and leaky aquifer types and for anisotropic, homogeneous unconfined aquifers interacting with perennial streams. Flow is one-dimensional, perpendicular to the stream in the confined and leaky aquifers, and two-dimensional in a plane perpendicular to the stream in the water-table aquifers. The stream is assumed to penetrate the full thickness of the aquifer. The aquifers may be semi-infinite or finite in width and may or may not be bounded at the stream by a semipervious streambank. The solutions are presented in a unified manner so that mathematical relations among the various aquifer configurations are clearly demonstrated. The Laplace transform solutions are inverted numerically to obtain the real-time step-response functions for use in the convolution (or superposition) integral. To maintain linearity in the case of unconfined aquifers, fluctuations in the elevation of the water table are assumed to be small relative to the saturated thickness, and vertical flow into or out of the zone above the water table is assumed to occur instantaneously. Effects of hysteresis in the moisture distribution above the water table are therefore neglected. Graphical comparisons of the new solutions are made with known closed-form solutions.
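A minimal sketch of using a step-response function in the convolution integral follows, with a toy exponential step response standing in for the numerically inverted Laplace-transform solutions.

```python
# Sketch: differentiate the step response to get the impulse response, then
# superpose aquifer-head responses to a stream-stage time series.
import numpy as np

def head_response(step_response, stage, dt=1.0):
    irf = np.gradient(step_response, dt)      # impulse response
    return np.convolve(stage, irf)[: stage.size] * dt

t = np.linspace(0.0, 10.0, 200)
step = 1.0 - np.exp(-t / 2.0)                 # toy step-response function
stage = np.zeros(200); stage[20:40] = 0.5     # stream-stage pulse (m)
head = head_response(step, stage, dt=t[1] - t[0])
```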
Retrieving rupture history using waveform inversions in time sequence
NASA Astrophysics Data System (ADS)
Yi, L.; Xu, C.; Zhang, X.
2017-12-01
The rupture history of a large earthquake is generally reconstructed by waveform inversion of seismological waveform records. In the waveform inversion, based on the superposition principle, the rupture process is linearly parameterized. After discretizing the fault plane into sub-faults, the local source time function of each sub-fault is usually parameterized using the multi-time-window method, e.g., mutually overlapping triangular functions. The forward waveform of each sub-fault is then synthesized by convolving its source time function with its Green's function. According to the superposition principle, the forward waveforms generated across the fault plane are superposed in the recorded waveforms after aligning the arrival times. The slip history is then retrieved with the waveform inversion method after superposing all forward waveforms for each corresponding seismological record. Apart from isolating the forward waveforms generated by each sub-fault, we also recognize that these waveforms are superimposed gradually and sequentially in the recorded waveforms. We therefore propose that the rupture model can be decomposed into sequential rupture times. According to the constrained waveform length method emphasized in our previous work, the length of the inverted waveforms used in the inversion is objectively constrained by the rupture velocity and rise time. One essential prior condition is the predetermined fault plane, which limits the duration of the rupture; that is, the waveform inversion is restricted to a pre-set rupture duration. We therefore propose a strategy to invert the rupture process sequentially, using progressively shifted rupture times as the rupture front expands across the fault plane. We designed a synthetic inversion to test the feasibility of the method. Our test results show the promise of this idea, which requires further investigation.
Cadetti, Lucia; Bartoletti, Theodore M.; Thoreson, Wallace B.
2012-01-01
At the photoreceptor ribbon synapse, glutamate released from vesicles at different positions along the ribbon reaches the same postsynaptic receptors. Thus, vesicles may not exert entirely independent effects. We examined whether responses of salamander retinal horizontal cells evoked by light or direct depolarization during paired recordings could be predicted by summation of individual miniature excitatory postsynaptic currents (mEPSCs). For EPSCs evoked by depolarization of rods or cones, linear convolution of mEPSCs with photoreceptor release functions predicted EPSC waveforms and changes caused by inhibiting glutamate receptor desensitization. A low-affinity glutamate antagonist, kynurenic acid (KynA), preferentially reduced later components of rod-driven EPSCs, suggesting lower levels of glutamate are present during the later sustained component of the EPSC. A glutamate-scavenging enzyme, glutamic-pyruvic transaminase, did not inhibit mEPSCs or the initial component of rod-driven EPSCs, but reduced later components of the EPSC. Inhibiting glutamate uptake with a low concentration of dl-threo-β-benzoyloxyaspartate (TBOA) also did not alter mEPSCs or the initial component of rod-driven EPSCs, but enhanced later components of the EPSC. Low concentrations of TBOA and KynA did not affect the kinetics of fast cone-driven EPSCs. Under both rod- and cone-dominated conditions, light-evoked currents (LECs) were enhanced considerably by TBOA. LECs were more strongly inhibited than EPSCs by KynA, suggesting the presence of lower glutamate levels. Collectively, these results indicate that the initial EPSC component can be largely predicted from a linear sum of individual mEPSCs, but with sustained release, residual amounts of glutamate from multiple vesicles pool together, influencing LECs and later components of EPSCs. PMID:18547244
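A minimal sketch of the linear prediction EPSC(t) = (release rate * mEPSC)(t) follows; waveform shapes and rates are illustrative, not the recorded salamander data.

```python
# Sketch: predict an EPSC as the convolution of a vesicle release-rate
# function with a single mEPSC waveform.
import numpy as np

dt = 0.1                                        # ms
t = np.arange(0.0, 50.0, dt)
mepsc = np.exp(-t / 2.0) - np.exp(-t / 0.5)     # toy mEPSC waveform (pA)

release_rate = np.zeros_like(t)
release_rate[(t > 5.0) & (t < 10.0)] = 20.0     # vesicles/ms during a pulse

epsc = np.convolve(release_rate, mepsc)[: t.size] * dt  # predicted EPSC
```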
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kieselmann, J; Bartzsch, S; Oelfke, U
Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel-based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed within less than 4% deviation, the proposed approach surpassed the simple convolution algorithm in scatter-dose accuracy by a factor of up to 3. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve the accuracy of dose calculation relative to the CA method, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant reductions in calculation time.
NASA Astrophysics Data System (ADS)
Ji, Y.; Shen, C.
2014-03-01
With consideration of magnetic field line curvature (FLC) pitch angle scattering and charge exchange reactions, the loss rates of O+ (>300 keV) in the inner magnetosphere are investigated using an eigenfunction analysis. FLC scattering provides a mechanism for ring current O+ to enter the loss cone and thus influences the loss rates caused by charge exchange reactions. Assuming that the pitch angle change is small for each scattering event, the diffusion equation including a charge exchange term is constructed and solved, and the eigenvalues of the equation are identified. The resultant loss rates of O+ are approximately equal to the linear superposition of the loss rate without charge exchange reactions and the loss rate associated with charge exchange reactions alone. The loss time is consistent with observations from the early recovery phases of magnetic storms.
On static triplet structures in fluids with quantum behavior.
Sesé, Luis M
2018-03-14
The problem of the equilibrium triplet structures in fluids with quantum behavior is discussed. Theoretical questions of interest to the real space structures are addressed by studying the three types of structures that can be determined via path integrals (instantaneous, centroid, and total thermalized-continuous linear response). The cases of liquid para-H2 and liquid neon on their crystallization lines are examined with path-integral Monte Carlo simulations, the focus being on the instantaneous and the centroid triplet functions (equilateral and isosceles configurations). To analyze the results further, two standard closures, Kirkwood superposition and Jackson-Feenberg convolution, are utilized. In addition, some pilot calculations with path integrals and closures of the instantaneous triplet structure factor of liquid para-H2 are also carried out for the equilateral components. Triplet structural regularities connected to the pair radial structures are identified, a remarkable usefulness of the closures employed is observed (e.g., triplet spatial functions for medium-long distances, triplet structure factors for medium k wave numbers), and physical insight into the role of pair correlations near quantum crystallization is gained.
Analytical solutions to non-Fickian subsurface dispersion in uniform groundwater flow
Zou, S.; Xia, J.; Koussis, Antonis D.
1996-01-01
Analytical solutions are obtained by the Fourier transform technique for the one-, two-, and three-dimensional transport of a conservative solute injected instantaneously in a uniform groundwater flow. These solutions account for dispersive non-linearity caused by the heterogeneity of the hydraulic properties of aquifer systems and can be used as building blocks to construct solutions by convolution (principle of superposition) for source conditions other than slug injection. The dispersivity is assumed to vary parabolically with time and is thus constant for the entire system at any given time. Two approaches for estimating time-dependent dispersion parameters are developed for two-dimensional plumes. They both require minimal field tracer test data and, therefore, represent useful tools for assessing real-world aquifer contamination sites. The first approach requires mapped plume-area measurements at two specific times after the tracer injection. The second approach requires concentration-versus-time data from two sampling wells through which the plume passes. Detailed examples and comparisons with other procedures show that the methods presented herein are sufficiently accurate and easier to use than other available methods.
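The convolution-based superposition mentioned above can be sketched as follows. For simplicity this sketch uses the classical constant-dispersivity one-dimensional slug kernel rather than the paper's parabolic time-dependent dispersivity, so it only illustrates the building-block idea; `slug_1d` and its parameter values are hypothetical.

```python
import numpy as np

def slug_1d(x, t, m=1.0, v=1.0, d=0.1):
    """Concentration from an instantaneous unit slug injected at x=0, t=0."""
    t = np.maximum(t, 1e-12)
    return m / np.sqrt(4 * np.pi * d * t) * np.exp(-(x - v * t) ** 2 / (4 * d * t))

def continuous_source(x, t, rate=1.0, dtau=0.01):
    """Convolve the slug kernel with a constant injection rate over (0, t)."""
    taus = np.arange(dtau / 2, t, dtau)      # injection times
    return sum(rate * slug_1d(x, t - tau) * dtau for tau in taus)

c = continuous_source(x=2.0, t=5.0)
```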
Interrelation of creep and relaxation: a modeling approach for ligaments.
Lakes, R S; Vanderby, R
1999-12-01
Experimental data (Thornton et al., 1997) show that relaxation proceeds more rapidly (a greater slope on a log-log scale) than creep in ligament, a fact not explained by linear viscoelasticity. An interrelation between creep and relaxation is therefore developed for ligaments based on a single-integral nonlinear superposition model. This interrelation differs from the convolution relation obtained by Laplace transforms for linear materials. We demonstrate via continuum concepts of nonlinear viscoelasticity that such a difference in rate between creep and relaxation phenomenologically occurs when the nonlinearity is of a strain-stiffening type, i.e., the stress-strain curve is concave up as observed in ligament. We also show that it is inconsistent to assume a Fung-type constitutive law (Fung, 1972) for both creep and relaxation. Using the published data of Thornton et al. (1997), the nonlinear interrelation developed herein predicts creep behavior from relaxation data well (R ≥ 0.998). Although data are limited and the causal mechanisms associated with viscoelastic tissue behavior are complex, continuum concepts demonstrated here appear capable of interrelating creep and relaxation with fidelity.
NASA Astrophysics Data System (ADS)
Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.
2016-03-01
Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity modulated radiotherapy (IMRT) of the head and neck region was studied. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.
A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.
Neylon, J; Sheng, K; Yu, V; Chen, Q; Low, D A; Kupelian, P; Santhanam, A
2014-10-01
Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters, making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogeneous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy, with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to a voxel-based ground truth CPU benchmark, and a factor of 20 compared with a voxel-based GPU dose convolution method. The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU-computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
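For readers unfamiliar with the 2%/2 mm gamma criterion used in this validation, a brute-force one-dimensional sketch follows (Python/NumPy, hypothetical profiles); production gamma evaluations are three-dimensional and heavily optimized.

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dd=0.02, dta=2.0):
    """Return the gamma index at each reference point (a point passes if gamma <= 1)."""
    d_norm = dd * dose_ref.max()             # global dose-difference criterion
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2        # normalized distance-to-agreement
        diff2 = ((dose_eval - di) / d_norm) ** 2   # normalized dose difference
        gam[i] = np.sqrt((dist2 + diff2).min())
    return gam

x = np.linspace(0, 100, 201)                 # positions [mm]
ref = np.exp(-((x - 50) / 20) ** 2)          # hypothetical reference profile
ev = np.exp(-((x - 51) / 20) ** 2) * 1.01    # slightly shifted/scaled profile
passing = (gamma_1d(x, ref, ev) <= 1).mean() * 100   # percent of points passing
```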
Stability of hypersonic boundary-layer flows with chemistry
NASA Technical Reports Server (NTRS)
Reed, Helen L.; Stuckert, Gregory K.; Haynes, Timothy S.
1993-01-01
The effects of nonequilibrium chemistry and three dimensionality on the stability characteristics of hypersonic flows are discussed. In two-dimensional (2-D) and axisymmetric flows, the inclusion of chemistry causes a shift of the second mode of Mack to lower frequencies. This is found to be due to the increase in size of the region of relative supersonic flow because of the lower speeds of sound in the relatively cooler boundary layers. Although this shift in frequency is present in both the equilibrium and nonequilibrium air results, the equilibrium approximation predicts modes which are not observed in the nonequilibrium calculations (for the flight conditions considered). These modes are superpositions of incoming and outgoing unstable disturbances which travel supersonically relative to the boundary-layer edge velocity. Such solutions are possible because of the finite shock stand-off distance. Their corresponding wall-normal profiles exhibit an oscillatory behavior in the inviscid region between the boundary-layer edge and the bow shock. For the examination of three-dimensional (3-D) effects, a rotating cone is used as a model of a swept wing. An increase of stagnation temperature is found to be only slightly stabilizing. The correlation of transition location (N = 9) with parameters describing the crossflow profile is discussed. Transition location does not correlate with the traditional crossflow Reynolds number. A new parameter that appears to correlate for boundary-layer flow was found. A verification with experiments on a yawed cone is provided.
Cone beam CT of the musculoskeletal system: clinical applications.
Posadzy, Magdalena; Desimpel, Julie; Vanhoenacker, Filip
2018-02-01
The aim of this pictorial review is to illustrate the use of CBCT in a broad spectrum of musculoskeletal disorders and to compare its diagnostic merit with other imaging modalities, such as conventional radiography (CR), Multidetector Computed Tomography (MDCT) and Magnetic Resonance Imaging. Cone Beam Computed Tomography (CBCT) has been widely used for dental imaging for over two decades. Current CBCT equipment allows imaging of various musculoskeletal applications. Because of its low cost and relatively low irradiation, CBCT may have an emerging role in more precise diagnosis, assessment of local extent, and follow-up of fractures and dislocations of small bones and joints. Due to its exquisitely high spatial resolution, CBCT in combination with arthrography may be the preferred technique for the detection and local staging of cartilage lesions in small joints. Evaluation of degenerative joint disorders may be facilitated by CBCT compared to CR, particularly in anatomical areas with much superposition of adjacent bony structures. The use of CBCT in the evaluation of osteomyelitis is restricted to the detection of sequestrum formation in chronic osteomyelitis. Miscellaneous applications include assessment of (symptomatic) variants, and detection and characterization of tumour and tumour-like conditions of bone. • Review the spectrum of MSK disorders in which CBCT may be complementary to other imaging techniques. • Compare the advantages and drawbacks of CBCT compared to other imaging techniques. • Define the present and future role of CBCT in musculoskeletal imaging.
Entanglement-assisted quantum convolutional coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
Dosimetry audit simulation of treatment planning system in multicenters radiotherapy
NASA Astrophysics Data System (ADS)
Kasmuri, S.; Pawiro, S. A.
2017-07-01
The Treatment Planning System (TPS) is an important modality that determines radiotherapy outcome. A TPS requires input data obtained through commissioning, and errors can potentially be introduced at this stage. An error at this stage may result in a systematic error. The aim of this study was to verify TPS dosimetry and determine the deviation range between calculated and measured dose. This study used the CIRS 002LFC phantom, representing the human thorax, and simulated all stages of external beam radiotherapy. The phantom was scanned using a CT scanner, and 8 test cases similar to clinical practice situations were planned and tested in four radiotherapy centers. Dose was measured using a 0.6 cc ionization chamber. The results of this study showed that, in general, the deviation of all test cases in the four centers was within the agreement criteria, with average deviations of about -0.17±1.59%, -1.64±1.92%, 0.34±1.34% and 0.13±1.81%. The conclusion of this study was that all TPSs involved showed good performance. The superposition algorithm showed rather poorer performance than either the analytic anisotropic algorithm (AAA) or the convolution algorithm, with average deviations of about -1.64±1.92%, -0.17±1.59% and -0.27±1.51%, respectively.
Resource Theory of Superposition
NASA Astrophysics Data System (ADS)
Theurer, T.; Killoran, N.; Egloff, D.; Plenio, M. B.
2017-12-01
The superposition principle lies at the heart of many nonclassical properties of quantum mechanics. Motivated by this, we introduce a rigorous resource theory framework for the quantification of superposition of a finite number of linearly independent states. This theory is a generalization of resource theories of coherence. We determine the general structure of operations which do not create superposition, find a fundamental connection to unambiguous state discrimination, and propose several quantitative superposition measures. Using this theory, we show that trace decreasing operations can be completed for free which, when specialized to the theory of coherence, resolves an outstanding open question and is used to address the free probabilistic transformation between pure states. Finally, we prove that linearly independent superposition is a necessary and sufficient condition for the faithful creation of entanglement in discrete settings, establishing a strong structural connection between our theory of superposition and entanglement theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, N; Young, L; Parvathaneni, U
Purpose: The presence of high density dental amalgam in patient CT image data sets causes dose calculation errors for head and neck (HN) treatment planning. This study assesses and compares dosimetric variations in IMRT and VMAT treatment plans due to dental artifacts. Methods: Sixteen HN patients with similar treatment sites (oropharynx), tumor volume and extensive dental artifacts were divided into two groups: IMRT (n=8, 6 to 9 beams) and VMAT (n=8, 2 arcs with 352° rotation). All cases were planned with the Pinnacle 9.2 treatment planning software using the collapsed cone convolution superposition algorithm and a range of prescription doses from 60 to 72 Gy. Two different treatment plans were produced, each based on one of two image sets: (a) uncorrected; (b) dental artifacts density overridden (set to 1.0 g/cm³). Differences between the two treatment plans for each of the IMRT and VMAT techniques were quantified by the following dosimetric parameters: maximum point dose, maximum spinal cord and brainstem dose, mean left and right parotid dose, and PTV coverage (V95%Rx). Average differences generated for these dosimetric parameters were compared between IMRT and VMAT plans. Results: The average absolute dose differences (plan a minus plan b) for the VMAT and IMRT techniques, respectively, caused by dental artifacts were: 2.2±3.3 cGy vs. 37.6±57.5 cGy (maximum point dose, P=0.15); 1.2±0.9 cGy vs. 7.9±6.7 cGy (maximum spinal cord dose, P=0.026); 2.2±2.4 cGy vs. 12.1±13.0 cGy (maximum brainstem dose, P=0.077); 0.9±1.1 cGy vs. 4.1±3.5 cGy (mean left parotid dose, P=0.038); 0.9±0.8 cGy vs. 7.8±11.9 cGy (mean right parotid dose, P=0.136); 0.021%±0.014% vs. 0.803%±1.44% (PTV coverage, P=0.17). Conclusion: For the HN plans studied, dental artifacts demonstrated a greater dose calculation error for IMRT plans compared to VMAT plans. Rotational arcs appear on average to compensate for dose calculation errors induced by dental artifacts. Thus, compared to VMAT, density overrides for dental artifacts are more important when planning IMRT of the HN.
Macroscopicity of quantum superpositions on a one-parameter unitary path in Hilbert space
NASA Astrophysics Data System (ADS)
Volkoff, T. J.; Whaley, K. B.
2014-12-01
We analyze quantum states formed as superpositions of an initial pure product state and its image under local unitary evolution, using two measurement-based measures of superposition size: one based on the optimal quantum binary distinguishability of the branches of the superposition and another based on the ratio of the maximal quantum Fisher information of the superposition to that of its branches, i.e., the relative metrological usefulness of the superposition. A general formula for the effective sizes of these states according to the branch-distinguishability measure is obtained and applied to superposition states of N quantum harmonic oscillators composed of Gaussian branches. Considering optimal distinguishability of pure states on a time-evolution path leads naturally to a notion of distinguishability time that generalizes the well-known orthogonalization times of Mandelstam and Tamm and Margolus and Levitin. We further show that the distinguishability time provides a compact operational expression for the superposition size measure based on the relative quantum Fisher information. By restricting the maximization procedure in the definition of this measure to an appropriate algebra of observables, we show that the superposition size of, e.g., NOON states and hierarchical cat states, can scale linearly with the number of elementary particles comprising the superposition state, implying precision scaling inversely with the total number of photons when these states are employed as probes in quantum parameter estimation of a 1-local Hamiltonian in this algebra.
Tweaked residual convolutional network for face alignment
NASA Astrophysics Data System (ADS)
Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu
2017-08-01
We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module predicts the landmarks quickly, but accurately enough to serve as a preliminary estimate, by taking a low-resolution version of the detected face holistically as input. The following Residual Convolutional Network (RCN) module progressively refines the landmarks by taking as input the local patch extracted around each predicted landmark, which allows the Convolutional Neural Network (CNN) to extract local shape-indexed features to fine-tune the landmark positions. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.
Radiation dose delivery verification in the treatment of carcinoma-cervix
NASA Astrophysics Data System (ADS)
Shrotriya, D.; Kumar, S.; Srivastava, R. N. L.
2015-06-01
The accurate delivery of dose to the clinical target volume in radiotherapy can be affected by various pelvic tissue heterogeneities. An in-house heterogeneous female pelvic phantom was designed and used to verify the consistency and computational capability of a treatment planning system for radiation dose delivery in the treatment of cancer of the cervix. Oncentra 3D-TPS with the collapsed cone convolution (CCC) dose calculation algorithm was used to generate AP/PA and box-field technique plans. The radiation dose was delivered by a Primus linac (Siemens) employing a high-energy 15 MV photon beam with an isocentric technique. A PTW 0.125 cc ionization chamber was used for direct measurements at various reference points in the cervix, bladder and rectum. The study revealed that the maximum variation between computed and measured dose at the cervix reference point was 1% in both techniques, with variations of 3% and 4% for the AP/PA field and 5% and 4.5% for the box technique at the bladder and rectum points, respectively.
Strom, E.W.; Oakley, W.T.
1995-01-01
The cities of Mendenhall and D'Lo, located in Simpson County, rely on ground water for their public supply and industrial needs. Most of the ground water comes from an aquifer of Miocene age. A study began in 1991 to describe the hydrogeology, analyze effects of ground-water withdrawal by making a drawdown map, and estimate the effects increased ground-water withdrawal might have on water levels in the Miocene age aquifer in the Mendenhall-D'Lo area. The most significant withdrawals of ground water in the study area are from 10 wells screened in the lower sand of the Catahoula Formation of Miocene age. Analysis of the effect of withdrawals from the 10 wells was made using the Theis nonequilibrium equation and applying the principle of superposition. Analysis of 1994 conditions was based on the pumpage history and aquifer properties determined for each well. The drawdown surface resulting from the analysis indicates three general cones of depression. One cone is in the northwestern D'Lo area, one in the south-central Mendenhall area, and one about 1-1/2 miles east of Mendenhall. Calculated drawdown ranges from 21 to 47 feet. Potential drawdown-surface maps were made for 10 years and 20 years beyond 1994 using a constant pumpage. The map made for 10 years beyond 1994 indicates an average total increase in drawdown of about 5.3 feet. The map made for 20 years beyond 1994 indicates an average total increase in drawdown of about 7.3 feet.
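The drawdown analysis described above, Theis solutions added by superposition, can be sketched briefly; the transmissivity, storativity, well layout, and pumping rates below are placeholders, not the report's calibrated values.

```python
import numpy as np
from scipy.special import exp1               # Theis well function W(u)

def theis_drawdown(r, t, q, trans, stor):
    """Drawdown [m] at radius r [m] and time t [s] for pumping rate q [m^3/s]."""
    u = r ** 2 * stor / (4.0 * trans * t)
    return q / (4.0 * np.pi * trans) * exp1(u)

wells = [                                     # (x, y, pumping rate), hypothetical
    (0.0, 0.0, 0.02),
    (500.0, 0.0, 0.015),
    (250.0, 400.0, 0.01),
]

def total_drawdown(x, y, t=10 * 365 * 86400, trans=5e-3, stor=2e-4):
    """Superpose the Theis solution for each well at observation point (x, y)."""
    return sum(theis_drawdown(np.hypot(x - xw, y - yw) + 0.1, t, q, trans, stor)
               for xw, yw, q in wells)        # +0.1 m avoids r=0 at a well

s = total_drawdown(100.0, 100.0)
```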
SU-E-J-135: Feasibility of Using Quantitative Cone Beam CT for Proton Adaptive Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jingqian, W; Wang, Q; Zhang, X
2015-06-15
Purpose: To investigate the feasibility of using scatter-corrected cone beam CT (CBCT) for proton adaptive planning. Methods: A phantom study was used to evaluate the CT number difference between the planning CT (pCT), quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units using the adaptive scatter kernel superposition (ASKS) technique, and raw CBCT (rCBCT). After confirming the CT number accuracy, prostate patients, each with a pCT and several sets of weekly CBCT, were investigated for this study. Spot scanning proton treatment plans were independently generated on pCT, qCBCT and rCBCT. The treatment plans were then recalculated on all images. Dose-volume-histogram (DVH) parameters and gamma analysis were used to compare the dose distributions. Results: The phantom study suggested that the Hounsfield unit accuracy for different materials is within 20 HU for qCBCT and over 250 HU for rCBCT. For prostate patients, proton dose could be calculated accurately on qCBCT but not on rCBCT. When the original plan was recalculated on qCBCT, tumor coverage was maintained when the anatomy was consistent with the pCT. However, large dose variance was observed when the patient anatomy changed. Adaptive planning using qCBCT was able to recover tumor coverage and reduce dose to normal tissue. Conclusion: It is feasible to use quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units for proton dose calculation and adaptive planning in proton therapy. Partly supported by Varian Medical Systems.
Communication: Two measures of isochronal superposition
NASA Astrophysics Data System (ADS)
Roed, Lisa Anita; Gundermann, Ditte; Dyre, Jeppe C.; Niss, Kristine
2013-09-01
A liquid obeys isochronal superposition if its dynamics is invariant along the isochrones in the thermodynamic phase diagram (the curves of constant relaxation time). This paper introduces two quantitative measures of isochronal superposition. The measures are used to test the following six liquids for isochronal superposition: 1,2,6 hexanetriol, glycerol, polyphenyl ether, diethyl phthalate, tetramethyl tetraphenyl trisiloxane, and dibutyl phthalate. The latter four van der Waals liquids obey isochronal superposition to a higher degree than the two hydrogen-bonded liquids. This is a prediction of the isomorph theory, and it confirms findings by other groups.
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures
Theobald, Douglas L.; Wuttke, Deborah S.
2008-01-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907
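For context, the conventional least-squares superposition that THESEUS improves upon can be written compactly with the Kabsch algorithm; the sketch below (Python/NumPy) is a generic implementation, not THESEUS code, and omits the ML down-weighting of variable regions.

```python
import numpy as np

def kabsch_superpose(mobile, target):
    """Rotate and translate `mobile` (N x 3) onto `target`, minimizing RMSD."""
    mu_m, mu_t = mobile.mean(0), target.mean(0)
    a, b = mobile - mu_m, target - mu_t       # center both point sets
    u, _, vt = np.linalg.svd(a.T @ b)         # SVD of the covariance matrix
    d = np.sign(np.linalg.det(u @ vt))        # guard against reflections
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    return (a @ rot) + mu_t

rng = np.random.default_rng(1)
ref = rng.standard_normal((50, 3))            # hypothetical coordinates
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
q *= np.sign(np.linalg.det(q))                # force a proper rotation
mob = ref @ q + 2.0                           # rotated and shifted copy
fitted = kabsch_superpose(mob, ref)
rmsd = np.sqrt(((fitted - ref) ** 2).sum(1).mean())   # ~0 on this toy input
```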
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
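As a concrete illustration of those fundamentals, the sketch below encodes bits with the textbook rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5); this particular code is chosen for illustration and is not necessarily the one treated in the report.

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Encode a bit list; each input bit yields two output (parity) bits."""
    state = 0
    out = []
    for b in bits + [0] * (k - 1):           # flush with zero tail bits
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g1).count("1") % 2)   # parity from generator 1
        out.append(bin(state & g2).count("1") % 2)   # parity from generator 2
    return out

code = conv_encode([1, 0, 1, 1])             # 2 * (4 + 2) = 12 output bits
```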
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
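The baseline the authors accelerate, direct evaluation of a space-varying convolution, can be sketched as below (one-dimensional, with a hypothetical position-dependent Gaussian kernel); matrix source coding itself, the paper's contribution, is not reproduced here.

```python
import numpy as np

def space_varying_conv(signal, half=10):
    """Apply a position-dependent Gaussian blur by direct summation."""
    n = signal.size
    out = np.zeros(n)
    taps = np.arange(-half, half + 1)
    for i in range(n):
        sigma = 1.0 + 3.0 * i / n            # kernel width varies slowly with position
        h = np.exp(-0.5 * (taps / sigma) ** 2)
        h /= h.sum()
        for j, w in zip(taps, h):            # no FFT shortcut: kernel changes per point
            if 0 <= i + j < n:
                out[i] += w * signal[i + j]
    return out

y = space_varying_conv(np.sin(np.linspace(0, 6 * np.pi, 256)))
```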
Chow, J; Leung, M; Van Dyk, J
2008-07-01
This study provides new information on the evaluation of lung dose calculation algorithms as a function of the relative electron density of lung, ρ_e,lung. Doses calculated using the collapsed cone convolution (CCC) and adaptive convolution (AC) algorithms in lung with the Pinnacle3 system were compared to those calculated using Monte Carlo (MC) simulation (EGSnrc-based code). Three groups of lung phantoms, namely "Slab", "Column" and "Cube", with different ρ_e,lung (0.05-0.7), positions, volumes and shapes of lung in water were used. 6 and 18 MV photon beams with 4×4 and 10×10 cm² field sizes produced by a Varian 21EX linac were used in the MC dose calculations. Results show that the CCC algorithm agrees with AC to within ±1% for doses calculated in the lung phantoms, indicating that AC, which requires 3-4 times less computing time than CCC, is a good substitute for the CCC method. Comparing CCC and AC with MC, dose deviations are found when ρ_e,lung ⩽ 0.1-0.3. The degree of deviation depends on the photon beam energy and field size, and is relatively large when high-energy photon beams with small fields are used. For the penumbra widths (20%-80%), CCC and AC agree well with MC for the "Slab" and "Cube" phantoms with the lung volumes at the central beam axis (CAX). However, deviations >2 mm occur in the "Column" phantoms, with two lung volumes separated by a water column along the CAX, using the 18 MV (4×4 cm²) photon beams with ρ_e,lung ⩽ 0.1. © 2008 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Asai, Kazuto
2009-02-01
We determine essentially all partial differential equations satisfied by superpositions of tree type and of a further special type. These equations represent necessary and sufficient conditions for an analytic function to be locally expressible as an analytic superposition of the type indicated. The representability of a real analytic function by a superposition of this type is independent of whether that superposition involves real-analytic functions or C^ρ functions, where the constant ρ is determined by the structure of the superposition. We also prove that the function u defined by u^n = xu^a + yu^b + zu^c + 1 is generally non-representable in any real (resp. complex) domain as f(g(x,y), h(y,z)) with twice differentiable f and differentiable g, h (resp. analytic f, g, h).
The trellis complexity of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Lin, W.
1995-01-01
It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
Optimal simultaneous superpositioning of multiple structures with missing data.
Theobald, Douglas L; Steindel, Phillip A
2012-08-01
Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu. Supplementary data are available at Bioinformatics online.
Improving energy efficiency in handheld biometric applications
NASA Astrophysics Data System (ADS)
Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.
2012-06-01
With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating point operations. If a given algorithm implemented integer convolution instead of floating point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared span 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable size looped convolution, static size looped convolution, and unrolled looped convolution. All testing was performed using the HTC Thunderbolt with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time that convolution is responsible for in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.
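The integer-versus-floating-point distinction can be illustrated with a fixed-point variant of looped two-dimensional convolution; the sketch below is in Python/NumPy for readability, whereas the study's implementations were in C and Java, and the 8-bit scale factor is an arbitrary choice.

```python
import numpy as np

def conv2d(img, kern):
    """Direct 'valid' 2-D convolution with explicit loops (kernel is flipped)."""
    kern = kern[::-1, ::-1]                  # flip for true convolution
    kh, kw = kern.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=img.dtype)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (img[i:i + kh, j:j + kw] * kern).sum()
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(np.float64)
kern_f = rng.standard_normal((5, 5))

# Fixed-point version: scale the kernel to integers, convolve, then rescale.
scale = 1 << 8
kern_i = np.round(kern_f * scale).astype(np.int32)
res_f = conv2d(img, kern_f)
res_i = conv2d(img.astype(np.int32), kern_i) / scale   # approximates res_f
```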
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PReLU activation function is studied, improving image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also involves convolution operations, which are well suited to processing images. Using a deep convolutional neural network performs better for image retrieval than directly extracting visual features from images. However, the structure of a deep convolutional neural network is complex and prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
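A minimal PyTorch sketch of the two ingredients, a PReLU activation and an L1 weight penalty added to the loss, is given below; the layer sizes and penalty weight are hypothetical, since the paper's exact architecture is not specified here.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.PReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.PReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64),                        # hypothetical 64-D retrieval embedding
)

def loss_with_l1(logits, targets, l1_weight=1e-5):
    """Task loss plus an L1 penalty on all weights (the regularization used)."""
    task = nn.functional.cross_entropy(logits, targets)
    l1 = sum(p.abs().sum() for p in model.parameters())
    return task + l1_weight * l1

x = torch.randn(8, 3, 64, 64)                 # dummy batch
loss = loss_with_l1(model(x), torch.randint(0, 64, (8,)))
```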
Liu, Xiang; Chandrasekhar, S; Winzer, P J; Chraplyvy, A R; Tkach, R W; Zhu, B; Taunay, T F; Fishteyn, M; DiGiovanni, D J
2012-08-13
Coherent superposition of light waves has long been used in various fields of science, and recent advances in digital coherent detection and space-division multiplexing have enabled the coherent superposition of information-carrying optical signals to achieve better communication fidelity on amplified-spontaneous-noise limited communication links. However, fiber nonlinearity introduces highly correlated distortions on identical signals and diminishes the benefit of coherent superposition in nonlinear transmission regime. Here we experimentally demonstrate that through coordinated scrambling of signal constellations at the transmitter, together with appropriate unscrambling at the receiver, the full benefit of coherent superposition is retained in the nonlinear transmission regime of a space-diversity fiber link based on an innovatively engineered multi-core fiber. This scrambled coherent superposition may provide the flexibility of trading communication capacity for performance in future optical fiber networks, and may open new possibilities in high-performance and secure optical communications.
Comparison of modal superposition methods for the analytical solution to moving load problems.
DOT National Transportation Integrated Search
1994-01-01
The response of bridge structures to moving loads is investigated using modal superposition methods. Two distinct modal superposition methods are available: the mode-displacement method and the mode-acceleration method. While the mode-displacement met...
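The mode-displacement method amounts to truncating the modal sum and solving each modal equation independently; the sketch below (Python/NumPy, hypothetical modal data) evaluates each modal response with a discrete Duhamel convolution and superposes the modes.

```python
import numpy as np

def mode_displacement(t, phis, omegas, zetas, forces, n_modes):
    """u(x*, t) = sum over the first n_modes of phi_r(x*) * q_r(t)."""
    dt = t[1] - t[0]
    u = np.zeros_like(t)
    for r in range(n_modes):
        wd = omegas[r] * np.sqrt(1 - zetas[r] ** 2)    # damped frequency
        h = np.exp(-zetas[r] * omegas[r] * t) * np.sin(wd * t) / wd   # impulse response
        q = np.convolve(forces[r], h)[: t.size] * dt   # Duhamel (convolution) integral
        u += phis[r] * q
    return u

t = np.linspace(0, 2, 2001)
f = [np.where(t < 0.5, 1.0, 0.0)] * 3                  # modal forces (moving-load proxy)
u = mode_displacement(t, phis=[1.0, 0.7, 0.3],
                      omegas=[10.0, 40.0, 90.0], zetas=[0.02] * 3, forces=f, n_modes=3)
```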
Deep multi-scale convolutional neural network for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Zhang, Feng-zhe; Yang, Xia
2018-04-01
In this paper, we proposed a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of hyperspectral images. We design a deep neural network with a multi-scale convolution layer containing 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and contributes a modest improvement in classification accuracy. In addition, recent deep learning techniques such as the ReLU activation are utilized. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.
The origin of non-classical effects in a one-dimensional superposition of coherent states
NASA Technical Reports Server (NTRS)
Buzek, V.; Knight, P. L.; Barranco, A. Vidiella
1992-01-01
We investigate the nature of the quantum fluctuations in a light field created by the superposition of coherent fields. We give a physical explanation (in terms of Wigner functions and phase-space interference) why the 1-D superposition of coherent states in the direction of the x-quadrature leads to the squeezing of fluctuations in the y-direction, and show that such a superposition can generate the squeezed vacuum and squeezed coherent states.
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
Efstratiadis, Stella; Baumrind, Sheldon; Shofer, Frances; Jacobsson-Hunt, Ulla; Laster, Larry; Ghafari, Joseph
2005-11-01
The aims of this study were (1) to evaluate cephalometric changes in subjects with Class II Division 1 malocclusion who were treated with headgear (HG) or Fränkel function regulator (FR) and (2) to compare findings from regional superpositions of cephalometric structures with those from conventional cephalometric measurements. Cephalographs were taken at baseline, after 1 year, and after 2 years of 65 children enrolled in a prospective randomized clinical trial. The spatial location of the landmarks derived from regional superpositions was evaluated in a coordinate system oriented on natural head position. The superpositions included the best anatomic fit of the anterior cranial base, maxillary base, and mandibular structures. Both the HG and the FR were effective in correcting the distoclusion, and they generated enhanced differential growth between the jaws. Differences between cranial and maxillary superpositions regarding mandibular displacement (Point B, pogonion, gnathion, menton) were noted: the HG had a more horizontal vector on maxillary superposition that was also greater (.0001 < P < .05) than the horizontal displacement observed with the FR. This discrepancy appeared to be related to (1) the clockwise (backward) rotation of the palatal and mandibular planes observed with the HG; the palatal plane's rotation, which was transferred through the occlusion to the mandibular plane, was factored out on maxillary superposition; and (2) the interaction between the inclination of the maxillary incisors and the forward movement of the mandible during growth. Findings from superpositions agreed with conventional angular and linear measurements regarding the basic conclusions for the primary effects of HG and FR. However, the results suggest that inferences of mandibular displacement are more reliable from maxillary than cranial superposition when evaluating occlusal changes during treatment.
Investigation on the Accuracy of Superposition Predictions of Film Cooling Effectiveness
NASA Astrophysics Data System (ADS)
Meng, Tong; Zhu, Hui-ren; Liu, Cun-liang; Wei, Jian-sheng
2018-05-01
Film cooling effectiveness on flat plates with double rows of holes has been studied experimentally and numerically in this paper. This configuration is widely used to simulate multi-row film cooling on turbine vanes. The film cooling effectiveness of double rows of holes and of each single row was used to study the accuracy of superposition predictions. A steady-state infrared measurement technique was used to measure the surface temperature on the flat plate. This paper analyzes the factors that affect film cooling effectiveness, including hole shape, hole arrangement, row-to-row spacing, and blowing ratio. Numerical simulations were performed to analyze the flow structure and film cooling mechanisms between the film cooling rows. Results show that the blowing ratio, within the range of 0.5 to 2, has a significant influence on the accuracy of superposition predictions. At low blowing ratios, results obtained by the superposition method agree well with the experimental data, while at high blowing ratios the accuracy of the superposition prediction decreases. Another significant factor is hole arrangement: results obtained by superposition prediction are nearly the same as experimental values for staggered arrangements, whereas for in-line configurations the superposition values of film cooling effectiveness are much higher than the experimental data. For the hole shapes considered, the accuracy of superposition predictions is better for converging-expanding holes than for cylindrical holes and compound-angle holes. For the two hole-spacing configurations in this paper, predictions show good agreement with the experimental results.
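The abstract does not spell out the superposition formula used; the classical Sellers-type rule, in which single-row effectivenesses combine as eta = 1 - (1 - eta_1)(1 - eta_2), is the standard choice and is sketched below with illustrative numbers.

```python
# Illustrative sketch of the Sellers-type superposition rule for film cooling
# effectiveness (an assumption; the paper's exact formulation is not given in
# the abstract): eta_total = 1 - (1 - eta_1)(1 - eta_2) at each surface point.
import numpy as np

eta_row1 = np.array([0.45, 0.30, 0.20])   # single-row effectiveness, row 1
eta_row2 = np.array([0.40, 0.28, 0.18])   # single-row effectiveness, row 2
eta_superposed = 1.0 - (1.0 - eta_row1) * (1.0 - eta_row2)
print(eta_superposed)                     # predicted double-row effectiveness
```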
Quantum superposition at the half-metre scale.
Kovachy, T; Asenbaum, P; Overstreet, C; Donnelly, C A; Dickerson, S M; Sugarbaker, A; Hogan, J M; Kasevich, M A
2015-12-24
The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger's cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov-Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity.
Image quality of mixed convolution kernel in thoracic computed tomography.
Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar
2016-11-01
The mixed convolution kernel alters its properties spatially according to the depicted organ structure, especially for the lung. We therefore compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with the hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.
Serang, Oliver
2015-08-01
Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from O(nk²) to O(nk log(k)), and has potential application to the all-pairs shortest paths problem.
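The core trick behind such a method can be sketched with a p-norm ("soft maximum") approximation: each output of the max-convolution is approximated by the p-norm of the underlying products, and that p-norm is an ordinary convolution of the p-th powers, computable with the FFT in O(k log(k)). The sketch below illustrates the idea; the published method adds refinements (for example, in the choice of p) that are omitted here.

```python
# Numerical max-convolution via a p-norm approximation:
# max_k x[k]*y[m-k] ~ ( sum_k (x[k]*y[m-k])^p )^(1/p) for large p, and the
# inner sum is an ordinary convolution, so it can be computed with the FFT.
import numpy as np

def numerical_max_convolve(x, y, p=64):
    n = len(x) + len(y) - 1
    X = np.fft.rfft(np.asarray(x, float) ** p, n)
    Y = np.fft.rfft(np.asarray(y, float) ** p, n)
    s = np.maximum(np.fft.irfft(X * Y, n), 0.0)   # clip FFT round-off
    return s ** (1.0 / p)

x = np.array([0.1, 0.7, 0.2])
y = np.array([0.5, 0.4, 0.1])
exact = [max(x[k] * y[m - k] for k in range(len(x)) if 0 <= m - k < len(y))
         for m in range(len(x) + len(y) - 1)]
print(numerical_max_convolve(x, y))   # close to `exact` for moderate p
print(exact)
```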
2001-09-01
[Fragmentary record] This dissertation analyzes the bit error rates of serially concatenated convolutional codes (SCCC) for both BPSK and DPSK modulation. The fragment also cites "Rate-compatible punctured convolutional codes (RCPC codes) and their applications" (IEEE).
NASA Technical Reports Server (NTRS)
Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.
1976-01-01
The DSN telemetry system performance with convolutionally coded data, using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network, is described. Data rates from 80 bps to 115.2 kbps and both S- and X-band receivers are reported. Results for both one- and two-way radio losses are included.
Enhanced online convolutional neural networks for object tracking
NASA Astrophysics Data System (ADS)
Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen
2018-04-01
In recent years, object tracking based on convolutional neural networks has gained increasing attention. The initialization and update of the convolution filters directly affect tracking precision. In this paper, a novel object tracker based on an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters with a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker performs better than the other trackers in terms of AUC and precision.
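A sketch of how convolution filters might be initialized by k-means++ clustering of image patches follows; the patch size, mean-removal step, and use of scikit-learn are illustrative assumptions, not the authors' implementation.

```python
# Sketch: initialize a convolutional filter bank by k-means++ clustering of
# randomly sampled image patches; cluster centers become the initial filters.
import numpy as np
from sklearn.cluster import KMeans

def init_filters(image, filter_size=5, n_filters=16, n_patches=2000, seed=0):
    rng = np.random.default_rng(seed)
    H, W = image.shape
    ys = rng.integers(0, H - filter_size, n_patches)
    xs = rng.integers(0, W - filter_size, n_patches)
    patches = np.stack([image[y:y + filter_size, x:x + filter_size].ravel()
                        for y, x in zip(ys, xs)])
    patches -= patches.mean(axis=1, keepdims=True)       # remove patch mean
    km = KMeans(n_clusters=n_filters, init="k-means++",
                n_init=5, random_state=seed).fit(patches)
    return km.cluster_centers_.reshape(n_filters, filter_size, filter_size)

filters = init_filters(np.random.rand(64, 64))
print(filters.shape)   # (16, 5, 5)
```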
Thermalization as an invisibility cloak for fragile quantum superpositions
NASA Astrophysics Data System (ADS)
Hahn, Walter; Fine, Boris V.
2017-07-01
We propose a method for protecting fragile quantum superpositions in many-particle systems from dephasing by external classical noise. We call superpositions "fragile" if dephasing occurs particularly fast because the noise couples very differently to the superposed states. The method consists of letting a quantum superposition evolve under the internal thermalization dynamics of the system, followed by a time-reversal manipulation known as the Loschmidt echo. The thermalization dynamics makes the superposed states almost indistinguishable during most of the above procedure. We validate the method by applying it to a cluster of spin-½ particles.
Zhang, Shuqing; Zhou, Luyang; Xue, Changxi; Wang, Lei
2017-09-10
Compound eyes offer a promising field of miniaturized imaging systems. Superposition compound eye systems form a composite image by superposing the images produced by different channels. The geometric configuration of superposition compound eye systems is achieved by three micro-lens arrays with different pitches and focal lengths. High resolution is indispensable for the practicability of superposition compound eye systems. In this paper, hybrid diffractive-refractive lenses are introduced into the design of a compound eye system for this purpose. With the help of ZEMAX, two superposition compound eye systems, with and without hybrid diffractive-refractive lenses, were separately designed. We then demonstrate the effectiveness of using a hybrid diffractive-refractive lens to improve the image quality.
Kinematical line broadening and spatially resolved line profiles from AGN.
NASA Astrophysics Data System (ADS)
Schulz, H.; Muecke, A.; Boer, B.; Dresen, M.; Schmidt-Kaler, T.
1995-03-01
We study geometrical effects for emission-line broadening in the optically thin limit by integrating the projected line emissivity along prespecified lines of sight that intersect rotating or expanding disks or cone-like configurations. Analytical expressions are given for the case that emissivity and velocity follow power laws of the radial distance. The results help to interpret spatially resolved spectra and to check the reliability of numerical computations. In the second part we describe a numerical code applicable to any geometrical configuration. Turbulent motions, atmospheric seeing and effects induced by the size of the observing aperture are simulated with appropriate convolution procedures. An application to narrow-line Hα profiles from the central region of the Seyfert galaxy NGC 7469 is presented. The shapes and asymmetries as well as the relative strengths of the Hα lines from different spatial positions can be explained by emission from a nuclear rotating disk of ionized gas, for which the distribution of Hα line emissivity and the rotation curve are derived. Appreciable turbulent line broadening with a Gaussian σ of ~40% of the rotational velocity has to be included to obtain a satisfactory fit.
Safavi-Hemami, Helena; Hu, Hao; Gorasia, Dhana G.; Bandyopadhyay, Pradip K.; Veith, Paul D.; Young, Neil D.; Reynolds, Eric C.; Yandell, Mark; Olivera, Baldomero M.; Purcell, Anthony W.
2014-01-01
Cone snails are highly successful marine predators that use complex venoms to capture prey. At any given time, hundreds of toxins (conotoxins) are synthesized in the secretory epithelial cells of the venom gland, a long and convoluted organ that can measure 4 times the length of the snail's body. In recent years a number of studies have begun to unveil the transcriptomic, proteomic and peptidomic complexity of the venom and venom glands of a number of cone snail species. By using a combination of DIGE, bottom-up proteomics and next-generation transcriptome sequencing the present study identifies proteins involved in envenomation and conotoxin maturation, significantly extending the repertoire of known (poly)peptides expressed in the venom gland of these remarkable animals. We interrogate the molecular and proteomic composition of different sections of the venom glands of 3 specimens of the fish hunter Conus geographus and demonstrate regional variations in gene expression and protein abundance. DIGE analysis identified 1204 gel spots of which 157 showed significant regional differences in abundance as determined by biological variation analysis. Proteomic interrogation identified 342 unique proteins including those that exhibited greatest fold change. The majority of these proteins also exhibited significant changes in their mRNA expression levels validating the reliability of the experimental approach. Transcriptome sequencing further revealed a yet unknown genetic diversity of several venom gland components. Interestingly, abundant proteins that potentially form part of the injected venom mixture, such as echotoxins, phospholipase A2 and con-ikots-ikots, classified into distinct expression clusters with expression peaking in different parts of the gland. Our findings significantly enhance the known repertoire of venom gland polypeptides and provide molecular and biochemical evidence for the compartmentalization of this organ into distinct functional entities. PMID:24478445
Antecedent Synoptic Environments Conducive to North American Polar/Subtropical Jet Superpositions
NASA Astrophysics Data System (ADS)
Winters, A. C.; Keyser, D.; Bosart, L. F.
2017-12-01
The atmosphere often exhibits a three-step pole-to-equator tropopause structure, with each break in the tropopause associated with a jet stream. The polar jet stream (PJ) typically resides in the break between the polar and subtropical tropopause and is positioned atop the strongly baroclinic, tropospheric-deep polar front around 50°N. The subtropical jet stream (STJ) resides in the break between the subtropical and the tropical tropopause and is situated on the poleward edge of the Hadley cell around 30°N. On occasion, the latitudinal separation between the PJ and the STJ can vanish, resulting in a vertical jet superposition. Prior case study work indicates that jet superpositions are often attended by a vigorous transverse vertical circulation that can directly impact the production of extreme weather over North America. Furthermore, this work suggests that there is considerable variability among antecedent environments conducive to the production of jet superpositions. These considerations motivate a comprehensive study to examine the synoptic-dynamic mechanisms that operate within the double-jet environment to produce North American jet superpositions. This study focuses on the identification of North American jet superposition events in the CFSR dataset during November-March 1979-2010. Superposition events will be classified into three characteristic types: "Polar Dominant" events will consist of events during which only the PJ is characterized by a substantial excursion from its climatological latitude band; "Subtropical Dominant" events will consist of events during which only the STJ is characterized by a substantial excursion from its climatological latitude band; and "Hybrid" events will consist of those events characterized by an excursion of both the PJ and STJ from their climatological latitude bands. Following their classification, frequency distributions of jet superpositions will be constructed to highlight the geographical locations most often associated with jet superpositions for each event type. PV inversion and composite analysis will also be performed on each event type in an effort to illustrate the antecedent environments and the dominant synoptic-dynamic mechanisms that favor the production of North American jet superpositions for each event type.
Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas
2011-01-01
Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in bone and lung regions. The average differences of depth dose profiles between MC and AXB, AAA, or CCC were within 1.1, 4.4, and 2.2%, respectively, for all fields and energies. More specifically, those differences in the bone region were up to 1.1, 6.4, and 1.6%; in the lung region they were up to 0.9, 11.6, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to have better dose predictions than AAA and CCC at the tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes of 6 MV and in the lung region for most field sizes of both energies. The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for 18 MV 10 × 10 cm² fields (over 26% passed) and in the bone region for 5 × 5 and 10 × 10 cm² fields (over 64% passed). With the criterion relaxed to 5%/2 mm, the pass rates were over 90% for both AAA and CCC relative to AXB for all energies and fields, with the exception of the AAA 18 MV 2.5 × 2.5 cm² field, which still did not pass. Conclusions: In heterogeneous media, AXB dose prediction ability appears to be comparable to MC and superior to current clinical convolution methods. The dose differences between AXB and AAA or CCC are mainly in the bone, lung, and interface regions. The spatial distributions of these differences depend on the field sizes and energies. PMID:21776802
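The gamma-index criterion used in comparisons like this one combines a dose-difference tolerance with a distance-to-agreement tolerance. A simplified one-dimensional version is sketched below (the study itself used full 3D gamma analysis); the profiles and tolerances are illustrative.

```python
# Simplified 1D gamma index: for each reference point, gamma is the minimum
# over evaluated points of sqrt((dose diff / dose tol)^2 + (dist / DTA)^2);
# a point passes if gamma <= 1, e.g., under a 2%/2 mm criterion.
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.02, dist_tol=2.0):
    d_tol = dose_tol * d_ref.max()                 # global (%) dose criterion
    dd = (d_eval[None, :] - d_ref[:, None]) / d_tol
    dx = (x_eval[None, :] - x_ref[:, None]) / dist_tol
    return np.sqrt(dd**2 + dx**2).min(axis=1)      # gamma per reference point

x = np.linspace(0.0, 100.0, 201)                   # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)            # reference dose profile
ev = 1.015 * np.exp(-((x - 50.6) / 20.0) ** 2)     # slightly shifted/scaled
g = gamma_1d(x, ref, x, ev)
print(f"pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")
```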
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Hunter, Craig A.
1999-01-01
An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring is convolution location, Mach number, boattail angle, and NPR dependent. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.
Experimental study of current loss and plasma formation in the Z machine post-hole convolute
NASA Astrophysics Data System (ADS)
Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; Jennings, C. A.; McBride, R. D.; Waisman, E. M.; Hutsel, B. T.; Stygar, W. A.; Rose, D. V.; Maron, Y.
2017-01-01
The Z pulsed-power generator at Sandia National Laboratories drives high energy density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H2O, H2, and hydrocarbons. Plasma densities increase from 1 × 10¹⁶ cm⁻³ (level of detectability) just before peak current to over 1 × 10¹⁷ cm⁻³ at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode-to-anode plasma velocity in the range of 35-50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.
Non-classical State via Superposition of Two Opposite Coherent States
NASA Astrophysics Data System (ADS)
Ren, Gang; Du, Jian-ming; Yu, Hai-jun
2018-04-01
We study the non-classical properties of states generated by superpositions of two opposite coherent states with arbitrary relative phase factors. We show that the relative phase factor plays an important role in these superpositions. We demonstrate this result by discussing their squeezing properties, quantum statistical properties, and fidelity.
Ultrafast creation of large Schrödinger cat states of an atom.
Johnson, K G; Wong-Campos, J D; Neyenhuis, B; Mizrahi, J; Monroe, C
2017-09-26
Mesoscopic quantum superpositions, or Schrödinger cat states, are widely studied for fundamental investigations of quantum measurement and decoherence as well as applications in sensing and quantum information science. The generation and maintenance of such states relies upon a balance between efficient external coherent control of the system and sufficient isolation from the environment. Here we create a variety of cat states of a single trapped atom's motion in a harmonic oscillator using ultrafast laser pulses. These pulses produce high fidelity impulsive forces that separate the atom into widely separated positions, without restrictions that typically limit the speed of the interaction or the size and complexity of the resulting motional superposition. This allows us to quickly generate and measure cat states larger than previously achieved in a harmonic oscillator, and create complex multi-component superposition states in atoms. Generation of mesoscopic quantum superpositions requires both reliable coherent control and isolation from the environment. Here, the authors succeed in creating a variety of cat states of a single trapped atom, mapping spin superpositions into spatial superpositions using ultrafast laser pulses.
2015-12-15
[Fragmentary record] Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Networks. Convolutional Neural Networks (CNNs), a deep learning approach, outperform conventional techniques on standard object detection and classification tasks; this work reports detection accuracy and speed on the fine-grained Caltech-UCSD bird dataset (Wah et al., 2011).
Xiao, Kai; Chen, Danny Z; Hu, X Sharon; Zhou, Bo
2012-12-01
The three-dimensional digital differential analyzer (3D-DDA) algorithm is a widely used ray traversal method, which is also at the core of many convolution/superposition (C/S) dose calculation approaches. However, porting existing C/S dose calculation methods onto graphics processing units (GPUs) has brought challenges to retaining the efficiency of this algorithm. In particular, a straightforward implementation of the original 3D-DDA algorithm incurs substantial branch divergence, which conflicts with the GPU programming model and leads to suboptimal performance. In this paper, an efficient GPU implementation of the 3D-DDA algorithm is proposed, which effectively reduces such branch divergence and improves the performance of C/S dose calculation programs running on the GPU. The main idea of the proposed method is to convert a number of conditional statements in the original 3D-DDA algorithm into a set of simple operations (e.g., arithmetic, comparison, and logic) which are better supported by the GPU architecture. To verify and demonstrate the performance improvement, this ray traversal method was integrated into a GPU-based collapsed cone convolution/superposition (CCCS) dose calculation program. The proposed method has been tested using a water phantom and various clinical cases on an NVIDIA GTX570 GPU. The CCCS dose calculation program based on the efficient 3D-DDA ray traversal implementation runs 1.42-2.67× faster than the one based on the original 3D-DDA implementation, without losing any accuracy. The results show that the proposed method can effectively reduce branch divergence in the original 3D-DDA ray traversal algorithm and improve the performance of the CCCS program running on the GPU. Considering the wide utilization of the 3D-DDA algorithm, various applications can benefit from this implementation method.
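The flavor of the conversion described (replacing the 3D-DDA if/else ladder with comparison results used arithmetically) can be sketched as follows. This is an illustration of the idea in NumPy, not the authors' GPU code.

```python
# One branchless 3D-DDA traversal step: the axis with the smallest t_max is
# advanced using a 0/1 comparison mask instead of conditionals, so all
# "threads" would execute the same instruction sequence.
import numpy as np

def dda_step(voxel, t_max, t_delta, step):
    """voxel, step: int arrays (3,); t_max, t_delta: float arrays (3,)."""
    m = t_max.min()
    hit = (t_max == m).astype(int)    # 0/1 mask of the axis (or axes) crossed
    voxel = voxel + hit * step        # advance crossed axis without branching
    t_max = t_max + hit * t_delta     # schedule next crossing on that axis
    return voxel, t_max

v, tm = dda_step(np.array([3, 4, 5]),
                 np.array([0.7, 0.4, 0.9]),
                 np.array([0.2, 0.3, 0.25]),
                 np.array([1, 1, -1]))
print(v, tm)   # -> [3 5 5] [0.7 0.7 0.9]
```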
Witoonchart, Peerajak; Chongstitvatana, Prabhas
2017-08-01
In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is a normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, Shigenari; Department of Electronics and Electrical Engineering, Keio University, 3-14-1, Hiyoshi, Kohoku-ku, Yokohama, 223-8522; Takeoka, Masahiro
2006-04-15
We present a simple protocol to purify a coherent-state superposition that has undergone a linear lossy channel. The scheme requires only a single beam splitter and a homodyne detector, and thus is experimentally feasible. In practice, a superposition of coherent states is transformed into a classical mixture of coherent states by linear loss, which is usually the dominant decoherence mechanism in optical systems. We also address the possibility of producing a larger-amplitude superposition state from decohered states, and show that in most cases the decoherence of the states is amplified along with the amplitude.
The principle of superposition and its application in ground-water hydraulics
Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.
1987-01-01
The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that problem solutions can be added together to obtain composite solutions. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader.
The principle of superposition and its application in ground-water hydraulics
Reilly, T.E.; Franke, O.L.; Bennett, G.D.
1984-01-01
The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important application in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that solutions to individual problems can be added together to obtain solutions to complex problems. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to groundwater hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader. (USGS)
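A minimal worked example of this principle: in a linear (confined-aquifer) system, the Theis drawdowns produced by individual pumping wells can simply be added at an observation point. The parameter values below are illustrative.

```python
# Superposition in ground-water hydraulics: total drawdown at a point is the
# sum of the Theis drawdowns caused by each well, because the governing
# equation is linear. Theis well function W(u) equals scipy's exp1(u).
import numpy as np
from scipy.special import exp1

def theis_drawdown(r, t, Q, T=500.0, S=1e-4):
    """Drawdown (m) at radius r (m), time t (d), pumping rate Q (m^3/d)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Observation point 120 m from well A (Q=800) and 300 m from well B (Q=500):
s_total = theis_drawdown(120.0, 10.0, 800.0) + theis_drawdown(300.0, 10.0, 500.0)
print(f"composite drawdown: {s_total:.3f} m")
```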
ERIC Educational Resources Information Center
Umar, A.; Yusau, B.; Ghandi, B. M.
2007-01-01
In this note, we introduce and discuss the convolution of two series. The idea is simple and can be introduced to higher secondary school classes, and it has the potential of providing a good background for the well-known convolution of functions.
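The convolution of two series referred to here is the Cauchy product, c_n = a_0 b_n + a_1 b_(n-1) + ... + a_n b_0; a short worked example:

```python
# Cauchy product of two (finite) series: c_n = sum_{k} a_k * b_{n-k}.
a = [1, 2, 3]          # coefficients of 1 + 2x + 3x^2
b = [4, 5]             # coefficients of 4 + 5x

c = [sum(a[k] * b[n - k] for k in range(len(a)) if 0 <= n - k < len(b))
     for n in range(len(a) + len(b) - 1)]
print(c)   # [4, 13, 22, 15] -- matches (1 + 2x + 3x^2)(4 + 5x)
```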
A fast complex integer convolution using a hybrid transform
NASA Technical Reports Server (NTRS)
Reed, I. S.; K Truong, T.
1978-01-01
It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q²) to yield a new algorithm for computing the discrete cyclic convolution of complex-valued data points. By this means, a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.
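The cyclic-convolution theorem that such hybrid algorithms exploit can be illustrated with an ordinary floating-point FFT (standing in here for the paper's exact Winograd/Galois-field transforms, which avoid round-off error):

```python
# Cyclic convolution is a pointwise product in the transform domain.
import numpy as np

def cyclic_convolve(x, y):
    X, Y = np.fft.fft(x), np.fft.fft(y)
    return np.fft.ifft(X * Y)            # inverse transform of the product

x = np.array([1 + 2j, 0 + 1j, 3 - 1j, 2 + 0j])
y = np.array([0 + 0j, 1 + 0j, 0 + 1j, 0 + 0j])
direct = np.array([sum(x[k] * y[(n - k) % 4] for k in range(4)) for n in range(4)])
print(np.allclose(cyclic_convolve(x, y), direct))   # True
```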
Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network
1989-08-01
[Fragmentary record] This work investigates the use of convolutional codes in Type II hybrid ARQ protocols; throughput is achieved by using a low-rate (r = 0.5), high-constraint-length (e.g., 32) punctured convolutional code, and code puncturing provides a variable-rate code. The fragment cites J. Hagenauer, "Rate-compatible punctured convolutional codes," Proc. Int. Conf. Commun., 21.4.1-21.4.5, 1987.
2008-09-01
[Fragmentary record] Convolutional codes, introduced in 1955 by Elias [7], are most commonly used along with block codes and are characterized by the code rate r = k/n. A convolutional code with rate r = 1/2 and constraint length κ = 3, namely [7 5], is used; Figure 2 of the source shows the corresponding encoder block diagram.
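For concreteness, a sketch of the rate-1/2, constraint-length-3 encoder with octal generators [7, 5] named in the record:

```python
# Rate-1/2, K=3 convolutional encoder with generators 7 (binary 111) and
# 5 (binary 101): each input bit yields two output bits from shift-register
# taps.
def encode_75(bits):
    s1 = s2 = 0                     # shift-register state
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)     # generator 7 (octal) = 111
        out.append(b ^ s2)          # generator 5 (octal) = 101
        s1, s2 = b, s1              # shift
    return out

print(encode_75([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]
```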
Teleportation of Unknown Superpositions of Collective Atomic Coherent States
NASA Astrophysics Data System (ADS)
Zheng, Shi-Biao
2001-06-01
We propose a scheme to teleport an unknown superposition of two atomic coherent states with different phases. Our scheme is based on resonant and dispersive atom-field interaction. It provides, for the first time, a possibility of teleporting macroscopic superposition states of many atoms. The project was supported by the National Natural Science Foundation of China under Grant No. 60008003.
Student Ability to Distinguish between Superposition States and Mixed States in Quantum Mechanics
ERIC Educational Resources Information Center
Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.
2015-01-01
Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the…
Nonclassical Properties of Q-Deformed Superposition Light Field State
NASA Technical Reports Server (NTRS)
Ren, Min; Shenggui, Wang; Ma, Aiqun; Jiang, Zhuohong
1996-01-01
In this paper, the squeezing, bunching, and anti-bunching effects of a superposition light field state involving the q-deformed vacuum state and the q-Glauber coherent state are studied, and the dependence of these effects on the controllable q-parameter is obtained.
Liu, Wenbin; Liu, Aimin
2018-01-01
With the exploitation of offshore oil and gas gradually moving to deep water, higher temperature and pressure differences are applied to the pipeline system, making global buckling of the pipeline more serious. For unburied deep-water pipelines, lateral buckling is the major buckling form. Initial imperfections widely exist in pipeline systems due to manufacturing defects or the influence of an uneven seabed, and their distribution and geometric features are random. They can be divided into two kinds based on shape: single-arch imperfections and double-arch imperfections. This paper analyzed the global buckling process of a pipeline with 2 initial imperfections using a numerical simulation method and revealed how the ratio of the initial imperfection's space length to the imperfection's wavelength and the combination of imperfections affect the buckling process. The results show that a pipeline with 2 initial imperfections may suffer superposition of global buckling. The growth rates of buckling displacement, axial force, and bending moment in the superposition zone are several times larger than in a pipeline without buckling superposition. The ratio of the initial imperfection's space length to the imperfection's wavelength decides whether a pipeline suffers buckling superposition. The potential failure point of a pipeline exhibiting buckling superposition is the same as that of a pipeline without it, but the failure risk is much higher. The shape and direction of two nearby imperfections also affect the failure risk of a pipeline exhibiting global buckling superposition. The failure risk of a pipeline with two double-arch imperfections is higher than that of a pipeline with two single-arch imperfections. PMID:29554123
On the Use of Material-Dependent Damping in ANSYS for Mode Superposition Transient Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nie, J.; Wei, X.
The mode superposition method is often used for dynamic analysis of complex structures, such as the seismic Category I structures in nuclear power plants, in place of the less efficient full method, which uses the full system matrices for calculation of the transient responses. In such applications, specification of material-dependent damping is usually desirable because complex structures can consist of multiple types of materials that may have different energy dissipation capabilities. A recent review of the ANSYS manual for several releases found that the use of material-dependent damping is not clearly explained for performing a mode superposition transient dynamic analysis. This paper includes several mode superposition transient dynamic analyses using different ways to specify damping in ANSYS, in order to determine how material-dependent damping can be specified conveniently in a mode superposition transient dynamic analysis.
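One common way to realize material-dependent damping in a mode superposition analysis is to blend the material damping ratios into an effective modal damping ratio by strain-energy weighting. The sketch below illustrates that convention on a two-element chain; it is an assumption about the mechanics, not a reproduction of ANSYS's internal algorithm, and all parameter values are illustrative.

```python
# Mode superposition setup with strain-energy-weighted modal damping: solve
# the generalized eigenproblem, then weight each material's damping ratio by
# the modal strain energy it stores.
import numpy as np
from scipy.linalg import eigh

k1, k2 = 4.0e6, 1.0e6            # element stiffnesses (two materials)
zeta1, zeta2 = 0.01, 0.05        # material damping ratios
M = np.diag([2.0, 1.0])
K = np.array([[k1 + k2, -k2], [-k2, k2]])
w2, Phi = eigh(K, M)             # generalized eigenproblem K v = w2 M v
omega = np.sqrt(w2)              # modal frequencies (rad/s)

def element_energy(phi):
    e1 = 0.5 * k1 * phi[0] ** 2                # element 1 strains DOF 1
    e2 = 0.5 * k2 * (phi[1] - phi[0]) ** 2     # element 2 strains DOFs 1-2
    return e1, e2

zeta = []
for i in range(2):
    e1, e2 = element_energy(Phi[:, i])
    zeta.append((zeta1 * e1 + zeta2 * e2) / (e1 + e2))
print("modal frequencies (rad/s):", omega)
print("effective modal damping ratios:", zeta)
```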
Testing the quantum superposition principle: matter waves and beyond
NASA Astrophysics Data System (ADS)
Ulbricht, Hendrik
2015-05-01
New technological developments allow us to explore the quantum properties of very complex systems, bringing the question of whether macroscopic systems also share such features within experimental reach. Interest in this question is increased by the fact that, on the theory side, many suggest that the quantum superposition principle is not exact, with departures from it becoming larger the more macroscopic the system. Testing the superposition principle intrinsically also means testing suggested extensions of quantum theory, so-called collapse models. We will report on three new proposals to experimentally test the superposition principle with nanoparticle interferometry, optomechanical devices, and spectroscopic experiments in the frequency domain. We will also report on the status of optical levitation and cooling experiments with nanoparticles in our labs, towards an Earth-bound matter-wave interferometer to test the superposition principle for a particle mass of one million amu (atomic mass units).
Protograph-Based Raptor-Like Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.
2014-01-01
Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that, in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible punctured turbo codes (RCPT) did not outperform convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally with the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of trellis states.
Convolution of large 3D images on GPU and its decomposition
NASA Astrophysics Data System (ADS)
Karas, Pavel; Svoboda, David
2011-12-01
In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption, and also in terms of memory transfers between the CPU and GPU, which have a significant influence on overall computation time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
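The frequency-domain convolution at the heart of the method follows the convolution theorem; a minimal single-device sketch is below. The article's actual contribution, decomposing the transform by decimation in frequency so that volumes exceeding GPU memory can be processed, is omitted from this illustration.

```python
# 3D convolution via the convolution theorem; zero-padding to the full output
# size yields linear rather than cyclic convolution. The article's
# decimation-in-frequency decomposition for huge volumes is omitted here.
import numpy as np

def fft_convolve3d(image, kernel):
    shape = [i + k - 1 for i, k in zip(image.shape, kernel.shape)]
    F = np.fft.rfftn(image, shape) * np.fft.rfftn(kernel, shape)
    return np.fft.irfftn(F, shape)

img = np.random.rand(64, 64, 64)
ker = np.ones((5, 5, 5)) / 125.0       # simple box-blur kernel
print(fft_convolve3d(img, ker).shape)  # (68, 68, 68): full linear convolution
```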
Quantum state engineering by a coherent superposition of photon subtraction and addition
NASA Astrophysics Data System (ADS)
Lee, Su-Yong; Nha, Hyunchul
2011-10-01
We study a coherent superposition tâ + r↠of the field annihilation and creation operators acting on continuous-variable systems and propose its application for quantum state engineering. We propose an experimental scheme to implement this elementary coherent operation and discuss its usefulness for producing an arbitrary superposition of number states involving up to two photons.
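A truncated-Fock-space sketch of the elementary operation studied here (applying t a + r a† to a state and renormalizing) is straightforward; the truncation dimension, coefficients, and input state below are illustrative.

```python
# Apply the coherent superposition t*a + r*a_dagger to a Fock-space vector.
import numpy as np

N = 20                                           # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)       # annihilation operator
op = lambda t, r: t * a + r * a.conj().T         # coherent superposition

psi = np.zeros(N); psi[1] = 1.0                  # single-photon state |1>
out = op(0.6, 0.8) @ psi
out /= np.linalg.norm(out)                       # heralded, so renormalize
print(np.round(out[:4], 3))                      # superposition of |0> and |2>
```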
NASA Astrophysics Data System (ADS)
Xiao, Xiazi; Yu, Long
2018-05-01
Linear and square superposition hardening models are compared for the surface nanoindentation of ion-irradiated materials. Hardening mechanisms of both dislocations and defects within the plasticity affected region (PAR) are considered. Four sets of experimental data for ion-irradiated materials are adopted to compare with theoretical results of the two hardening models. It is indicated that both models describe experimental data equally well when the PAR is within the irradiated layer; whereas, when the PAR is beyond the irradiated region, the square superposition hardening model performs better. Therefore, the square superposition model is recommended to characterize the hardening behavior of ion-irradiated materials.
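In their commonly used forms (assumed here, since the abstract does not write them out), the two rules combine the dislocation and defect contributions either linearly or in root-sum-square fashion:

```python
# Linear rule: dH = dH_dis + dH_def; square rule: dH = sqrt(dH_dis^2 +
# dH_def^2). Values are illustrative hardening increments.
import numpy as np

dH_dis, dH_def = 1.2, 0.9   # GPa: dislocation and irradiation-defect terms
print("linear:", dH_dis + dH_def)            # 2.1
print("square:", np.hypot(dH_dis, dH_def))   # 1.5
```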
NASA Astrophysics Data System (ADS)
Handlos, Zachary J.
Though considerable research attention has been devoted to examination of the Northern Hemispheric polar and subtropical jet streams, relatively little has been directed toward understanding the circumstances that conspire to produce the relatively rare vertical superposition of these usually separate features. This dissertation investigates the structure and evolution of large-scale environments associated with jet superposition events in the northwest Pacific. An objective identification scheme, using NCEP/NCAR Reanalysis 1 data, is employed to identify all jet superpositions in the west Pacific (30-40°N, 135-175°E) for boreal winters (DJF) between 1979/80 - 2009/10. The analysis reveals that environments conducive to west Pacific jet superposition share several large-scale features usually associated with East Asian Winter Monsoon (EAWM) northerly cold surges, including the presence of an enhanced Hadley Cell-like circulation within the jet entrance region. It is further demonstrated that several EAWM indices are statistically significantly correlated with jet superposition frequency in the west Pacific. The life cycle of EAWM cold surges promotes interaction between tropical convection and internal jet dynamics. Low-potential-vorticity (PV), high-θe tropical boundary-layer air, exhausted by anomalous convection in the west Pacific lower latitudes, is advected poleward towards the equatorward side of the jet in upper tropospheric isentropic layers, resulting in anomalous anticyclonic wind shear that accelerates the jet. This, along with geostrophic cold air advection in the left jet entrance region that drives the polar tropopause downward through the jet core, promotes the development of the deep, vertical PV wall characteristic of superposed jets. West Pacific jet superpositions preferentially form within an environment favoring the aforementioned characteristics regardless of EAWM seasonal strength. Post-superposition, it is shown that the west Pacific jet extends eastward and is associated with an upper tropospheric cyclonic (anticyclonic) anomaly in its left (right) exit region. A downstream ridge is present over northwest Canada, and within the strong EAWM environment, a wavier flow over North America is observed relative to the neutral EAWM environment. Preliminary investigation of the two weak EAWM season superpositions reveals a Kona Low type feature post-superposition. This is associated with anomalous convection reminiscent of an atmospheric river southwest of Mexico.
Development and application of deep convolutional neural network in target detection
NASA Astrophysics Data System (ADS)
Jiang, Xiaowei; Wang, Chunping; Fu, Qiang
2018-04-01
With the development of big data and algorithms, deep convolutional neural networks with more hidden layers have more powerful feature learning and feature expression ability than traditional machine learning methods, allowing artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes some existing problems in current research, and finally considers prospects for the future development of deep convolutional neural networks.
NASA Astrophysics Data System (ADS)
Sohn, Y.
2011-12-01
Recent studies show that the architecture of hydromagmatic volcanoes is far more complex than formerly expected. A number of external factors, such as paleohydrology and tectonics, in addition to magmatic processes are thought to play a role in controlling the overall characteristics and architecture of these volcanoes. One of the main consequences of these controls is the migration of the active vent during eruption. Case studies of hydromagmatic volcanoes in Korea show that those volcanoes that have undergone vent migration are characterized by superposition or juxtaposition of multiple rim deposits of partial tuff rings and/or tuff cones that have contrasting lithofacies characteristics, bed attitudes, and paleoflow directions. Various causes of vent migration are inferred from these volcanoes. Large-scale collapse of fragile substrate is interpreted to have caused vent migration in the Early Pleistocene volcanoes of Jeju Island, which were built upon still unconsolidated continental shelf sediments. Late Pleistocene to Holocene volcanoes, which were built upon a stack of rigid, shield-forming lava flows, lack features due to large-scale substrate collapse and have generally simple and circular morphologies either of a tuff ring or of a tuff cone. However, ~600 m shift of the eruptive center is inferred from one of these volcanoes (Ilchulbong tuff cone). The vent migration in this volcano is interpreted to have occurred because the eruption was sourced by multiple magma batches with significant eruptive pauses in between. The Yangpori diatreme in a Miocene terrestrial half-graben basin in SE Korea is interpreted to be a subsurface equivalent of a hydromagmatic volcano that has undergone vent migration. The vent migration here is inferred to have had both vertical and lateral components and have been caused by an abrupt tectonic activity near the basin margin. In all these cases, rimbeds or diatreme fills derived from different source vents are bounded by either prominent or subtle, commonly laterally extensive truncation surfaces or stratigraphic discontinuities. Careful documentation of these surfaces and discontinuities thus appears vital to proper interpretation of eruption history, morphologic evolution, and even deep-seated magmatic processes of a hydromagmatic volcano. In this respect, the technique known as 'allostratigraphy' appears useful in mapping, correlation, and interpretation of many hydrovolcanic edifices and sequences.
A spectral nudging method for the ACCESS1.3 atmospheric model
NASA Astrophysics Data System (ADS)
Uhe, P.; Thatcher, M.
2015-06-01
A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows for flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, improving the time taken by the nudging scheme by 10-30 times compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.
A spectral nudging method for the ACCESS1.3 atmospheric model
NASA Astrophysics Data System (ADS)
Uhe, P.; Thatcher, M.
2014-10-01
A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, improving the time taken by the nudging scheme by 10 to 30 times compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.
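The separable approximation described above can be sketched as two successive one-dimensional Gaussian convolutions standing in for the full two-dimensional low-pass filter, followed by relaxation toward the filtered driving field. The filter scale, kernel shape, and nudging strength below are illustrative assumptions, not the ACCESS implementation.

```python
# Separable spectral-nudging sketch: low-pass filter a 2D field with two 1D
# convolutions (one per grid direction), then relax the model field toward
# the filtered large scales of the driving data.
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def lowpass_2d_separable(field, sigma=4.0, radius=12):
    k = gaussian_kernel(sigma, radius)
    out = np.apply_along_axis(np.convolve, 0, field, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, out, k, mode="same")

model = np.random.rand(96, 144)      # model field on a lat-lon grid
driver = np.random.rand(96, 144)     # reanalysis field on the same grid
alpha = 0.1                          # nudging strength per step
nudged = model + alpha * (lowpass_2d_separable(driver) -
                          lowpass_2d_separable(model))
print(nudged.shape)
```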
Ligated Metal Clusters - Structures, Energy and Reactivity
2016-04-01
[Fragmentary record] We continued to improve the projection superposition approximation (PSA) algorithm through a more careful consideration of how to calculate cross sections for elongated molecules. The PSA is now complete, and we have made it available free of charge to the scientific community on a dedicated website at UCSB. Work supported by AFOSR.
Multichannel Polarization-Controllable Superpositions of Orbital Angular Momentum States.
Yue, Fuyong; Wen, Dandan; Zhang, Chunmei; Gerardot, Brian D; Wang, Wei; Zhang, Shuang; Chen, Xianzhong
2017-04-01
A facile metasurface approach is shown to realize polarization-controllable multichannel superpositions of orbital angular momentum (OAM) states with various topological charges. By manipulating the polarization state of the incident light, four kinds of superpositions of OAM states are realized using a single metasurface consisting of space-variant arrays of gold nanoantennas. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Splash-cup plants accelerate raindrops to disperse seeds.
Amador, Guillermo J; Yamada, Yasukuni; McCurley, Matthew; Hu, David L
2013-02-01
The conical flowers of splash-cup plants Chrysosplenium and Mazus catch raindrops opportunistically, exploiting the subsequent splash to disperse their seeds. In this combined experimental and theoretical study, we elucidate their mechanism for maximizing dispersal distance. We fabricate conical plant mimics using three-dimensional printing, and use high-speed video to visualize splash profiles and seed travel distance. Drop impacts that strike the cup off-centre achieve the largest dispersal distances of up to 1 m. Such distances are achieved because splash speeds are three to five times faster than incoming drop speeds, and so faster than the traditionally studied splashes occurring upon horizontal surfaces. This anomalous splash speed is because of the superposition of two components of momentum, one associated with a component of the drop's motion parallel to the splash-cup surface, and the other associated with film spreading induced by impact with the splash-cup. Our model incorporating these effects predicts the observed dispersal distance within 6-18% error. According to our experiments, the optimal cone angle for the splash-cup is 40°, a value consistent with the average of five species of splash-cup plants. This optimal angle arises from the competing effects of velocity amplification and projectile launching angle.
Quantitative Laser-Saturated Fluorescence Measurements of Nitric Oxide in a Heptane Spray Flame
NASA Technical Reports Server (NTRS)
Cooper, Clayton S.; Laurendeau, Normand M.; Lee, Chi (Technical Monitor)
1997-01-01
We report spatially resolved laser-saturated fluorescence measurements of NO concentration in a pre-heated, lean-direct injection (LDI) spray flame at atmospheric pressure. The spray is produced by a hollow-cone, pressure-atomized nozzle supplied with liquid heptane. NO is excited via the Q2(26.5) transition of the gamma(0,0) band. Detection is performed in a 2-nm region centered on the gamma(0,1) band. Because of the relatively close spectral spacing between the excitation (226 nm) and detection wavelengths (236 nm), the gamma(0,1) band of NO cannot be isolated from the spectral wings of the Mie scattering signal produced by the spray. To account for the resulting superposition of the fluorescence and scattering signals, a background subtraction method has been developed that utilizes a nearby non-resonant wavelength. Excitation scans have been performed to locate the optimum off-line wavelength. Detection scans have been performed at problematic locations in the flame to determine possible fluorescence interferences from UHCs and PAHs at both the on-line and off-line excitation wavelengths. Quantitative radial NO profiles are presented and analyzed so as to better understand the operation of lean-direct injectors for gas turbine combustors.
Analytical model of a corona discharge from a conical electrode under saturation
NASA Astrophysics Data System (ADS)
Boltachev, G. Sh.; Zubarev, N. M.
2012-11-01
Exact partial solutions are found for the electric field distribution in the outer region of a stationary unipolar corona discharge from an ideal conical needle in the space-charge-limited current mode with allowance for the electric field dependence of the ion mobility. It is assumed that only the very tip of the cone is responsible for the discharge, i.e., that the ionization zone is a point. The solutions are obtained by joining the spherically symmetric potential distribution in the drift space and the self-similar potential distribution in the space-charge-free region. Such solutions are outside the framework of the conventional Deutsch approximation, according to which the space charge insignificantly influences the shape of equipotential surfaces and electric lines of force. The dependence is derived of the corona discharge saturation current on the apex angle of the conical electrode and applied potential difference. A simple analytical model is suggested that describes drift in the point-plane electrode geometry under saturation as a superposition of two exact solutions for the field potential. In terms of this model, the angular distribution of the current density over the massive plane electrode is derived, which agrees well with Warburg's empirical law.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thiyagarajan, Rajesh; Vikraman, S; Karrthick, KP
Purpose: To evaluate the impact of the dose calculation algorithm on the dose distribution of biologically optimized Volumetric Modulated Arc Therapy (VMAT) plans for esophageal cancer. Methods: Eighteen retrospectively treated patients with carcinoma of the esophagus were studied. VMAT plans were optimized using biological objectives in the Monaco (5.0) TPS for a 6 MV photon beam (Elekta Infinity). These plans were calculated for final dose using the Monte Carlo (MC), Collapsed Cone Convolution (CCC), and Pencil Beam Convolution (PBC) algorithms from the Monaco and Oncentra Masterplan TPS. A dose grid of 2 mm was used for all algorithms and a 1% per-plan uncertainty was maintained for the MC calculation. MC-based calculations were considered the reference for CCC and PBC. Dose volume histogram (DVH) indices (D95, D98, D50, etc.) of the target (PTV) and critical structures were compared to study the impact of all three algorithms. Results: Beam models were consistent with measured data. The mean differences observed with reference to the MC calculation for D98, D95, D50, and D2 of the PTV were 0.37%, −0.21%, 1.51%, and 1.18%, respectively, for CCC, and 3.28%, 2.75%, 3.61%, and 3.08% for PBC. The heart D25 mean difference was 4.94% and 11.21% for CCC and PBC, respectively. The lung Dmean mean difference was 1.5% (CCC) and 4.1% (PBC). The spinal cord D2 mean difference was 2.35% (CCC) and 3.98% (PBC). Similar differences were observed for the liver and kidneys. The overall mean difference found for the target and critical structures was 0.71±1.52% and 2.71±3.10% for CCC, and 3.18±1.55% and 6.61±5.1% for PBC, respectively. Conclusion: We observed a significant overestimate of the dose distribution by CCC and PBC as compared to MC. The dose prediction of CCC is closer (<3%) to MC than that of PBC. This can be attributed to the poor performance of CCC and PBC in inhomogeneous regions around the esophagus. CCC can be considered as an alternative in the absence of an MC algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkoff, T. J., E-mail: adidasty@gmail.com
We motivate and introduce a class of “hierarchical” quantum superposition states of N coupled quantum oscillators. Unlike other well-known multimode photonic Schrödinger-cat states such as entangled coherent states, the hierarchical superposition states are characterized as two-branch superpositions of tensor products of single-mode Schrödinger-cat states. In addition to analyzing the photon statistics and quasiprobability distributions of prominent examples of these nonclassical states, we consider their usefulness for high-precision quantum metrology of nonlinear optical Hamiltonians and quantify their mode entanglement. We propose two methods for generating hierarchical superpositions in N = 2 coupled microwave cavities, exploiting currently existing quantum optical technology for generating entanglement between spatially separated electromagnetic field modes.
Space-variant polarization patterns of non-collinear Poincaré superpositions
NASA Astrophysics Data System (ADS)
Galvez, E. J.; Beach, K.; Zeosky, J. J.; Khajavi, B.
2015-03-01
We present analysis and measurements of the polarization patterns produced by non-collinear superpositions of Laguerre-Gauss spatial modes in orthogonal polarization states, which are known as Poincaré modes. Our findings agree with the prediction (I. Freund, Opt. Lett. 35, 148-150 (2010)) that superpositions containing a C-point lead to a rotation of the polarization ellipse in three dimensions. Here we perform imaging polarimetry of superpositions of first- and zero-order spatial modes at relative beam angles of 0-4 arcmin. We find Poincaré-type polarization patterns showing fringes in polarization orientation, but which preserve the polarization-singularity index for all three cases of C-points: lemons, stars and monstars.
Non-coaxial superposition of vector vortex beams.
Aadhi, A; Vaity, Pravin; Chithrabhanu, P; Reddy, Salla Gangi; Prabakar, Shashi; Singh, R P
2016-02-10
Vector vortex beams are classified into four types depending upon the spatial variation in their polarization vector. We have generated all four types of vector vortex beams by using a modified polarization Sagnac interferometer with a vortex lens. Further, we have studied the non-coaxial superposition of two vector vortex beams. It is observed that the superposition of two vector vortex beams with the same polarization singularity leads to a beam with another kind of polarization singularity in their interaction region. The results may be of importance for ultrahigh-security polarization-encrypted data that utilize vector vortex beams, and for multiple optical trapping with non-coaxial superpositions of vector vortex beams. We verified our experimental results with theory.
Sze, Michelle Wynne C; Sugon, Quirino M; McNamara, Daniel J
2010-11-01
In this paper, we use Clifford (geometric) algebra Cl(3,0) to verify whether electromagnetic energy-momentum density is still conserved for oblique superposition of two elliptically polarized plane waves with the same frequency. We show that energy-momentum conservation is valid at any time only for the superposition of two counter-propagating elliptically polarized plane waves. We show that the time-average energy-momentum of the superposition of two circularly polarized waves with opposite handedness is conserved regardless of the propagation directions of the waves. Finally, we show that the resulting momentum density of the superposed waves generally has a vector component perpendicular to the momentum densities of the individual waves.
Cross-Layer Design for Robust and Scalable Video Transmission in Dynamic Wireless Environment
2011-02-01
[Only keyword-in-context fragments of this report are indexed.] The fragments reference variable-code-rate convolutional codes and prioritized rate-compatible punctured convolutional (RCPC) codes; cite "New rate-compatible punctured convolutional codes for Viterbi decoding," IEEE Trans. Communications, vol. 42, no. 12, pp. 3073-3079, Dec. ...; and include glossary entries for QoS (quality of service), RCPC (rate-compatible and punctured convolutional codes), and SNR (signal-to-noise ratio).
A Video Transmission System for Severely Degraded Channels
2006-07-01
[Only keyword-in-context fragments of this report are indexed.] The fragments describe protecting a separated SPIHT bitstream with rate-compatible punctured convolutional (RCPC) codes; cite J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Transactions on ...; and note that Farvardin [160] used rate-compatible convolutional codes, observing that for some transmission rates one of their EEP schemes ...
There is no MacWilliams identity for convolutional codes [transmission gain comparison]
NASA Technical Reports Server (NTRS)
Shearer, J. B.; Mceliece, R. J.
1977-01-01
An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network
Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di
2018-01-01
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods. PMID:29509666
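The competition mechanism lends itself to a compact sketch. The PyTorch module below is one plausible reading of "competition among multi-scale convolutional filters" (an element-wise maxout across parallel kernel sizes); it is an illustration, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class MultiScaleCompete(nn.Module):
    """Parallel convolutions at several kernel sizes; per pixel and channel,
    only the maximum response survives (a maxout across scales)."""

    def __init__(self, in_ch, out_ch, scales=(3, 5, 7)):
        super().__init__()
        # padding k//2 keeps every branch spatially aligned
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in scales
        )

    def forward(self, x):
        responses = torch.stack([branch(x) for branch in self.branches])
        return responses.max(dim=0).values  # competitive selection

y = MultiScaleCompete(1, 16)(torch.randn(1, 1, 32, 32))
print(y.shape)  # torch.Size([1, 16, 32, 32])
```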
Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani
2018-02-01
The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is an estimation of the effects of the tandem and ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. In this study, the superposition method is used for obtaining the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A and B, the bladder, and the rectum were compared with the results of superposition. The exact dwell positions and times of the source and the positions of the dosimetry points were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, which adds the dose of each source obtained by MC simulation, can estimate the dose to points A and B, the bladder, and the rectum with good accuracy.
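The superposition estimate itself is a weighted sum. A minimal sketch, with hypothetical per-dwell-position dose rates and dwell times, of the dose at a single dosimetry point:

```python
import numpy as np

def superposition_dose(dose_rates, dwell_times):
    """Superposition estimate at one point (e.g., point A): the sum over
    dwell positions of (MC dose rate at the point) x (dwell time)."""
    return float(np.dot(dose_rates, dwell_times))

rates = np.array([2.1e-3, 3.4e-3, 1.7e-3])  # Gy/s at the point, hypothetical
times = np.array([12.0, 8.5, 10.0])         # s per dwell position, hypothetical
print(f"{superposition_dose(rates, times):.4f} Gy")
```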
Deep architecture neural network-based real-time image processing for image-guided radiotherapy.
Mori, Shinichiro
2017-08-01
To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth images were generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained to keep the quality of the output image produced from the unprocessed input image close to that of the ground-truth image. For the image-denoising evaluation, noisy input images were used for the training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality; however, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Wright, Gavin; Harrold, Natalie; Bownes, Peter
2018-01-01
Aims: To compare the accuracies of the convolution and TMR10 Gamma Knife treatment planning algorithms, and to assess the impact upon clinical practice of implementing convolution-based treatment planning. Methods: Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50% and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing novel comparison of true dosimetric parameters rather than total beam-on time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. Results: Both algorithms matched point dose measurements within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (-1.1% vs 4.0%), with no discernible differences in relative dose distribution accuracy. In our study, convolution-calculated plans yielded D99% values 6.4% (95% CI: 5.5%-7.3%, p<0.001) lower than shot-matched TMR10 plans. For gamma passing criteria of 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. Conclusions: Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; therefore, its implementation may require a re-evaluation of prescription doses. PMID:29657896
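Gamma analysis of the kind used here can be illustrated in 1D. The sketch below implements the standard global gamma index (minimum combined dose-difference and distance-to-agreement metric) on hypothetical profiles; the study's own analysis was over full 3D dose distributions:

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.01, dta=1.0):
    """Global 1D gamma index, e.g. 1%/1 mm -> dd=0.01, dta=1.0 (mm)."""
    d_crit = dd * d_ref.max()  # global dose-difference criterion
    gammas = np.empty_like(d_ref)
    for i, (x0, d0) in enumerate(zip(x_ref, d_ref)):
        g2 = ((x_eval - x0) / dta) ** 2 + ((d_eval - d0) / d_crit) ** 2
        gammas[i] = np.sqrt(g2.min())  # best agreement over all eval points
    return gammas

x = np.linspace(0.0, 50.0, 501)            # mm
ref = np.exp(-(((x - 25.0) / 10.0) ** 2))  # hypothetical dose profile
ev = np.exp(-(((x - 25.4) / 10.0) ** 2))   # slightly shifted copy
g = gamma_1d(x, ref, x, ev)
print(f"pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")
```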
NASA Astrophysics Data System (ADS)
Guo, Minghuan; Wang, Zhifeng; Sun, Feihu
2016-05-01
The optical efficiencies of a solar trough concentrator are important to the overall thermal performance of the solar collector, and the outer surface of the tube absorber is a key interface of energy flux. It is therefore necessary to simulate and analyze the concentrated solar flux density distributions on the tube absorber of a parabolic trough solar collector for various sun beam incident angles, with the main optical errors considered. Since solar trough concentrators are linear focusing, it is of much interest to investigate the solar flux density distribution on the cross-section profile of the tube absorber, rather than the flux density distribution along the focal line direction. Although a few integral approaches based on the "solar cone" concept were developed to compute the concentrated flux density for some simple trough concentrator geometries, all those integral approaches needed special integration routines; meanwhile, the optical parameters and geometrical properties of the collectors could not be changed conveniently. Flexible Monte Carlo ray trace (MCRT) methods are widely used to simulate more accurate concentrated flux density distributions for compound parabolic solar trough concentrators, but they are generally quite time-consuming. In this paper, we first introduce a new backward ray tracing (BRT) method combined with a lumped effective solar cone, to simulate the cross-section flux density on the region of interest of the tube absorber. For BRT, bundles of rays are launched at absorber-surface points of interest, go directly through the glass cover of the absorber, strike the uniformly sampled mirror segment centers in the closely related surface region of the parabolic reflector, and then point into the effective solar cone around the incident sun beam direction after the virtual backward reflection. All the optical errors are convolved into the effective solar cone. The brightness distribution of the effective solar cone is assumed to be of circular Gaussian type. A Euro Trough 150 parabolic trough solar collector is then used as an example to apply this BRT method. Euro Trough 150 is composed of RP3 mirror facets, with a focal length of 1.71 m, an aperture width of 5.77 m, and an outer tube diameter of 0.07 m. To verify the simulated flux density distributions, we also establish a modified MCRT method, in which random rays with weighted energy elements are launched in the closely related rectangular region in the aperture plane of the parabolic concentrator and the optical errors are statistically modeled during the forward ray tracing process. Given the same concentrator geometric parameters and optical error values, the simulated results from these two ray tracing methods are in good agreement. The two highlights of this paper are the new optical simulation method, BRT, and the determination in advance of the closely related mirror-surface region for BRT and the closely related aperture region for MCRT, to effectively simulate the solar flux distribution on the absorber surface of a parabolic trough collector.
2011-05-01
[Only keyword-in-context fragments of this report are indexed.] The fragments reference variable-rate convolutional codes and prioritized rate-compatible punctured convolutional (RCPC) codes, with glossary entries for QoS, RCPC, SNR, and SSIM. The RCPC codes achieve unequal error protection (UEP) by puncturing off different amounts of coded bits of the parent code.
Convolution Operation of Optical Information via Quantum Storage
NASA Astrophysics Data System (ADS)
Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan
2017-06-01
We proposed a novel method to achieve optical convolution of two input images via quantum storage based on electromagnetically induced transparency (EIT) effect. By placing an EIT media in the confocal Fourier plane of the 4f-imaging system, the optical convolution of the two input images can be achieved in the image plane.
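The 4f geometry exploits the convolution theorem: multiplying the two fields in the shared Fourier plane yields their convolution in the image plane. A numerical analog with FFTs (circular convolution, random stand-in images):

```python
import numpy as np

a = np.random.rand(64, 64)  # input image 1
b = np.random.rand(64, 64)  # input image 2, playing the role of the kernel

# product in Fourier space == circular convolution in image space
conv = np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)).real

# spot-check one output sample against the direct convolution sum
i = j = 5
direct = sum(a[m, n] * b[(i - m) % 64, (j - n) % 64]
             for m in range(64) for n in range(64))
assert np.isclose(conv[i, j], direct)
```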
NASA Astrophysics Data System (ADS)
Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
We address an estimation method of isometric muscle tension of fingers, as fundamental research for a neural signal-based prosthesis of fingers. We utilize needle electromyogram (EMG) signals, which carry approximately equivalent information to peripheral neural signals. The estimation algorithm comprises two convolution operations. The first convolution is between a normal distribution and a spike array detected in the needle EMG signals; it estimates the probability density of spike-invoking times in the muscle. In this convolution, we hypothesize that each motor unit in a muscle fires spikes independently according to the same probability density function. The second convolution is between the result of the first convolution and the isometric twitch, viz., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed good correlation between the estimated and actual muscle tension, with correlation coefficients >0.9 in 59% and >0.8 in 89% of all trials.
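A minimal sketch of the two-stage convolution estimate; the sampling rate, Gaussian width, and twitch shape (an alpha function standing in for the measured isometric twitch) are all assumptions:

```python
import numpy as np

fs = 1000.0                     # sampling rate, Hz (assumed)
t = np.arange(0.0, 1.0, 1 / fs)

spikes = np.zeros_like(t)       # spike array detected in the needle EMG
spikes[np.random.randint(0, t.size, 40)] = 1.0

# 1st convolution: normal distribution -> density of spike-invoking times
tg = np.arange(-0.05, 0.05, 1 / fs)
gauss = np.exp(-0.5 * (tg / 0.01) ** 2)
gauss /= gauss.sum()
density = np.convolve(spikes, gauss, mode="same")

# 2nd convolution: isometric twitch = impulse response of a motor unit
tt = np.arange(0.0, 0.3, 1 / fs)
twitch = (tt / 0.05) * np.exp(1.0 - tt / 0.05)    # assumed 50 ms time-to-peak
tension = np.convolve(density, twitch)[: t.size]  # estimated muscle tension
```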
High Performance Implementation of 3D Convolutional Neural Networks on a GPU.
Lan, Qiang; Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie
2017-01-01
Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
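The 1D building block of the WMFA is easy to state concretely. This sketch implements the classic F(2,3) transform (two outputs of a 3-tap filter from four inputs in 4 multiplications instead of 6); the 2D/3D tiles used in such implementations are nestings of the same idea:

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd minimal filtering F(2,3): two sliding correlation outputs
    of the 3-tap filter g over the 4-sample input d, using 4 multiplies."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, -1.0])
assert np.allclose(winograd_f23(d, g), np.correlate(d, g, mode="valid"))
```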
Entanglement and quantum superposition induced by a single photon
NASA Astrophysics Data System (ADS)
Lü, Xin-You; Zhu, Gui-Lei; Zheng, Li-Li; Wu, Ying
2018-03-01
We predict the occurrence of single-photon-induced entanglement and quantum superposition in a hybrid quantum model, introducing an optomechanical coupling into the Rabi model. Originally, it comes from the photon-dependent quantum property of the ground state featured by the proposed hybrid model. It is associated with a single-photon-induced quantum phase transition, and is immune to the A2 term of the spin-field interaction. Moreover, the obtained quantum superposition state is actually a squeezed cat state, which can significantly enhance precision in quantum metrology. This work offers an approach to manipulate entanglement and quantum superposition with a single photon, which might have potential applications in the engineering of new single-photon quantum devices, and also fundamentally broaden the regime of cavity QED.
Convoluted nozzle design for the RL10 derivative 2B engine
NASA Technical Reports Server (NTRS)
1985-01-01
The convoluted nozzle is a conventional refractory metal nozzle extension that is formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas-driven actuators. For spacecraft applications the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two-position nozzle were studied, including a hydrogen dump-cooled metal nozzle and radiation-cooled nozzles of refractory metal and carbon/carbon composite construction, respectively.
Sim, K S; Teh, V; Tey, Y C; Kho, T K
2016-11-01
This paper introduces a new technique to improve Scanning Electron Microscope (SEM) image quality, which we name sub-blocking multiple peak histogram equalization (SUB-B-MPHE) with a convolution operator. Using this new proposed technique, the modified MPHE performs better than the original MPHE. In addition, the sub-blocking method incorporates a convolution operator that helps remove the blocking effect in SEM images after applying the new technique. Hence, by properly distributing suitable pixel values over the whole image, the convolution operator effectively removes the blocking effect. Overall, SUB-B-MPHE with convolution outperforms the other methods. SCANNING 38:492-501, 2016. © 2015 Wiley Periodicals, Inc.
Granato, Gregory E.; Smith, Kirk P.
1999-01-01
Discrete or composite samples of highway runoff may not adequately represent in-storm water-quality fluctuations because continuous records of water stage, specific conductance, pH, and temperature of the runoff indicate that these properties fluctuate substantially during a storm. Continuous records of water-quality properties can be used to maximize the information obtained about the stormwater runoff system being studied and can provide the context needed to interpret analyses of water samples. Concentrations of the road-salt constituents calcium, sodium, and chloride in highway runoff were estimated from theoretical and empirical relations between specific conductance and the concentrations of these ions. These relations were examined using the analysis of 233 highway-runoff samples collected from August 1988 through March 1995 at four highway-drainage monitoring stations along State Route 25 in southeastern Massachusetts. Theoretically, the specific conductance of a water sample is the sum of the individual conductances attributed to each ionic species in solution (the product of the concentration of each ion in milliequivalents per liter (meq/L) multiplied by the equivalent ionic conductance at infinite dilution), thereby establishing the principle of superposition. Superposition provides an estimate of actual specific conductance that is within measurement error throughout the conductance range of many natural waters, with errors of less than ±5 percent below 1,000 microsiemens per centimeter (µS/cm) and ±10 percent between 1,000 and 4,000 µS/cm if all major ionic constituents are accounted for. A semi-empirical method (adjusted superposition) was used to adjust for concentration effects (superposition-method prediction errors at high and low concentrations) and to relate measured specific conductance to that calculated using superposition. The adjusted superposition method, which was developed to interpret the State Route 25 highway-runoff records, accounts for contributions of constituents other than calcium, sodium, and chloride in dilute waters. The adjusted superposition method also accounts for the attenuation of each constituent's contribution to conductance as ionic strength increases. Use of the adjusted superposition method generally reduced predictive error to within measurement error throughout the range of specific conductance (from 37 to 51,500 µS/cm) in the highway-runoff samples. The effects of pH, temperature, and organic constituents on the relation between concentrations of dissolved constituents and measured specific conductance were examined, but these properties did not substantially affect interpretation of the Route 25 data set. Predictive abilities of the adjusted superposition method were similar to results obtained by standard regression techniques, but the adjusted superposition method has several advantages. Adjusted superposition can be applied using available published data about the constituents in precipitation, highway runoff, and the deicing chemicals applied to a highway. This semi-empirical method can be used as a predictive and diagnostic tool before a substantial number of samples are collected, whereas the power of the regression method rests upon a large number of water-quality analyses that may be affected by a bias in the data.
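The superposition principle above is a one-line computation. A minimal sketch, using textbook equivalent ionic conductances at infinite dilution (25 °C) as assumed constants and a hypothetical sample:

```python
# equivalent ionic conductances at infinite dilution, S.cm2/eq at 25 C
# (textbook values, treated here as assumptions)
LAMBDA = {"Na": 50.1, "Ca": 59.5, "Cl": 76.3}

def superposition_sc(concs_meq_per_L):
    """SC (uS/cm) ~ sum over ions of c_i (meq/L) * lambda_i (S.cm2/eq)."""
    return sum(c * LAMBDA[ion] for ion, c in concs_meq_per_L.items())

sample = {"Na": 2.0, "Ca": 1.5, "Cl": 3.2}  # hypothetical runoff sample
print(f"{superposition_sc(sample):.0f} uS/cm")
```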
Commissioning and validation of COMPASS system for VMAT patient specific quality assurance
NASA Astrophysics Data System (ADS)
Pimthong, J.; Kakanaporn, C.; Tuntipumiamorn, L.; Laojunun, P.; Iampongpaiboon, P.
2016-03-01
Pre-treatment patient-specific quality assurance (QA) of advanced treatment techniques such as volumetric modulated arc therapy (VMAT) is one of the important QA procedures in radiotherapy, and a fast and reliable dosimetric device is required. The objective of this study is to commission and validate the performance of the COMPASS system for dose verification of the VMAT technique. The COMPASS system is composed of an array of ionization detectors (MatriXX) mounted to the gantry using a custom holder, and software for the analysis and visualization of QA results. We validated the COMPASS software for basic and advanced clinical applications. For the basic clinical study, simple open fields of various field sizes were validated in a homogeneous phantom. For the advanced clinical application, fifteen prostate and fifteen nasopharyngeal cancer VMAT plans were studied. The treatment plans were measured by the MatriXX. The doses and dose-volume histograms (DVHs) reconstructed from the fluence measurements were compared to the TPS-calculated plans. In addition, the doses and DVHs computed using the collapsed cone convolution (CCC) algorithm were compared with Eclipse TPS plans calculated using the Analytical Anisotropic Algorithm (AAA), according to the dose specification of ICRU 83 for the PTV.
Scalable Video Transmission Over Multi-Rate Multiple Access Channels
2007-06-01
[Only keyword-in-context fragments of this report are indexed.] The source is encoded using the MPEG-4 video codec, and the source-encoded bitstream is then channel encoded with rate-compatible punctured convolutional (RCPC) codes. Citation fragments include J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE ..., and Clark and J. M. Geist, "Punctured convolutional codes of rate (n-1)/n and simplified maximum likelihood decoding," IEEE Transactions on ...
Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization
2009-01-01
[Only keyword-in-context fragments of this report are indexed.] Rate-compatible punctured convolutional (RCPC) codes are used for channel coding: the coding rate for H.264/AVC video compression is determined, and at the data link layer RCPC channel coding is applied. Citation fragments include "... vol. 44, pp. 2943-2959, November 1998" and [22] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE ...
The general theory of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Stanley, R. P.
1993-01-01
This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.
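For readers new to the subject, the mechanics of convolutional encoding fit in a few lines. The sketch below encodes with the textbook rate-1/2, constraint-length-3 code (generators 7 and 5 in octal), chosen for familiarity rather than taken from the article:

```python
def conv_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 feedforward convolutional encoder; the encoder state is the
    last two input bits, and each generator taps (current bit, state)."""
    state = [0, 0]
    out = []
    for b in bits:
        window = [b] + state
        for g in gens:  # one output bit per generator polynomial
            out.append(sum(w & c for w, c in zip(window, g)) % 2)
        state = [b] + state[:-1]
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```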
Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sales, J. S.; Silva, L. F. da; Almeida, N. G. de
2011-03-15
We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.
GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.
Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd
2018-01-01
In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
Coherent superposition of propagation-invariant laser beams
NASA Astrophysics Data System (ADS)
Soskind, R.; Soskind, M.; Soskind, Y. G.
2012-10-01
The coherent superposition of propagation-invariant laser beams represents an important beam-shaping technique, and results in new beam shapes which retain the unique property of propagation invariance. Propagation-invariant laser beam shapes depend on the order of the propagating beam, and include Hermite-Gaussian and Laguerre-Gaussian beams, as well as the recently introduced Ince-Gaussian beams which additionally depend on the beam ellipticity parameter. While the superposition of Hermite-Gaussian and Laguerre-Gaussian beams has been discussed in the past, the coherent superposition of Ince-Gaussian laser beams has not received significant attention in the literature. In this paper, we present the formation of propagation-invariant laser beams based on the coherent superposition of Hermite-Gaussian, Laguerre-Gaussian, and Ince-Gaussian beams of different orders. We also show the resulting field distributions of the superimposed Ince-Gaussian laser beams as a function of the ellipticity parameter. By changing the beam ellipticity parameter, we compare the various shapes of the superimposed propagation-invariant laser beams transitioning from Laguerre-Gaussian beams at one ellipticity extreme to Hermite-Gaussian beams at the other extreme.
Bäcklund transformations for the Boussinesq equation and merging solitons
NASA Astrophysics Data System (ADS)
Rasin, Alexander G.; Schiff, Jeremy
2017-08-01
The Bäcklund transformation (BT) for the ‘good’ Boussinesq equation and its superposition principles are presented and applied. Unlike other standard integrable equations, the Boussinesq equation does not have a strictly algebraic superposition principle for 2 BTs, but it does for 3. We present this and discuss associated lattice systems. Applying the BT to the trivial solution generates both standard solitons and what we call ‘merging solitons’—solutions in which two solitary waves (with related speeds) merge into a single one. We use the superposition principles to generate a variety of interesting solutions, including superpositions of a merging soliton with 1 or 2 regular solitons, and solutions that develop a singularity in finite time which then disappears at a later finite time. We prove a Wronskian formula for the solutions obtained by applying a general sequence of BTs on the trivial solution. Finally, we obtain the standard conserved quantities of the Boussinesq equation from the BT, and show how the hierarchy of local symmetries follows in a simple manner from the superposition principle for 3 BTs.
A modified homotopy perturbation method and the axial secular frequencies of a non-linear ion trap.
Doroudi, Alireza
2012-01-01
In this paper, a modified version of the homotopy perturbation method, which has been applied to non-linear oscillations by V. Marinca, is used for calculation of the axial secular frequencies of a non-linear ion trap with hexapole and octopole superpositions. The axial equation of ion motion in a rapidly oscillating field of an ion trap can be transformed to a Duffing-like equation. With only octopole superposition the resulting non-linear equation is symmetric; however, in the presence of hexapole and octopole superpositions, it is asymmetric. This modified homotopy perturbation method is used for solving the resulting non-linear equations. As a result, the ion secular frequencies as a function of the non-linear field parameters are obtained. The calculated secular frequencies are compared with the results of the homotopy perturbation method and the exact results. With only hexapole superposition, the results of this paper and the homotopy perturbation method are the same, and with hexapole and octopole superpositions, the results of this paper are much closer to the exact results than those of the homotopy perturbation method.
NASA Astrophysics Data System (ADS)
Kirchbach, M.; Compean, C. B.
2017-04-01
In the article under discussion, the analysis of the spectra of the unflavored mesons led us to some intriguing insights into the possible geometry of space-time outside the causal Minkowski light cone and into the nature of strong interactions. In applying the potential theory concept of geometrization of interactions, we showed that the meson masses are best described by a confining potential composed of the centrifugal barrier on the three-dimensional spherical space, S3, and of a charge-dipole potential constructed from the Green function to the S3 Laplacian. The dipole potential emerged in view of the fact that S3 does not support single charges without violation of the Gauss theorem and the superposition principle, thus providing a natural stage for the description of the general phenomenon of confined charge-neutral systems. However, in the original article we did not relate the charge-dipoles on S3 to the color-neutral mesons, and did not express the magnitude of the confining dipole potential in terms of the strong coupling αS and the number of colors, Nc, the subject of the addendum. To the extent that S3 can be thought of as the unique closed space-like geodesic of a four-dimensional de Sitter space-time, dS4, we hypothesized the space-like region outside the causal Einsteinian light cone (it describes virtual processes, among them interactions) as the (1+4)-dimensional subspace of the conformal (2+4) space-time, foliated with dS4 hyperboloids, and in this way assumed relevance of dS4 special relativity for strong interaction processes. The potential designed in this way predicted meson spectra with conformal degeneracy patterns, in accord with the experimental observations. We now extract the αs values in the infrared from data on meson masses. The results obtained are compatible with the αs estimates provided by other approaches.
Rose, D. V.; Madrid, E. A.; Welch, D. R.; ...
2015-03-04
Numerical simulations of a vacuum post-hole convolute driven by magnetically insulated vacuum transmission lines (MITLs) are used to study current losses due to charged particle emission from the MITL-convolute-system electrodes. This work builds on the results of a previous study [E.A. Madrid et al. Phys. Rev. ST Accel. Beams 16, 120401 (2013)] and adds realistic power pulses, Ohmic heating of anode surfaces, and a model for the formation and evolution of cathode plasmas. The simulations suggest that modestly larger anode-cathode gaps in the MITLs upstream of the convolute result in significantly less current loss. In addition, longer pulse durations lead to somewhat greater current loss due to cathode-plasma expansion. These results can be applied to the design of future MITL-convolute systems for high-current pulsed-power systems.
Classification of urine sediment based on convolution neural network
NASA Astrophysics Data System (ADS)
Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian
2018-04-01
By designing a new convolutional neural network framework, this paper removes the constraints of the original framework, which requires large training sets and samples of the same size. The input images are translated and cropped to generate sub-images of equal size. Dropout is then applied to the generated sub-images, increasing the diversity of the samples and preventing overfitting. Proper subsets of the sub-image set are selected at random such that every subset has the same number of elements and no two subsets are identical. These subsets are used as input layers for the convolutional neural network. Through the convolution layers, pooling, the fully connected layer, and the output layer, we obtain the classification loss rates of the test and training sets. In an experiment classifying red blood cells, white blood cells, and calcium oxalate crystals, the classification accuracy was 97% or higher.
NASA Astrophysics Data System (ADS)
Chang, Li-Na; Luo, Shun-Long; Sun, Yuan
2017-11-01
The principle of superposition is universal and lies at the heart of quantum theory. Although superposition has occupied a central and pivotal place ever since the inception of quantum mechanics a century ago, rigorous and systematic studies of the quantification issue have attracted significant interest only in recent years, and many related problems remain to be investigated. In this work we introduce a figure of merit which quantifies superposition from an intuitive and direct perspective, investigate its fundamental properties, connect it to some coherence measures, illustrate it through several examples, and apply it to analyze wave-particle duality. Supported by Science Challenge Project under Grant No. TZ2016002, Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing, Key Laboratory of Random Complex Structures and Data Science, Chinese Academy of Sciences, Grant under No. 2008DP173182
Linear diffusion-wave channel routing using a discrete Hayami convolution method
Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey Lapin
2014-01-01
The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computation demand in solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces...
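A sketch of the discrete-convolution idea: sample a routing kernel at the time step and convolve it with the inflow hydrograph. The kernel below is the Hayami impulse response of the linear diffusion-wave equation as commonly written, h(t) = x/(2t*sqrt(pi*D*t)) * exp(-(x - c*t)^2/(4*D*t)); both the kernel form and all parameter values should be treated as assumptions for illustration:

```python
import numpy as np

def hayami_kernel(x, c, D, dt, n):
    """Discrete approximation of the diffusion-wave impulse response at
    distance x, sampled every dt for n steps and normalized to unit volume."""
    t = dt * np.arange(1, n + 1)
    h = x / (2 * t * np.sqrt(np.pi * D * t)) * np.exp(-((x - c * t) ** 2) / (4 * D * t))
    return h / h.sum()

dt = 600.0  # s; all parameter values below are hypothetical
h = hayami_kernel(x=5000.0, c=1.0, D=500.0, dt=dt, n=200)

inflow = np.zeros(300)
inflow[5:15] = 20.0                              # m^3/s inflow pulse
outflow = np.convolve(inflow, h)[: inflow.size]  # routed hydrograph
```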
NASA Technical Reports Server (NTRS)
Reichelt, Mark
1993-01-01
In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
A Geometric Construction of Cyclic Cocycles on Twisted Convolution Algebras
NASA Astrophysics Data System (ADS)
Angel, Eitan
2010-09-01
In this thesis we give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. In his seminal book, Connes constructs a map from the equivariant cohomology of a manifold carrying the action of a discrete group into the periodic cyclic cohomology of the associated convolution algebra. Furthermore, for proper étale groupoids, J.-L. Tu and P. Xu provide a map between the periodic cyclic cohomology of a gerbe twisted convolution algebra and twisted cohomology groups. Our focus will be the convolution algebra with a product defined by a gerbe over a discrete translation groupoid. When the action is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial notions related to ideas of J. Dupont to construct a simplicial form representing the Dixmier-Douady class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial Dixmier-Douady form to the mixed bicomplex of certain matrix algebras. Finally, we define a morphism from this complex to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.
Finite-Length Line Source Superposition Model (FLLSSM)
NASA Astrophysics Data System (ADS)
1980-03-01
A linearized thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high-level waste or spent fuel assemblies were represented as finite-length line sources in a continuous medium. The combined effects of multiple canisters in a representative storage pattern were established at selected points of interest by superposition of the temperature rises calculated for each canister. The methodology is outlined, and the computer code FLLSSM, which performs the required numerical integrations and superposition operations, is described.
Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houshmand, Monireh; Hosseini-Khayat, Saied
2011-02-15
Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a ''pearl-necklace'' encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
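The final step, reading the minimal memory off the heaviest noncommutative path, is ordinary longest-path computation on a weighted DAG. A sketch with a small hypothetical graph:

```python
from functools import lru_cache

# hypothetical DAG of gate strings; edge weights model noncommutativity costs
edges = {"a": [("b", 2), ("c", 1)], "b": [("d", 3)], "c": [("d", 1)], "d": []}

@lru_cache(maxsize=None)
def longest_from(node):
    """Weight of the heaviest path starting at node (graph must be acyclic)."""
    return max((w + longest_from(nxt) for nxt, w in edges[node]), default=0)

print(max(longest_from(n) for n in edges))  # 5, along a -> b -> d
```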
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
[Only keyword-in-context fragments of this report are indexed.] The convolutional code is punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit errors, which are likely to be isolated and correctable by the convolutional decoder. A table relates data rate (Mbps), modulation, coding rate, and coded bits per subcarrier for the binary convolutional code. A shortened Reed-Solomon technique is employed first, the code being shortened depending upon the data ...
Using convolutional decoding to improve time delay and phase estimation in digital communications
Ormesher, Richard C [Albuquerque, NM; Mason, John J [Albuquerque, NM
2010-01-26
The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.
Towards quantum superposition of a levitated nanodiamond with a NV center
NASA Astrophysics Data System (ADS)
Li, Tongcang
2015-05-01
Creating large Schrödinger's cat states with massive objects is one of the most challenging goals in quantum mechanics. We have previously achieved an important step of this goal by cooling the center-of-mass motion of a levitated microsphere from room temperature to millikelvin temperatures with feedback cooling. To generate spatial quantum superposition states with an optical cavity, however, requires a very strong quadratic coupling that is difficult to achieve. We proposed to optically trap a nanodiamond with a nitrogen-vacancy (NV) center in vacuum, and generate large spatial superposition states using the NV spin-optomechanical coupling in a strong magnetic gradient field. The large spatial superposition states can be used to study objective collapse theories of quantum mechanics. We have optically trapped nanodiamonds in air and are working towards this goal.
Approaches to reducing photon dose calculation errors near metal implants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Jessie Y.; Followill, David S.; Howell, Reb
Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact reduction methods investigated, the authors found that O-MAR was the most consistent method, resulting in either improved dose calculation accuracy (dental case) or little impact on calculation accuracy (spine case). GSI was unsuccessful at reducing the severe artifacts caused by dental fillings and had very little impact on calculation accuracy. GSI with MARS on the other hand gave mixed results, sometimes introducing metal distortion and increasing calculation errors (titanium rectangular implant and titanium spinal hardware) but other times very successfully reducing artifacts (Cerrobend rectangular implant and dental fillings). Conclusions: Though successful at improving dose calculation accuracy upstream of metal implants, metal kernels were not found to substantially improve accuracy for clinical cases. Of the commercial artifact reduction methods investigated, O-MAR was found to be the most consistent candidate for all-purpose CT simulation imaging. The MARS algorithm for GSI should be used with caution for titanium implants, larger implants, and implants located near heterogeneities as it can distort the size and shape of implants and increase calculation errors.
Single image super-resolution based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia
2018-03-01
We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network which takes the LR image as input and outputs the HR image. Our network uses five convolution layers, whose kernel sizes include 5×5, 3×3 and 1×1. In our proposed network, we use residual learning and combine different sizes of convolution kernels at the same layer. The experimental results show that our proposed method performs better than existing methods in reconstruction quality indices and human visual assessment on benchmark images.
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
Error-trellis syndrome decoding techniques for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1985-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
Molecular graph convolutions: moving beyond fingerprints
NASA Astrophysics Data System (ADS)
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
NASA Astrophysics Data System (ADS)
McCallum, James L.; Engdahl, Nicholas B.; Ginn, Timothy R.; Cook, Peter. G.
2014-03-01
Residence time distributions (RTDs) have been used extensively for quantifying flow and transport in subsurface hydrology. In geochemical approaches, environmental tracer concentrations are used in conjunction with simple lumped parameter models (LPMs). Conversely, numerical simulation techniques require large amounts of parameterization, and the estimated RTDs are limited by the associated uncertainties. In this study, we apply a nonparametric deconvolution approach to estimate RTDs using environmental tracer concentrations. The model is based only on the assumption that flow is steady enough that the observed concentrations are well approximated by linear superposition of the input concentrations with the RTD; that is, the convolution integral holds. Even with large amounts of environmental tracer concentration data, the entire shape of an RTD remains highly nonunique. However, accurate estimates of mean ages and, in some cases, prediction of young portions of the RTD may be possible. The most useful type of data was found to be a time series of tritium, owing to the sharp variations in atmospheric concentrations and its short half-life. Conversely, the use of CFC compounds with smoothly varying atmospheric concentrations was more prone to nonuniqueness. This work highlights the benefits and limitations of using environmental tracer data to estimate whole RTDs with either LPMs or through numerical simulation. However, the ability of the nonparametric approach developed here to correct for mixing biases in mean ages appears promising.
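The convolution integral this method rests on can be written out in a few lines of NumPy: the observed concentration is the atmospheric input history convolved with the RTD, attenuated by radioactive decay for tritium. The exponential RTD, 30-year mean age, and step-like bomb-peak input below are illustrative assumptions, not values from the study.

    import numpy as np

    dt = 1.0                                    # years
    tau = np.arange(0.0, 200.0, dt)             # transit times
    g = np.exp(-tau / 30.0) / 30.0              # exponential RTD, mean age 30 yr
    lam = np.log(2) / 12.32                     # tritium decay constant (12.32 yr half-life)

    t = np.arange(1950.0, 2015.0, dt)
    c_in = np.where((t > 1960) & (t < 1970), 100.0, 5.0)   # crude bomb-peak input (TU)

    # c_obs(t) = integral of c_in(t - tau) * exp(-lam * tau) * g(tau) d tau
    c_obs = np.array([
        np.sum(np.interp(ti - tau, t, c_in, left=0.0) * np.exp(-lam * tau) * g) * dt
        for ti in t
    ])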
Shen, Jiajian; Liu, Wei; Stoker, Joshua; Ding, Xiaoning; Anand, Aman; Hu, Yanle; Herman, Michael G; Bues, Martin
2016-12-01
To find an efficient method to configure the proton fluence for a commercial proton pencil beam scanning (PBS) treatment planning system (TPS). An in-water dose kernel was developed to mimic the dose kernel of the pencil beam convolution superposition algorithm, which is part of the commercial proton beam therapy planning software, eclipse™ (Varian Medical Systems, Palo Alto, CA). The field size factor (FSF) was calculated based on the spot profile reconstructed by the in-house dose kernel. The workflow of using FSFs to find the desirable proton fluence is presented. The in-house derived spot profile and FSF were validated by a direct comparison with those calculated by the eclipse TPS. The validation included 420 comparisons of the FSFs from 14 proton energies, various field sizes from 2 to 20 cm and various depths from 20% to 80% of proton range. The relative in-water lateral profiles between the in-house calculation and the eclipse TPS agree very well even at the level of 10^-4. The FSFs between the in-house calculation and the eclipse TPS also agree well. The maximum deviation is within 0.5%, and the standard deviation is less than 0.1%. The authors' method significantly reduced the time to find the desirable proton fluences of the clinical energies. The method is extensively validated and can be applied to any proton centers using PBS and the eclipse TPS.
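The field size factor concept can be sketched numerically: reconstruct the central-axis dose of a uniformly scanned field from a lateral kernel and normalize by a very large field. The double-Gaussian kernel and its parameters below are illustrative stand-ins, not the commissioned Eclipse model.

    import numpy as np

    def double_gaussian(r, s1=0.4, s2=1.5, w=0.95):
        """Illustrative in-water lateral spot kernel (cm), core plus halo."""
        g1 = np.exp(-r**2 / (2 * s1**2)) / (2 * np.pi * s1**2)
        g2 = np.exp(-r**2 / (2 * s2**2)) / (2 * np.pi * s2**2)
        return w * g1 + (1 - w) * g2

    def fsf(field_width_cm, grid=np.linspace(-15, 15, 601)):
        """Central-axis dose of a uniform square field relative to a very large field."""
        xx, yy = np.meshgrid(grid, grid)
        k = double_gaussian(np.hypot(xx, yy))
        half = field_width_cm / 2.0
        inside = (np.abs(xx) <= half) & (np.abs(yy) <= half)
        return k[inside].sum() / k.sum()

    print(fsf(2.0), fsf(10.0))   # small fields capture less of the low-dose halo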
NASA Astrophysics Data System (ADS)
Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.
2016-02-01
In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm^-1) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.
Sanchez-Parcerisa, D; Cortés-Giraldo, M A; Dolney, D; Kondrla, M; Fager, M; Carabe, A
2016-02-21
In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm^-1) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.
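A compact NumPy sketch of how per-beamlet dose and LET kernels combine on a calculation grid: the dose-averaged LET is a dose-weighted mean over beamlets. The array shapes and values are illustrative; the kernels in the work above come from analytical models benchmarked against Geant4.

    import numpy as np

    def dose_averaged_let(dose_kernels, let_kernels):
        """dose_kernels, let_kernels: arrays of shape (n_beamlets, n_voxels)."""
        dose_kernels = np.asarray(dose_kernels)
        let_kernels = np.asarray(let_kernels)
        total = dose_kernels.sum(axis=0)
        weighted = (dose_kernels * let_kernels).sum(axis=0)
        return np.where(total > 0, weighted / np.maximum(total, 1e-12), 0.0)

    d = np.array([[1.0, 2.0, 0.5], [0.5, 1.0, 2.0]])   # illustrative beamlet doses
    l = np.array([[1.2, 2.0, 6.0], [1.5, 3.0, 8.0]])   # illustrative beamlet LETs (keV/um)
    print(dose_averaged_let(d, l))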
Kinematics of Cone-In-Cone Growth, with Implications for Timing and Formation Mechanism
NASA Astrophysics Data System (ADS)
Hooker, J. N.; Cartwright, J. A.
2015-12-01
Cone-in-cone is an enigmatic structure. Similar to many fibrous calcite veins, cone-in-cone is generally formed of calcite and present in bedding-parallel vein-like accumulations within fine-grained rocks. Unlike most fibrous veins, cone-in-cone contains conical inclusions of host-rock material, creating nested, parallel cones throughout. A long-debated aspect of cone-in-cone structures is whether the calcite precipitated with its conical form (primary cone-in-cone), or whether the cones formed afterwards (secondary cone-in-cone). Trace dolomite within a calcite cone-in-cone structure from the Cretaceous of Jordan supports the primary hypothesis. The host sediment is a siliceous mud containing abundant rhombohedral dolomite grains. Dolomite rhombohedra are also distributed throughout the cone-in-cone. The rhombohedra within the cones are randomly oriented yet locally have dolomite overgrowths having boundaries that are aligned with calcite fibers. Evidence that dolomite co-precipitated with calcite, and did not replace calcite, includes (i) preferential downward extension of dolomite overgrowths, in the presumed growth-direction of the cone-in-cone, and (ii) planar, vertical borders between dolomite crystals and calcite fibers. Because dolomite overgrows host-sediment rhombohedra and forms fibers within the cones, it follows that the host-sediment was included within the growing cone-in-cone as the calcite precipitated, and not afterward. The host-sediment was not injected into the cone-in-cone along fractures, as the secondary-origin hypothesis suggests. This finding implies that cone-in-cone in general does not form over multiple stages, and thus has greater potential to preserve the chemical signature of its original precipitation. Because cone-in-cone likely forms before complete lithification of the host, and because the calcite displaces the host material against gravity, this chemical signature can preserve information about early overpressures in fine-grained sediments.
Meszlényi, Regina J.; Buza, Krisztian; Vidnyánszky, Zoltán
2017-01-01
Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility it follows that our proposed CCNN model can be easily adapted to a wide range of connectome-based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network. PMID:29089883
Face recognition: a convolutional neural-network approach.
Lawrence, S; Giles, C L; Tsoi, A C; Back, A D
1997-01-01
We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
NASA Astrophysics Data System (ADS)
Schanz, Martin; Ye, Wenjing; Xiao, Jinyou
2016-04-01
Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh than the convolution quadrature method to obtain the same level of accuracy. If fast methods like the fast multipole method are further used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
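The exponential window idea can be demonstrated on a simple convolution integral: damping both signals by exp(-sigma*t) shifts the FFT onto complex frequencies omega - i*sigma and suppresses circular wrap-around. The damping level and test signals below are illustrative choices.

    import numpy as np

    def convolve_exp_window(f, g, dt, suppression=1e6):
        """Evaluate the convolution integral of f and g via a damped FFT."""
        n = len(f)
        sigma = np.log(suppression) / (n * dt)   # wrap-around damped by 1/suppression
        damp = np.exp(-sigma * np.arange(n) * dt)
        y = np.fft.ifft(np.fft.fft(f * damp) * np.fft.fft(g * damp)).real * dt
        return y / damp                           # remove the window afterwards

    dt = 0.01
    t = np.arange(0.0, 4.0, dt)
    f, g = np.sin(2 * np.pi * t), np.exp(-t)
    y_win = convolve_exp_window(f, g, dt)
    y_dir = np.convolve(f, g)[: len(t)] * dt      # direct time-domain reference
    assert np.allclose(y_win, y_dir, atol=1e-4)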
ASIC-based architecture for the real-time computation of 2D convolution with large kernel size
NASA Astrophysics Data System (ADS)
Shao, Rui; Zhong, Sheng; Yan, Luxin
2015-12-01
Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. To improve the efficiency of on-chip storage resources and to reduce the required off-chip bandwidth, a data-reuse cache is constructed: multi-block SPRAM caches image stripes, and an on-chip ping-pong scheme takes full advantage of data reuse in the convolution calculation, yielding a new ASIC data-scheduling scheme and overall architecture. Experimental results show that the structure achieves real-time convolution with templates up to 40×32 in size, improves the utilization of on-chip memory bandwidth and on-chip memory resources, maximizes data throughput, and reduces the need for off-chip memory bandwidth.
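For reference, the operation such an architecture accelerates is plain 2-D convolution; a direct software baseline (useful as a golden model when checking hardware output) takes only a few lines.

    import numpy as np

    def conv2d_valid(image, kernel):
        """Direct 2-D convolution over the valid region (kernel flipped)."""
        kh, kw = kernel.shape
        ih, iw = image.shape
        out = np.zeros((ih - kh + 1, iw - kw + 1))
        flipped = kernel[::-1, ::-1]          # flip for true convolution
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(image[y:y + kh, x:x + kw] * flipped)
        return out

    out = conv2d_valid(np.random.rand(64, 64), np.random.rand(40, 32))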
2007-06-01
Table 2. Best (maximum free distance) rate r = 2/3 punctured convolutional code information weight structure (From: [12]); the free distance is determined from the Hamming distance between all pairs of non-zero paths.
A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE
NASA Technical Reports Server (NTRS)
Truong, T. K.
1994-01-01
This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
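As a numerical cross-check for what the program computes, 2-D cyclic convolution can also be evaluated with ordinary FFTs; the snippet below verifies the FFT route against the brute-force definition (it does not reproduce the polynomial-transform algorithm itself).

    import numpy as np

    def cyclic_conv2d(a, b):
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

    rng = np.random.default_rng(0)
    a, b = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
    n, m = a.shape
    direct = np.zeros_like(a)                 # brute-force cyclic convolution
    for i in range(n):
        for j in range(m):
            direct[i, j] = sum(a[k, l] * b[(i - k) % n, (j - l) % m]
                               for k in range(n) for l in range(m))
    assert np.allclose(cyclic_conv2d(a, b), direct)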
NASA Technical Reports Server (NTRS)
Asbury, Scott C.; Hunter, Craig A.
1999-01-01
An investigation was conducted in the model preparation area of the Langley 16-Foot Transonic Tunnel to determine the effects of convoluted divergent-flap contouring on the internal performance of a fixed-geometry, nonaxisymmetric, convergent-divergent exhaust nozzle. Testing was conducted at static conditions using a sub-scale nozzle model with one baseline and four convoluted configurations. All tests were conducted with no external flow at nozzle pressure ratios from 1.25 to approximately 9.50. Results indicate that baseline nozzle performance was dominated by unstable, shock-induced, boundary-layer separation at overexpanded conditions. Convoluted configurations were found to significantly reduce, and in some cases totally alleviate separation at overexpanded conditions. This result was attributed to the ability of convoluted contouring to energize and improve the condition of the nozzle boundary layer. Separation alleviation offers potential for installed nozzle aeropropulsive (thrust-minus-drag) performance benefits by reducing drag at forward flight speeds, even though this may reduce nozzle thrust ratio as much as 6.4% at off-design conditions. At on-design conditions, nozzle thrust ratio for the convoluted configurations ranged from 1% to 2.9% below the baseline configuration; this was a result of increased skin friction and oblique shock losses inside the nozzle.
Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L
2018-04-01
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
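In PyTorch terms, atrous convolution is the dilation argument of an ordinary convolution, and an ASPP-style head is a handful of parallel dilated branches. The channel counts and rates below are illustrative, not the published DeepLab configuration.

    import torch
    import torch.nn as nn

    # parallel 3x3 branches at different dilation rates; padding = rate keeps size
    aspp = nn.ModuleList([
        nn.Conv2d(256, 64, kernel_size=3, padding=r, dilation=r)
        for r in (1, 6, 12, 18)
    ])
    x = torch.randn(1, 256, 33, 33)           # a dense feature map
    out = torch.cat([branch(x) for branch in aspp], dim=1)
    print(out.shape)                          # torch.Size([1, 256, 33, 33])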
DOE Office of Scientific and Technical Information (OSTI.GOV)
Góźdź, A., E-mail: andrzej.gozdz@umcs.lublin.pl; Góźdź, M., E-mail: mgozdz@kft.umcs.lublin.pl
The theory of neutrino oscillations rests on the assumption that the interaction basis and the physical (mass) basis of neutrino states are different. Therefore a neutrino is produced in a certain well-defined superposition of three mass eigenstates, which propagate separately and may be detected as a different superposition. This is called flavor oscillation. It is, however, not clear why neutrinos behave this way, i.e., what is the underlying mechanism which leads to the production of a superposition of physical states in a single reaction. In this paper we argue that one of the reasons may be connected with the temporal structure of the process. In order to discuss the role of time in processes on the quantum level, we use a special formulation of quantum mechanics, which is based on the projection time evolution. We arrive at the conclusion that for short reaction times the formation of a superposition of states of similar masses is natural.
Transient Response of Shells of Revolution by Direct Integration and Modal Superposition Methods
NASA Technical Reports Server (NTRS)
Stephens, W. B.; Adelman, H. M.
1974-01-01
The results of an analytical effort to obtain and evaluate transient response data for a cylindrical and a conical shell by two different approaches, direct integration and modal superposition, are described. The inclusion of nonlinear terms is more important than the inclusion of secondary linear effects (transverse shear deformation and rotary inertia), although there are thin-shell structures where these secondary effects are important. The advantages of the direct integration approach are that geometric nonlinear and secondary effects are easy to include and high-frequency response may be calculated. In comparison to the modal superposition technique, the computer storage requirements are smaller. The advantages of the modal superposition approach are that the solution is independent of the previous time history and that, once the modal data are obtained, the response for repeated cases may be efficiently computed. Also, any admissible set of initial conditions can be applied.
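A small worked example of the modal superposition route for an undamped system M x'' + K x = f(t): solve the generalized eigenproblem once, integrate each uncoupled modal equation by a Duhamel convolution, and superpose. The two-DOF matrices and step load below are illustrative.

    import numpy as np
    from scipy.linalg import eigh

    M = np.diag([2.0, 1.0])
    K = np.array([[600.0, -200.0], [-200.0, 200.0]])
    w2, phi = eigh(K, M)                 # modes mass-normalized: phi.T @ M @ phi = I
    wn = np.sqrt(w2)

    dt, n = 0.001, 5000
    t = np.arange(n) * dt
    f = np.zeros((2, n)); f[1] = 100.0   # step load on the second DOF

    q = np.zeros((2, n))
    for i in range(2):                   # each mode: q'' + wn^2 q = phi_i . f(t)
        p = phi[:, i] @ f
        h = np.sin(wn[i] * t) / wn[i]    # unit impulse response
        q[i] = np.convolve(p, h)[:n] * dt
    x = phi @ q                          # superpose modal responses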
Experimental superposition of orders of quantum gates
Procopio, Lorenzo M.; Moqanaki, Amir; Araújo, Mateus; Costa, Fabio; Alonso Calafell, Irati; Dowd, Emma G.; Hamel, Deny R.; Rozema, Lee A.; Brukner, Časlav; Walther, Philip
2015-01-01
Quantum computers achieve a speed-up by placing quantum bits (qubits) in superpositions of different states. However, it has recently been appreciated that quantum mechanics also allows one to ‘superimpose different operations'. Furthermore, it has been shown that using a qubit to coherently control the gate order allows one to accomplish a task—determining if two gates commute or anti-commute—with fewer gate uses than any known quantum algorithm. Here we experimentally demonstrate this advantage, in a photonic context, using a second qubit to control the order in which two gates are applied to a first qubit. We create the required superposition of gate orders by using additional degrees of freedom of the photons encoding our qubits. The new resource we exploit can be interpreted as a superposition of causal orders, and could allow quantum algorithms to be implemented with an efficiency unlikely to be achieved on a fixed-gate-order quantum computer. PMID:26250107
Splash-cup plants accelerate raindrops to disperse seeds
Amador, Guillermo J.; Yamada, Yasukuni; McCurley, Matthew; Hu, David L.
2013-01-01
The conical flowers of splash-cup plants Chrysosplenium and Mazus catch raindrops opportunistically, exploiting the subsequent splash to disperse their seeds. In this combined experimental and theoretical study, we elucidate their mechanism for maximizing dispersal distance. We fabricate conical plant mimics using three-dimensional printing, and use high-speed video to visualize splash profiles and seed travel distance. Drop impacts that strike the cup off-centre achieve the largest dispersal distances of up to 1 m. Such distances are achieved because splash speeds are three to five times faster than incoming drop speeds, and so faster than the traditionally studied splashes occurring upon horizontal surfaces. This anomalous splash speed is because of the superposition of two components of momentum, one associated with a component of the drop's motion parallel to the splash-cup surface, and the other associated with film spreading induced by impact with the splash-cup. Our model incorporating these effects predicts the observed dispersal distance within 6–18% error. According to our experiments, the optimal cone angle for the splash-cup is 40°, a value consistent with the average of five species of splash-cup plants. This optimal angle arises from the competing effects of velocity amplification and projectile launching angle. PMID:23235266
Magnetic antenna excitation of whistler modes. III. Group and phase velocities of wave packets
NASA Astrophysics Data System (ADS)
Urrutia, J. M.; Stenzel, R. L.
2015-07-01
The properties of whistler modes excited by single and multiple magnetic loop antennas have been investigated in a large laboratory plasma. A single loop excites a wavepacket, but an array of loops across the ambient magnetic field B0 excites approximate plane whistler modes. The single loop data are measured. The array patterns are obtained by linear superposition of experimental data shifted in space and time, which is valid in a uniform plasma and magnetic field for small amplitude waves. Phasing the array changes the angle of wave propagation. The antennas are excited by an rf tone burst whose propagating envelope and oscillations yield group and phase velocities. A single loop antenna with dipole moment across B0 excites wave packets whose topology resembles m = 1 helicon modes, but without radial boundaries. The phase surfaces are conical with propagation characteristics of Gendrin modes. The cones form near the antenna with comparable parallel and perpendicular phase velocities. A physical model for the wave excitation is given. When a wave burst is applied to a phased antenna array, the wave front propagates both along the array and into the plasma forming a "whistler wing" at the front. These laboratory observations may be relevant for excitation and detection of whistler modes in space plasmas.
Focazio, M.J.; Speiran, G.K.
1993-01-01
The groundwater-flow system of the Virginia Coastal Plain consists of areally extensive and interconnected aquifers. Large, regionally coalescing cones of depression that are caused by large withdrawals of water are found in these aquifers. Local groundwater systems are affected by regional pumping, because of the interactions within the system of aquifers. Accordingly, these local systems are affected by regional groundwater flow and by spatial and temporal differences in withdrawals by various users. A geographic-information system was used to refine a regional groundwater-flow model around selected withdrawal centers. A method was developed in which drawdown maps that were simulated by the regional groundwater-flow model and the principle of superposition could be used to estimate drawdown at local sites. The method was applied to create drawdown maps in the Brightseat/Upper Potomac Aquifer for periods of 3, 6, 9, and 12 months for Chesapeake, Newport News, Norfolk, Portsmouth, Suffolk, and Virginia Beach, Virginia. Withdrawal rates were supplied by the individual localities and remained constant for each simulation period. This provides an efficient method by which individual local groundwater users can determine the amount of drawdown produced by their wells in a groundwater system that is a water source for multiple users and that is affected by regional-flow systems.
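The superposition step itself is simple: with the Theis solution, the drawdown at an observation point is the sum of each well's contribution. The transmissivity, storativity, rates, and geometry below are illustrative, not the Virginia Coastal Plain model values.

    import numpy as np
    from scipy.special import exp1

    T, S = 500.0, 1e-4                   # transmissivity (m^2/d), storativity
    wells = [((0.0, 0.0), 1000.0),       # (x, y) in m, pumping rate in m^3/d
             ((800.0, 0.0), 1500.0)]
    xo, yo, t = 400.0, 300.0, 90.0       # observation point and time (days)

    s = 0.0
    for (xw, yw), Q in wells:
        r = np.hypot(xo - xw, yo - yw)
        u = r**2 * S / (4 * T * t)
        s += Q / (4 * np.pi * T) * exp1(u)   # Theis well function W(u) = E1(u)
    print(f"superposed drawdown: {s:.2f} m")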
NASA Astrophysics Data System (ADS)
Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.
2016-01-01
The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for computing ion irradiation outcomes, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line-specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable for the evaluation of dose, RBE and LET distributions. The weight function for the beamlet superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has successfully been tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions, and the evaluation speed of analytical approaches. Benchmarking plans in simple geometries and clinical plans are shown to demonstrate the model's capabilities.
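The core superposition in such a model reduces to a weighted sum of precomputed beamlet kernels. Below, Gaussian lateral kernels stand in for the FLUKA-derived beamlets, and a uniform weight vector stands in for the phase-space-derived weights; all values are illustrative.

    import numpy as np

    x = np.linspace(-3.0, 3.0, 301)              # lateral axis, cm
    spots = np.arange(-2.0, 2.01, 0.5)           # scan spot centers
    weights = np.ones_like(spots)                # fluence weights (uniform field)
    sigma = 0.45                                 # beamlet lateral sigma, cm

    beamlets = np.exp(-(x[None, :] - spots[:, None])**2 / (2 * sigma**2))
    dose = weights @ beamlets                    # superposition over beamlets
    # the same weighted-superposition machinery applies to LET- and RBE-related maps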
A separable two-dimensional discrete Hartley transform
NASA Technical Reports Server (NTRS)
Watson, A. B.; Poirson, A.
1985-01-01
Bracewell has proposed the Discrete Hartley Transform (DHT) as a substitute for the Discrete Fourier Transform (DFT), particularly as a means of convolution. Here, it is shown that the most natural extension of the DHT to two dimensions fails to be separable in the two dimensions, and is therefore inefficient. An alternative separable form is considered, and the corresponding convolution theorem is derived. That the DHT is unlikely to provide faster convolution than the DFT is also discussed.
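A short NumPy check of the 1-D property the paper builds on: the DHT (kernel cas = cos + sin) turns circular convolution into a simple combination of transform products. The direct O(N^2) transform below is for clarity only.

    import numpy as np

    def dht(x):
        n = len(x)
        arg = 2 * np.pi * np.outer(np.arange(n), np.arange(n)) / n
        return (np.cos(arg) + np.sin(arg)) @ x   # cas kernel

    rng = np.random.default_rng(1)
    x, y = rng.standard_normal(16), rng.standard_normal(16)
    X, Y = dht(x), dht(y)
    Xr, Yr = np.roll(X[::-1], 1), np.roll(Y[::-1], 1)      # X[N-k], Y[N-k]
    Z = 0.5 * (X * Y - Xr * Yr + X * Yr + Xr * Y)          # DHT convolution theorem
    z = dht(Z) / 16                                         # the DHT is its own inverse up to 1/N
    direct = np.array([sum(x[m] * y[(i - m) % 16] for m in range(16))
                       for i in range(16)])
    assert np.allclose(z, direct)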
Iterative deep convolutional encoder-decoder network for medical image segmentation.
Jung Uk Kim; Hak Gu Kim; Yong Man Ro
2017-07-01
In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which enables precise localization of the regions of interest (ROIs), including complex shapes or detailed textures, in medical images in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent segmentation performance for various medical images. The effectiveness of the proposed method has been proved by comparison with other state-of-the-art medical image segmentation methods.
Reconfigurable Gabor Filter For Fingerprint Recognition Using FPGA Verilog
NASA Astrophysics Data System (ADS)
Rosshidi, H. T.; Hadi, A. R.
2009-06-01
This paper presents an implementation of Gabor filtering for fingerprint recognition using Verilog HDL. The work demonstrates the application of the Gabor filter technique to enhance the fingerprint image. The incoming signal, in the form of image pixels, is convolved with the Gabor filter to delineate the ridge and valley regions of the fingerprint. This is done with a real-time convolver implemented on a Field Programmable Gate Array (FPGA) to perform the convolution operation. The main characteristic of the proposed approach is the use of memory to store the incoming image pixels and the Gabor filter coefficients before the convolution takes place. The result is the signal convolved with the Gabor coefficients.
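The filter coefficients such a design stores in memory can be generated offline; a sketch of a Gabor kernel tuned to a ridge orientation, quantized to fixed point as it might be loaded into the FPGA, is shown below (all parameters illustrative).

    import numpy as np

    def gabor_kernel(size=11, theta=0.0, wavelength=8.0, sigma=4.0, gamma=1.0):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)    # rotate to ridge orientation
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return (np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
                * np.cos(2 * np.pi * xr / wavelength))

    k = gabor_kernel(theta=np.pi / 4)
    k_q15 = np.round(k * 2**15).astype(np.int16)   # fixed-point coefficients for memory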
Convolutional neural network for road extraction
NASA Astrophysics Data System (ADS)
Li, Junping; Ding, Yazhou; Feng, Fajie; Xiong, Baoyu; Cui, Weihong
2017-11-01
In this paper, a convolutional neural network with large input blocks and small output blocks was used to extract roads. To reflect the complex road characteristics in the study area, a deep convolutional neural network based on VGG19 was built for road extraction. Based on an analysis of the characteristics of different input and output block sizes and the resulting extraction quality, the votes of several deep convolutional neural networks were used as the final road prediction. The study image was a GF-2 panchromatic and multi-spectral fusion image of Yinchuan. The precision of road extraction was 91%. The experiments showed that model averaging can improve the accuracy to some extent. At the same time, this paper gives some advice on the choice of input and output block sizes.
Foltz, T M; Welsh, B M
1999-01-01
This paper uses the fact that the discrete Fourier transform diagonalizes a circulant matrix to provide an alternate derivation of the symmetric convolution-multiplication property for discrete trigonometric transforms. Derived in this manner, the symmetric convolution-multiplication property extends easily to multiple dimensions using the notion of block circulant matrices and generalizes to multidimensional asymmetric sequences. The symmetric convolution of multidimensional asymmetric sequences can then be accomplished by taking the product of the trigonometric transforms of the sequences and then applying an inverse trigonometric transform to the result. An example is given of how this theory can be used for applying a two-dimensional (2-D) finite impulse response (FIR) filter with nonlinear phase which models atmospheric turbulence.
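The fact the derivation starts from is easy to verify numerically: the DFT matrix diagonalizes any circulant matrix, with eigenvalues equal to the DFT of its first column.

    import numpy as np
    from scipy.linalg import circulant

    c = np.array([4.0, 1.0, 0.5, 0.25])
    C = circulant(c)                      # C[i, j] = c[(i - j) mod n]
    F = np.fft.fft(np.eye(len(c)))        # DFT matrix
    D = F @ C @ np.linalg.inv(F)          # similarity transform
    assert np.allclose(D, np.diag(np.fft.fft(c)), atol=1e-12)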
Molecular graph convolutions: moving beyond fingerprints
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-01-01
Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
A digital pixel cell for address event representation image convolution processing
NASA Astrophysics Data System (ADS)
Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate events according to their information levels. Neurons with more information (activity, derivative of activities, contrast, motion, edges, ...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, ... Also, there has been a proposal for realizing programmable kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital reference implementation against which to compare the mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable kernel image convolution processing.
SUPERPOSITION OF POLYTROPES IN THE INNER HELIOSHEATH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livadiotis, G., E-mail: glivadiotis@swri.edu
2016-03-15
This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubrovsky, V. G.; Topovsky, A. V.
New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, ..., N are constructed via the Zakharov-Manakov ∂-dressing method. Simple nonlinear superpositions are represented up to a constant by the sums of solutions u^(n) and calculated by ∂-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. It is remarkable that in the zero-energy limit simple nonlinear superpositions convert to linear ones in the form of sums of the special solutions u^(n). It is shown that the sums u = u^(k_1) + ... + u^(k_m), 1 ≤ k_1 < k_2 < ... < k_m ≤ N, of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons and also superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions also represent new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.
Superposition of Polytropes in the Inner Heliosheath
NASA Astrophysics Data System (ADS)
Livadiotis, G.
2016-03-01
This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ˜ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.
On sufficient statistics of least-squares superposition of vector sets.
Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M
2015-06-01
The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of superpositions (and sufficient statistics) of vector sets that are composed from its constituent vector sets under addition or deletion operation, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of the methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
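The underlying superposition step is the classic SVD-based least-squares fit; the sufficient statistics this article derives are built from the same means and 3x3 cross-covariance the snippet accumulates. A minimal sketch (row-vector convention, hypothetical function name):

    import numpy as np

    def superpose(P, Q):
        """Find R, t minimizing sum ||p_i R + t - q_i||^2 over rotations R."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)             # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(U @ Vt))    # guard against reflections
        R = (U * [1.0, 1.0, d]) @ Vt
        return R, cq - cp @ R

    rng = np.random.default_rng(2)
    Q = rng.standard_normal((10, 3))
    th = 0.7
    Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
    P = Q @ Rz + [1.0, -2.0, 0.5]             # rotated, shifted copy
    R, tvec = superpose(P, Q)
    assert np.allclose(P @ R + tvec, Q, atol=1e-8)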
2006-12-01
Convolutional encoder of rate 1/2 (From [10]). Table 3 shows the puncturing patterns used to derive the different code rates; X precedes Y in the order of transmission. Table 4: mandatory channel coding per modulation (From [10]). The scheme is a concatenation of a Reed-Solomon outer code and a rate-adjustable convolutional inner code; at the transmitter, data are first encoded with the outer code.
Synchronization Analysis and Simulation of a Standard IEEE 802.11G OFDM Signal
2004-03-01
Figure 26: Convolutional Encoder Parameters. Figure 27: Puncturing Parameters. As per Table 3, the required code rate is r = 3/4. Achieving the higher data rates required by the Standard 802.11b was accomplished by using packet binary convolutional coding (PBCC); essentially, higher data rates are achieved by using convolutional coding combined with BPSK or QPSK modulation, with the data first encoded at rate one-half.
Design and System Implications of a Family of Wideband HF Data Waveforms
2010-09-01
High code rates (i.e., 8/9, 9/10) will be used to attain the highest data rates for surface wave links, obtained by very heavy puncturing of the constraint-length 7 convolutional code that has been used for over two decades in 110A; in addition, repetition coding and puncturing are used. Cited references include "Communication Links", Edition 1, North Atlantic Treaty Organization, 2009, and [14] Yasuda, Y., Kashiki, K., Hirata, Y., "High-Rate Punctured Convolutional Codes".
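Puncturing itself is a small operation: the two output streams of a rate-1/2 mother code are thinned by a periodic pattern. The rate-3/4 pattern below is a commonly used one, shown purely to illustrate the mechanics.

    def puncture(x_bits, y_bits, px=(1, 1, 0), py=(1, 0, 1)):
        """Thin the X/Y output streams of a rate-1/2 code by periodic patterns."""
        out = []
        for i, (x, y) in enumerate(zip(x_bits, y_bits)):
            if px[i % len(px)]:
                out.append(x)
            if py[i % len(py)]:
                out.append(y)
        return out

    x = [0, 1, 1, 0, 1, 0]        # 6 info bits -> 12 coded bits at rate 1/2
    y = [1, 1, 0, 0, 0, 1]
    tx = puncture(x, y)           # 8 bits survive -> overall rate 6/8 = 3/4
    print(len(tx), tx)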
Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.
Huang, Yan; Wang, Wei; Wang, Liang
2018-04-01
Super-resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN, named bidirectional recurrent convolutional network, for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based; and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With the powerful temporal dependency modeling, our model can super-resolve videos with complex motions and achieves good performance.
Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1995-01-01
During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993, which has been included in this report in Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) a Unit-Memory Convolutional Encoder module (UMCEncd); (2) a hard-decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMCs, such as the UMC column distance function (UMCdc), UMC free distance function (UMCdfree), UMC row distance function (UMCdr), and UMC Transformation (UMCTrans). The study of UMCs was driven, in part, by the desire to investigate high-rate convolutional codes, which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMCs were found which are good candidates for inner codes. Besides the further development of the simulation, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a technical note dated March 17, 1994. This technical note has also been included in this final report.
The effects of kinesio taping on the color intensity of superficial skin hematomas: A pilot study.
Vercelli, Stefano; Colombo, Claudio; Tolosa, Francesca; Moriondo, Andrea; Bravini, Elisabetta; Ferriero, Giorgio; Francesco, Sartorio
2017-01-01
To analyze the effects of kinesio taping (KT), applied with three different strains that did or did not induce the formation of skin creases (called convolutions), on the color intensity of post-surgical superficial hematomas. Single-blind paired study. Rehabilitation clinic. A convenience sample of 13 inpatients with post-surgical superficial hematomas. The tape was applied for 24 consecutive hours. Three tails of KT were randomly applied with different degrees of strain: none (SN); light (SL); and full longitudinal stretch (SF). We expected to obtain correct formation of convolutions with SL, some convolutions with SN, and no convolutions with SF. The change in color intensity of hematomas was measured by means of polar coordinates CIE L*a*b* using a validated and standardized digital imaging system. Applying KT to hematomas did not significantly change the color intensity in the central area under the tape (p > 0.05). There was a significant treatment effect (p < 0.05) under the edges of the tape, independently of the formation of convolutions (p > 0.05). The changes observed along the edges of the tape could be related to the formation of a pressure gradient between the KT and the adjacent area, but were not dependent on the formation of skin convolutions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter
2017-11-01
Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
Towards dropout training for convolutional neural networks.
Wu, Haibing; Gu, Xiaodong
2015-11-01
Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage. Copyright © 2015 Elsevier Ltd. All rights reserved.
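The equivalence the paper states is easy to spell out for one pooling region: under dropout with retain probability p, the i-th largest activation wins the max with probability p*q^(i-1), so probabilistic weighted pooling at test time is the corresponding weighted sum. A sketch with illustrative numbers:

    import numpy as np

    rng = np.random.default_rng(3)
    a = np.array([0.9, 0.5, 0.3, 0.1])   # activations in one pooling region
    q = 0.5                               # dropout probability (retain p = 1 - q)
    p = 1 - q

    # training: drop units, then max-pool the survivors
    mask = rng.random(a.size) > q
    train_out = a[mask].max() if mask.any() else 0.0

    # test: probabilistic weighted pooling over sorted activations
    a_sorted = np.sort(a)[::-1]
    weights = p * q ** np.arange(a.size)  # P(i-th largest is the one selected)
    test_out = np.sum(weights * a_sorted)
    print(train_out, test_out)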
Frame prediction using recurrent convolutional encoder with residual learning
NASA Astrophysics Data System (ADS)
Yue, Boxuan; Liang, Jun
2018-05-01
Predicting the frames of a video is difficult but urgently needed in autonomous driving. Conventional methods can only predict abstract trends in the region of interest. The boom of deep learning makes frame prediction possible. In this paper, we propose a novel recurrent convolutional encoder and deconvolutional decoder structure to predict frames. We introduce residual learning in the convolutional encoder to address gradient issues. Residual learning transforms gradient back-propagation into an identity mapping, preserving the full gradient information and overcoming the gradient issues in recurrent neural networks (RNNs) and convolutional neural networks (CNNs). Besides, compared with the branches in CNNs and the gated structures in RNNs, residual learning reduces training time significantly. In the experiments, we train our networks on the UCF101 dataset and compare the predictions with some state-of-the-art methods. The results show that our networks can predict frames quickly and efficiently. Furthermore, our networks are applied to driving video to verify their practicability.
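The identity-mapping argument can be made concrete with a small residual block: the skip connection adds the input to the convolutional branch, so back-propagated gradients reach earlier layers through an unmodified identity path. This is a generic sketch with illustrative channel counts, not the authors' encoder-decoder.

```python
import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """Conv block with identity skip: out = relu(x + F(x)), so gradients flow
    back through the identity path unchanged (channel count is illustrative)."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

x = torch.randn(1, 32, 64, 64)
print(ResidualConvBlock()(x).shape)  # torch.Size([1, 32, 64, 64])
```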
A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system
NASA Astrophysics Data System (ADS)
Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan
2018-01-01
This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. The segmentation task is addressed as semantic segmentation: the FCN classifies pixels individually, achieving segmentation at the semantic level. Different from classical convolutional neural networks (CNNs), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data are collected to verify the method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.
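The arbitrary-input-size property comes from replacing fully connected layers with convolutions. A minimal sketch of that idea, with illustrative channel and class counts:

```python
import torch
import torch.nn as nn

# A classifier head built from a 1x1 convolution instead of a fully connected
# layer: the same weights then apply to inputs of any spatial size, producing
# a dense per-pixel class map (the core FCN idea; sizes are illustrative).
fcn_head = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 5, 1),  # 5 classes, scored at every pixel
)

for h, w in [(64, 64), (96, 128)]:       # arbitrary input sizes both work
    scores = fcn_head(torch.randn(1, 3, h, w))
    print(scores.shape)                  # (1, 5, h, w)
```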
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1992-01-01
Work performed during the reporting period is summarized. Robustly good trellis codes for use with sequential decoding were constructed; these codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate 1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per input bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
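For readers unfamiliar with the codes discussed here, a feedforward convolutional encoder can be sketched as a shift register tapped by generator polynomials. The rate-1/2, constraint-length-3 code with octal generators (7, 5) below is a textbook example, not one of the codes found in this work.

```python
import numpy as np

def conv_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 feedforward convolutional encoder; gens are the tap patterns
    of the generators (here the classic (7, 5) octal pair, memory 2)."""
    m = len(gens[0]) - 1                  # number of memory cells
    state = np.zeros(m, dtype=int)
    out = []
    for b in bits:
        window = np.concatenate(([b], state))  # current input plus register
        for g in gens:
            out.append(int(np.dot(g, window) % 2))
        state = window[:-1]               # shift-register update
    return out

print(conv_encode([1, 0, 1, 1]))  # [1,1, 1,0, 0,0, 0,1]: two output bits per input bit
```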
Efficient airport detection using region-based fully convolutional neural networks
NASA Astrophysics Data System (ADS)
Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao
2018-04-01
This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we shared the convolutional layers between the region proposal procedure and the airport detection procedure and used graphics processing units (GPUs) to speed up training and testing. Owing to the lack of labeled data, we transferred the convolutional layers of the ZF net pretrained on ImageNet to initialize the shared convolutional layers, then retrained the model using the alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes in near real time with high accuracy, which is much better than traditional methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oyewale, S; Pokharel, S; Rana, S
Purpose: To compare the percentage depth dose (PDD) computational accuracy of the Adaptive Convolution (AC) and Collapsed Cone Convolution (CCC) algorithms in the presence of air gaps. Methods: A 30×30×30 cm³ solid water phantom with two 5 cm air gaps was scanned with a CT simulator unit and exported into the Philips Pinnacle™ treatment planning system. PDDs were computed using the AC and CCC algorithms. A photon energy of 6 MV was used with field sizes of 3×3, 5×5, 10×10, 15×15, and 20×20 cm². Ionization chamber readings were taken at different depths in water for all the field sizes. The percentage differences in the PDDs were computed with normalization to the depth of maximum dose (dmax). The calculated PDDs were then compared with measured PDDs. Results: In the first buildup region, both algorithms overpredicted the dose for all field sizes and underpredicted it in all subsequent buildup regions. After dmax in the three water media, AC underpredicted the dose for field sizes 3×3 and 5×5 cm² and overpredicted it for larger field sizes, whereas CCC underpredicted it for all field sizes. Upon traversing the first air gap, AC showed maximum differences of −3.9%, −1.4%, 2.0%, 2.5%, and 2.9%, and CCC showed maximum differences of −3.9%, −3.0%, −3.1%, −2.7%, and −1.8% for field sizes 3×3, 5×5, 10×10, 15×15, and 20×20 cm², respectively. Conclusion: The presence of air gaps causes a significant difference in the PDDs computed by both the AC and CCC algorithms in secondary buildup regions. AC computed larger values for the PDDs except at smaller field sizes. For CCC, the size of the errors in prediction of the PDDs has an inverse relationship with field size. These effects should be considered in treatment planning where significant air gaps are encountered.
A clinical study of lung cancer dose calculation accuracy with Monte Carlo simulation.
Zhao, Yanqun; Qi, Guohai; Yin, Gang; Wang, Xianliang; Wang, Pei; Li, Jian; Xiao, Mingyong; Li, Jie; Kang, Shengwei; Liao, Xiongfei
2014-12-16
The accuracy of dose calculation is crucial to the quality of treatment planning and, consequently, to the dose delivered to patients undergoing radiation therapy. Current general calculation algorithms such as Pencil Beam Convolution (PBC) and Collapsed Cone Convolution (CCC) have shortcomings in regard to severe inhomogeneities, particularly in those regions where charged particle equilibrium does not hold. The aim of this study was to evaluate the accuracy of the PBC and CCC algorithms in lung cancer radiotherapy using Monte Carlo (MC) technology. Four treatment plans were designed using the Oncentra Masterplan TPS for each patient: two intensity-modulated radiation therapy (IMRT) plans developed using the PBC and CCC algorithms, and two three-dimensional conformal therapy (3DCRT) plans developed using the PBC and CCC algorithms. The DICOM-RT files of the treatment plans were exported to the Monte Carlo system for recalculation. The dose distributions of the GTV, PTV and ipsilateral lung calculated by the TPS and MC were compared. For the 3DCRT and IMRT plans, the mean dose differences for the GTV between CCC and MC increased as the GTV volume decreased. For IMRT, the mean dose differences were higher than for 3DCRT. The CCC algorithm overestimated the GTV mean dose by approximately 3% for IMRT. For 3DCRT plans, when the volume of the GTV was greater than 100 cm(3), the mean doses calculated by CCC and MC showed almost no difference. PBC showed large deviations from MC. For the dose to the ipsilateral lung, the CCC algorithm overestimated the dose to the entire lung, and the PBC algorithm overestimated V20 but underestimated V5; the difference in V10 was not statistically significant. PBC substantially overestimates the dose to the tumour, whereas CCC is close to the MC simulation. It is recommended that treatment plans for lung cancer be developed using an advanced dose calculation algorithm other than PBC. MC can accurately calculate the dose distribution in lung cancer and provides a notably effective tool for benchmarking the performance of other dose calculation algorithms within patients.
Normal Perceptual Sensitivity Arising From Weakly Reflective Cone Photoreceptors
Bruce, Kady S.; Harmening, Wolf M.; Langston, Bradley R.; Tuten, William S.; Roorda, Austin; Sincich, Lawrence C.
2015-01-01
Purpose To determine the light sensitivity of poorly reflective cones observed in retinas of normal subjects, and to establish a relationship between cone reflectivity and perceptual threshold. Methods Five subjects (four male, one female) with normal vision were imaged longitudinally (7–26 imaging sessions, representing 82–896 days) using adaptive optics scanning laser ophthalmoscopy (AOSLO) to monitor cone reflectance. Ten cones with unusually low reflectivity, as well as 10 normally reflective cones serving as controls, were targeted for perceptual testing. Cone-sized stimuli were delivered to the targeted cones and luminance increment thresholds were quantified. Thresholds were measured three to five times per session for each cone in the 10 pairs, all located 2.2 to 3.3° from the center of gaze. Results Compared with other cones in the same retinal area, three of 10 monitored dark cones were persistently poorly reflective, while seven occasionally manifested normal reflectance. Tested psychophysically, all 10 dark cones had thresholds comparable with those from normally reflecting cones measured concurrently (P = 0.49). The variation observed in dark cone thresholds also matched the wide variation seen in a large population (n = 56 cone pairs, six subjects) of normal cones; in the latter, no correlation was found between cone reflectivity and threshold (P = 0.0502). Conclusions Low cone reflectance cannot be used as a reliable indicator of cone sensitivity to light in normal retinas. To improve assessment of early retinal pathology, other diagnostic criteria should be employed along with imaging and cone-based microperimetry. PMID:26193919
Ma, Hongwei; Thapa, Arjun; Morris, Lynsie; Redmond, T. Michael; Baehr, Wolfgang; Ding, Xi-Qin
2014-01-01
Cone phototransduction and survival of cones in the human macula is essential for color vision and for visual acuity. Progressive cone degeneration in age-related macular degeneration, Stargardt disease, and recessive cone dystrophies is a major cause of blindness. Thyroid hormone (TH) signaling, which regulates cell proliferation, differentiation, and apoptosis, plays a central role in cone opsin expression and patterning in the retina. Here, we investigated whether TH signaling affects cone viability in inherited retinal degeneration mouse models. Retinol isomerase RPE65-deficient mice [a model of Leber congenital amaurosis (LCA) with rapid cone loss] and cone photoreceptor function loss type 1 mice (severe recessive achromatopsia) were used to determine whether suppressing TH signaling with antithyroid treatment reduces cone death. Further, cone cyclic nucleotide-gated channel B subunit-deficient mice (moderate achromatopsia) and guanylate cyclase 2e-deficient mice (LCA with slower cone loss) were used to determine whether triiodothyronine (T3) treatment (stimulating TH signaling) causes deterioration of cones. We found that cone density in retinol isomerase RPE65-deficient and cone photoreceptor function loss type 1 mice increased about sixfold following antithyroid treatment. Cone density in cone cyclic nucleotide-gated channel B subunit-deficient and guanylate cyclase 2e-deficient mice decreased about 40% following T3 treatment. The effect of TH signaling on cone viability appears to be independent of its regulation on cone opsin expression. This work demonstrates that suppressing TH signaling in retina dystrophy mouse models is protective of cones, providing insights into cone preservation and therapeutic interventions. PMID:24550448
NASA Astrophysics Data System (ADS)
Zheng, Guangdi; Pan, Mingbo; Liu, Wei; Wu, Xuetong
2018-03-01
Target identification on the sea battlefield is a prerequisite for judging enemy forces in modern naval battle. In this paper, a collaborative identification method based on a convolutional neural network is proposed to identify typical sea-battlefield targets. Different from traditional single-input/single-output identification methods, the proposed method constructs a multi-input/single-output co-identification architecture based on an optimized convolutional neural network and weighted D-S evidence theory. The simulation results show that …
A convolution model for computing the far-field directivity of a parametric loudspeaker array.
Shi, Chuang; Kajikawa, Yoshinobu
2015-02-01
This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), with which a steerable parametric loudspeaker can be implemented by applying phased array techniques. The convolution of the product directivity with Westervelt's directivity is suggested, substituting for the past practice of using the product directivity only. The computed directivity of a PLA using the proposed convolution model agrees significantly better with measured directivity, at a negligible computational cost.
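The proposed model amounts to an angular convolution of two directivity patterns. A minimal sketch, using Gaussian-shaped stand-ins for the product directivity and the Westervelt directivity rather than the paper's measured patterns:

```python
import numpy as np

theta = np.linspace(-90, 90, 721)                 # angle grid, degrees

# Illustrative stand-ins (not the paper's patterns): a narrow product
# directivity from the phased ultrasonic array and a broader Westervelt
# directivity from the nonlinear demodulation process.
product_dir = np.exp(-(theta / 5.0) ** 2)
westervelt_dir = 1.0 / np.sqrt(1.0 + (theta / 20.0) ** 2)

# Proposed model: far-field audio directivity as the angular convolution of
# the two patterns, rather than the product directivity alone.
audio_dir = np.convolve(product_dir, westervelt_dir, mode="same")
audio_dir /= audio_dir.max()                      # normalize to 0 dB on-axis

print(theta[np.argmax(audio_dir)])                # beam remains on-axis: 0.0
```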
NASA Astrophysics Data System (ADS)
Masunun, P.; Tangboonduangjit, P.; Dumrongkijudom, N.
2016-03-01
The purpose of this study is to compare, between two algorithms, the build-up region doses on the surface of a breast Rando phantom covered with bolus, the doses inside the breast Rando phantom, and the doses in the lung, a heterogeneous region. The AAA in the Eclipse TPS and the collapsed cone convolution (CCC) algorithm in the Pinnacle treatment planning system were used to plan a tangential-field technique with a 6 MV photon beam and a 200 cGy total dose in the breast Rando phantom covered with bolus (5 mm and 10 mm). TLDs were calibrated with Cobalt-60 and used to measure the doses during irradiation. The treatment planning results show that the doses in the build-up region and in the breast phantom agreed closely between the two algorithms, with differences of less than 2%. However, AAA overestimated the dose in the lung (L2) by 13.78% and 6.06% at 5 mm and 10 mm bolus thickness, respectively, compared with the CCC algorithm. The TLD measurements show an underestimate in the build-up region and in the breast phantom, but the doses in the lung (L2) were overestimated compared with the doses in the two plans at both bolus thicknesses.
Using Musical Intervals to Demonstrate Superposition of Waves and Fourier Analysis
ERIC Educational Resources Information Center
LoPresto, Michael C.
2013-01-01
What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.
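The demonstration is easy to reproduce numerically: superpose two tones at the frequency ratio of a musical interval and inspect the Fourier spectrum. A short sketch, with illustrative fork frequencies forming a perfect fifth (3:2 ratio):

```python
import numpy as np

fs = 44100                           # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)

# A musical fifth: two tuning-fork tones at a 3:2 frequency ratio.
f1, f2 = 440.0, 660.0                # illustrative A4 and E5 forks
wave = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)  # superposition

# Fourier analysis recovers the two component frequencies as spectral peaks.
spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(wave.size, 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))                 # [440.0, 660.0]
```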
Aerodynamic Analysis of the Truss-Braced Wing Aircraft Using Vortex-Lattice Superposition Approach
NASA Technical Reports Server (NTRS)
Ting, Eric Bi-Wen; Reynolds, Kevin Wayne; Nguyen, Nhan T.; Totah, Joseph J.
2014-01-01
The SUGAR Truss-Braced Wing (TBW) aircraft concept is a Boeing-developed N+3 aircraft configuration funded by the NASA ARMD Fixed Wing Project. This future-generation transport aircraft concept is designed to be aerodynamically efficient by employing a high-aspect-ratio wing design. The aspect ratio of the TBW is on the order of 14, which is significantly greater than those of current-generation transport aircraft. This paper presents a recent aerodynamic analysis of the TBW aircraft using the conceptual vortex-lattice aerodynamic tool VORLAX and an aerodynamic superposition approach. Based on the underlying linear potential flow theory, the principle of aerodynamic superposition is leveraged to deal with the complex aerodynamic configuration of the TBW. By decomposing the full configuration of the TBW into individual aerodynamic lifting components, the total aerodynamic characteristics of the full configuration can be estimated from the contributions of the individual components. The aerodynamic superposition approach shows excellent agreement with CFD results computed by FUN3D, USM3D, and STAR-CCM+.
Superposition-Based Analysis of First-Order Probabilistic Timed Automata
NASA Astrophysics Data System (ADS)
Fietzke, Arnaud; Hermanns, Holger; Weidenbach, Christoph
This paper discusses the analysis of first-order probabilistic timed automata (FPTA) by a combination of hierarchic first-order superposition-based theorem proving and probabilistic model checking. We develop the overall semantics of FPTAs and prove soundness and completeness of our method for reachability properties. Basically, we decompose FPTAs into their time plus first-order logic aspects on the one hand, and their probabilistic aspects on the other hand. Then we exploit the time plus first-order behavior by hierarchic superposition over linear arithmetic. The result of this analysis is the basis for constructing a probabilistic timed automaton that is reachability-equivalent to the original FPTA, to which probabilistic model checking is finally applied. The hierarchic superposition calculus required for the analysis is sound and complete on the first-order formulas generated from FPTAs. It even works well in practice. We illustrate the potential behind it with a real-life DHCP protocol example, which we analyze by means of tool chain support.
Audo, Denis; Haug, Joachim T; Haug, Carolin; Charbonnier, Sylvain; Schweigert, Günter; Müller, Carsten H G; Harzsch, Steffen
2016-01-01
Modern representatives of Polychelida (Polychelidae) are considered to be entirely blind and have largely reduced eyes, possibly as an adaptation to deep-sea environments. Fossil species of Polychelida, however, appear to have well-developed compound eyes preserved as anterior bulges with distinct sculpturation. We documented the shapes and sizes of eyes and ommatidia based upon exceptionally preserved fossil polychelidans from Binton (Hettangian, United Kingdom), Osteno (Sinemurian, Italy), the Posidonia Shale (Toarcian, Germany), La Voulte-sur-Rhône (Callovian, France), and Solnhofen-type plattenkalks (Kimmeridgian-Tithonian, Germany). For purposes of comparison, the sizes of the eyes of several other polychelidans without preserved ommatidia were documented. Sizes of ommatidia and eyes were statistically compared against carapace length, taxonomic group, and outcrop. Nine species possess eyes with square facets; Rosenfeldia oppeli (Woodward, 1866), however, displays hexagonal facets. The sizes of eyes and ommatidia are a function of carapace length. No significant differences were discerned between polychelidans from different outcrops; Eryonidae, however, have significantly smaller eyes than other groups. Fossil eyes bearing square facets are similar to the reflective superposition eyes found in many extant decapods. As such, they are the earliest example of superposition eyes. As reflective superposition is considered plesiomorphic for Reptantia, this optic type was probably retained in Polychelida. The two smallest specimens, a Palaeopentacheles roettenbacheri (Münster, 1839) and a Hellerocaris falloti (Van Straelen, 1923), are interpreted as juveniles. Both possess square-shaped facets, a typical post-larval feature. The eye morphology of these small specimens, which are far smaller than many extant eryoneicus larvae, suggests that Jurassic polychelidans did not develop via giant eryoneicus larvae. In contrast, another species we examined, Rosenfeldia oppeli (Woodward, 1866), did not possess square-shaped facets, but rather hexagonal ones, which suggests that this species did not possess reflective superposition eyes. The hexagonal facets may indicate either another type of superposition eye (refractive or parabolic superposition), or an apposition eye. As decapod larvae possess apposition eyes with hexagonal facets, it is most parsimonious to consider the eyes of R. oppeli as apposition eyes evolved through paedomorphic heterochrony. Polychelidans probably originally had reflective superposition eyes; R. oppeli, however, probably gained apposition eyes through paedomorphosis.
NASA Astrophysics Data System (ADS)
Daoud, M.; Ahl Laamara, R.
2012-07-01
We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. A special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl-Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger-Horne-Zeilinger states.
Programmable superpositions of Ising configurations
NASA Astrophysics Data System (ADS)
Sieberer, Lukas M.; Lechner, Wolfgang
2018-05-01
We present a framework to prepare superpositions of bit strings, i.e., many-body spin configurations, with deterministic programmable probabilities. The spin configurations are encoded in the degenerate ground states of the lattice-gauge representation of an all-to-all connected Ising spin glass. The ground-state manifold is invariant under variations of the gauge degrees of freedom, which take the form of four-body parity constraints. Our framework makes use of these degrees of freedom by individually tuning them to dynamically prepare programmable superpositions. The dynamics combines an adiabatic protocol with controlled diabatic transitions. We derive an effective model that allows one to determine the control parameters efficiently even for large system sizes.
Application of the superposition principle to solar-cell analysis
NASA Technical Reports Server (NTRS)
Lindholm, F. A.; Fossum, J. G.; Burgess, E. L.
1979-01-01
The superposition principle of differential-equation theory - which applies if and only if the relevant boundary-value problems are linear - is used to derive the widely used shifting approximation: the current-voltage characteristic of an illuminated solar cell is the dark current-voltage characteristic shifted by the short-circuit photocurrent. Analytical methods are presented to treat cases where shifting is not strictly valid. Well-defined conditions necessary for superposition to apply are established. For high injection in the base region, the method of analysis accurately yields the dependence of the open-circuit voltage on the short-circuit current (or the illumination level).
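The shifting approximation itself is one line of arithmetic. A minimal sketch using an ideal-diode dark characteristic with illustrative parameter values (not taken from the paper):

```python
import numpy as np

# Shifting approximation from superposition: the illuminated I-V curve is the
# dark I-V curve shifted downward by the short-circuit photocurrent I_sc.
q_over_kT = 1.0 / 0.02585            # 1/V at room temperature
I0, Isc = 1e-12, 0.035               # illustrative saturation and photo currents (A)

def dark_current(v):
    return I0 * (np.exp(q_over_kT * v) - 1.0)

def illuminated_current(v):
    return dark_current(v) - Isc     # valid only where superposition holds

# Open-circuit voltage: where the illuminated current crosses zero.
v = np.linspace(0, 0.8, 8001)
voc = v[np.argmin(np.abs(illuminated_current(v)))]
print(round(voc, 3))                 # ~ (kT/q) * ln(Isc/I0 + 1) ~ 0.628 V
```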
Kwon, Yea-Hoon; Shin, Sae-Byuk; Kim, Shin-Dug
2018-04-30
The purpose of this study is to improve human emotion classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method to classify emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through a CNN; therefore, we propose a suitable CNN model for feature extraction by tuning the hyperparameters of the convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform that considers time and frequency simultaneously. We use the Database for Emotion Analysis Using Physiological Signals (DEAP) open dataset to verify the proposed process, achieving 73.4% accuracy and showing significant performance improvement over the current best-practice models.
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M.sup.3N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
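The key step is that, in the Fourier domain, the main linear system of the ADMM x-update decouples into one small (rank-one plus scaled identity) system per frequency, which the Sherman-Morrison formula solves in closed form. Below is a sketch of that solve under illustrative sizes; the variable names and dimensions are assumptions, not taken from the patent text.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, rho = 256, 8, 1.0                  # signal length, dictionary filters, ADMM penalty

D = rng.standard_normal((M, N))          # zero-padded dictionary filters
b = rng.standard_normal((M, N))          # right-hand side of the x-update

Dh = np.fft.fft(D, axis=1)               # per-frequency dictionary coefficients
bh = np.fft.fft(b, axis=1)

# Solve (D^H D + rho I) x = b independently at each frequency: the per-frequency
# system is rank-one plus a scaled identity, so Sherman-Morrison applies.
aHb = np.sum(Dh * bh, axis=0)            # a^H b at each frequency
aHa = np.sum(np.abs(Dh) ** 2, axis=0)    # a^H a at each frequency
xh = (bh - np.conj(Dh) * (aHb / (rho + aHa))) / rho

x = np.real(np.fft.ifft(xh, axis=1))     # back to the spatial domain
print(x.shape)                           # (M, N), cost O(MN log N) overall
```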
Multithreaded implicitly dealiased convolutions
NASA Astrophysics Data System (ADS)
Roberts, Malcolm; Bowman, John C.
2018-03-01
Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.
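For contrast, the conventional zero-padding approach that implicit dealiasing improves upon looks like the following minimal 1-D sketch: both inputs are explicitly padded so the circular FFT convolution has no wraparound.

```python
import numpy as np

def linear_convolution_fft(f, g):
    """Dealiased (linear) convolution via explicit zero-padding: pad both
    inputs to length >= 2N-1 so the circular FFT convolution has no aliasing.
    Implicit dealiasing computes the same result without storing padded data."""
    n = len(f) + len(g) - 1
    F, G = np.fft.rfft(f, n), np.fft.rfft(g, n)
    return np.fft.irfft(F * G, n)

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -1.0])
print(linear_convolution_fft(f, g))   # [ 0.5  0.  -0.5 -3. ]
print(np.convolve(f, g))              # matches the direct convolution
```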
Detecting atrial fibrillation by deep convolutional neural networks.
Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui
2018-02-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we proposed a novel method with high reliability and accuracy for AF detection via deep learning. The short-time Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to the STFT output and the SWT output were developed. Our new method does not require detection of P or R peaks, nor feature design for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performance on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% were achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
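The 2-D input generation step can be sketched directly with SciPy's STFT. The sampling rate, window length, and random stand-in signal below are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal import stft

fs = 300                                       # illustrative ECG sampling rate (Hz)
ecg = np.random.default_rng(0).standard_normal(5 * fs)  # stand-in 5-s ECG segment

# Time-frequency image for the CNN: magnitude of the STFT of the segment.
f, t, Z = stft(ecg, fs=fs, nperseg=64, noverlap=32)
image = np.abs(Z)                              # 2-D matrix input, shape (freq, time)
print(image.shape)
```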
Statistical properties of superimposed stationary spike trains.
Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan
2012-06-01
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non- Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities-like the count variability, inter-spike interval (ISI) variability and ISI correlations-and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive for the statistical effects induced by neuronal refractoriness.
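A minimal generator for superpositions of PPDs, in the spirit of (but far simpler than) the efficient algorithms the authors present; the rate, dead time, and train count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ppd_spike_train(rate, dead_time, duration):
    """Poisson process with dead-time (PPD): each inter-spike interval is the
    dead time plus an exponential variate."""
    n_draws = int(2 * rate * duration) + 10          # generous over-draw
    isis = dead_time + rng.exponential(1.0 / rate, size=n_draws)
    spikes = np.cumsum(isis)
    return spikes[spikes < duration]

def superposition(n_trains, rate, dead_time, duration):
    """Pooled spike train of n independent PPDs (not Poisson in general)."""
    return np.sort(np.concatenate(
        [ppd_spike_train(rate, dead_time, duration) for _ in range(n_trains)]))

pooled = superposition(n_trains=10, rate=20.0, dead_time=0.002, duration=10.0)
isi = np.diff(pooled)
print(len(pooled), isi.var() / isi.mean() ** 2)  # compare with CV^2 = 1 for Poisson
```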
The Evolution and Development of Neural Superposition
Agi, Egemen; Langen, Marion; Altschuler, Steven J.; Wu, Lani F.; Zimmermann, Timo
2014-01-01
Visual systems have a rich history as model systems for the discovery and understanding of basic principles underlying neuronal connectivity. The compound eyes of insects consist of up to thousands of small unit eyes that are connected by photoreceptor axons to set up a visual map in the brain. The photoreceptor axon terminals thereby represent neighboring points seen in the environment in neighboring synaptic units in the brain. Neural superposition is a special case of such a wiring principle, where photoreceptors from different unit eyes that receive the same input converge upon the same synaptic units in the brain. This wiring principle is remarkable, because each photoreceptor in a single unit eye receives different input and each individual axon, among thousands others in the brain, must be sorted together with those few axons that have the same input. Key aspects of neural superposition have been described as early as 1907. Since then neuroscientists, evolutionary and developmental biologists have been fascinated by how such a complicated wiring principle could evolve, how it is genetically encoded, and how it is developmentally realized. In this review article, we will discuss current ideas about the evolutionary origin and developmental program of neural superposition. Our goal is to identify in what way the special case of neural superposition can help us answer more general questions about the evolution and development of genetically “hard-wired” synaptic connectivity in the brain. PMID:24912630
Impact of chemical plant start-up emissions on ambient ozone concentration
NASA Astrophysics Data System (ADS)
Ge, Sijie; Wang, Sujing; Xu, Qiang; Ho, Thomas
2017-09-01
Flare emissions, especially start-up flare emissions, during chemical plant operations generate large amounts of ozone precursors that may cause highly localized and transient ground-level ozone increments. Such an adverse ozone impact could be aggravated by the synergies of multiple plant start-ups in an industrial zone. In this paper, a systematic study on ozone increment superposition due to chemical plant start-up emissions has been performed. It employs dynamic flaring profiles of two olefin plants' start-ups to investigate the superposition of the regional 1-hr ozone increment. It also summarizes the superposition trend by manipulating the starting time (00:00-10:00) of plant start-up operations and the plant distance (4-32 km). The study indicates that the ozone increment induced by simultaneous start-up emissions from multiple chemical plants generally does not follow the linear superposition of the ozone increments induced by individual plant start-ups. Meanwhile, the trend of such nonlinear superposition with respect to the temporal (starting time and operating hours of plant start-ups) and spatial (plant distance) factors is also disclosed. This paper couples dynamic simulations of chemical plant start-up operations with air-quality modeling and statistical methods to examine the regional ozone impact. It could be helpful for technical decision support for cost-effective air-quality and industrial flare emission controls.
Spectral characteristics of light sources for S-cone stimulation.
Schlegelmilch, F; Nolte, R; Schellhorn, K; Husar, P; Henning, G; Tornow, R P
2002-11-01
Electrophysiological investigations of the short-wavelength sensitive pathway of the human eye require the use of a suitable light source as an S-cone stimulator. Different light sources and their spectral distribution properties were investigated and compared with the ideal S-cone stimulator. First, the theoretical background of the calculation of relative cone energy absorption from the spectral distribution function of the light source is summarized. From the results of the calculation, the photometric properties of the ideal S-cone stimulator are derived. The calculation procedure was applied to virtual light sources (computer-generated spectral distribution functions with different center wavelengths and spectrum widths) and to real light sources (blue and green light emitting diodes, the blue phosphor of a CRT monitor, a multimedia projector, an LCD monitor and a notebook display). The calculated relative cone absorptions are compared to the conditions of an ideal S-cone stimulator. Monochromatic light sources with wavelengths of less than 456 nm are close to the conditions of an ideal S-cone stimulator. Spectrum widths up to 21 nm do not affect the S-cone activation significantly (S-cone activation change < 0.2%). Blue light emitting diodes with a peak wavelength at 448 nm and a spectrum bandwidth of 25 nm are very useful for S-cone stimulation (S-cone activation approximately 95%). A suitable display for S-cone stimulation is the Trinitron computer monitor (S-cone activation approximately 87%). The multimedia projector has an S-cone activation of up to 91%, but its spectral distribution properties depend on the selected intensity. LCD monitor and notebook displays have a lower S-cone activation (< or = 74%). Carefully selecting the blue light source for S-cone stimulation can reduce the unwanted L- and M-cone activation down to 4% for M-cones and 1.5% for L-cones.
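The relative cone absorption referred to above is an overlap integral of the source spectrum with each cone's spectral sensitivity. A toy sketch with Gaussian stand-ins for the cone fundamentals and the LED spectrum; real analyses would use tabulated fundamentals such as Stockman-Sharpe.

```python
import numpy as np

wl = np.arange(380, 701, 1.0)            # wavelength grid (nm), 1 nm steps

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Crude Gaussian stand-ins for S/M/L cone spectral sensitivities.
S, M, L = gauss(440, 25), gauss(540, 35), gauss(565, 40)

# Light source: a blue LED approximated as a Gaussian at 448 nm, FWHM 25 nm.
source = gauss(448, 25 / 2.355)          # FWHM -> sigma

# Relative cone absorption: overlap of source spectrum and sensitivity,
# normalized across the three cone classes (unit grid step, so a sum suffices).
absorp = np.array([(source * c).sum() for c in (S, M, L)])
print(absorp / absorp.sum())             # S-cone share dominates
```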
Off-resonance artifacts correction with convolution in k-space (ORACLE).
Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne
2012-06-01
Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.
Urtnasan, Erdenebayar; Park, Jong-Uk; Joo, Eun-Yeon; Lee, Kyoung-Joung
2018-04-23
In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (data from 63 patients with 34,281 events) and testing (data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F1-score of 0.99 were attained on the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
Convolutional virtual electric field for image segmentation using active contours.
Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden
2014-01-01
Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computational load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only the desirable properties of these models, such as an enlarged capture range, U-shape concavity convergence, subjective contour convergence and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression with simultaneous weak-edge preservation. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images.
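The real-time property comes from evaluating the external force field as an FFT-based convolution of the edge map with a vector-valued kernel whose distance exponent can be modified. Below is a sketch under that reading; the kernel form and exponent value are illustrative assumptions, not lifted from the paper.

```python
import numpy as np

def convef_force(edge_map, h=2.5, eps=1e-6):
    """External force field as the convolution of an edge map with a
    vector-valued kernel (x, y)/r^h, evaluated via FFT; h is the
    modified-distance exponent (illustrative value)."""
    ny, nx = edge_map.shape
    y, x = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2].astype(float)
    r = np.sqrt(x**2 + y**2) + eps
    kx, ky = x / r**h, y / r**h              # kernel components
    E = np.fft.fft2(edge_map)
    fx = np.real(np.fft.ifft2(E * np.fft.fft2(np.fft.ifftshift(kx))))
    fy = np.real(np.fft.ifft2(E * np.fft.fft2(np.fft.ifftshift(ky))))
    return fx, fy

edges = np.zeros((128, 128)); edges[40:90, 40] = 1.0   # toy edge map
fx, fy = convef_force(edges)
print(fx.shape, fy.shape)
```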
NASA Technical Reports Server (NTRS)
Doland, G. D.
1970-01-01
Convolutional coding, used to upgrade digital data transmission under adverse signal conditions, has been improved by a method which ensures data transitions, permitting bit synchronizer operation at lower signal levels. The method also increases decoding ability by removing an ambiguous condition.
Feed-back modulation of cone synapses by L-horizontal cells of turtle retina.
Gerschenfeld, H M; Piccolino, M; Neyton, J
1980-12-01
Light stimulation of the periphery of the receptive field of turtle cones can evoke both transient and sustained increases of the cone Ca2+ conductance, which may become regenerative. Such increase in the cone Ca2+ conductance evoked by peripheral illumination results from the activation of a polysynaptic pathway involving a feed-back connexion from the L-horizontal cells (L-HC) to the cones. Thus the hyperpolarization of a L-HC by inward current injection can evoke a Ca2+ conductance increase in neighbouring cones. The cone Ca2+ channels thus activated are likely located at its synaptic endings and probably intervene in the cone transmitter release. Therefore the feed-back connexion between L-HC and cones by modifying the Ca2+ conductance of cones could actually modulate the transmitter release from cone synapses. Such feed-back modulation of cone synapses plays a role in the organization of the colour-coded responses of the chromaticity type-horizontal cells and probably of other second order neurones, post-synaptic to the cones. The mechanisms operating the feed-back connexion from L-HC to cones are discussed.
Jiménez-López, Manuel; Alburquerque-Béjar, Juan J.; Nieto-López, Leticia; García-Ayuso, Diego; Villegas-Pérez, Maria P.; Vidal-Sanz, Manuel; Agudo-Barriuso, Marta
2014-01-01
Our purpose here is to analyze and compare the population and topography of cone photoreceptors in two mouse strains using automated routines, and to design a method of retinal sampling for their accurate manual quantification. In whole-mounted retinas from pigmented C57/BL6 and albino Swiss mice, the longwave-sensitive (L) and the shortwave-sensitive (S) opsins were immunodetected to analyze the population of each cone type. In another group of retinas both opsins were detected with the same fluorophore to quantify all cones. In a third set of retinas, L-opsin and Brn3a were immunodetected to determine whether L-opsin+cones and retinal ganglion cells (RGCs) have a parallel distribution. Cones and RGCs were automatically quantified and their topography illustrated with isodensity maps. Our results show that pigmented mice have a significantly higher number of total cones (all-cones) and of L-opsin+cones than albinos which, in turn, have a higher population of S-opsin+cones. In pigmented animals 40% of cones are dual (cones that express both opsins), 34% genuine-L (cones that only express the L-opsin), and 26% genuine-S (cones that only express the S-opsin). In albinos, 23% of cones are genuine-S and the proportion of dual cones increases to 76% at the expense of genuine-L cones. In both strains, L-opsin+cones are denser in the central than peripheral retina, and all-cones density increases dorso-ventrally. In pigmented animals S-opsin+cones are scarce in the dorsal retina and very numerous in the ventral retina, being densest in its nasal aspect. In albinos, S-opsin+cones are abundant in the dorsal retina, although their highest densities are also ventral. Based on the densities of each cone population, we propose a sampling method to manually quantify and infer their total population. In conclusion, these data provide the basis to study cone degeneration and its prevention in pathologic conditions. PMID:25029531
Design of convolutional tornado code
NASA Astrophysics Data System (ADS)
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by eliminating the multi-level structure. The simulation results show that the cTN code can provide better packet loss protection with lower computational complexity than the tTN code.
1992-12-01
The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Appendix excerpt (MATLAB):

    v = cncd(2,1,6,G64,u,zeros(1,12)); % Convolutional encoding
    mm = bm(2,v);                      % Binary to M-ary conversion
    clear v u;
    mm = inter(50,200,mm);             % Interleaving (50…
    save result err

    B. CNCD.M (CONVOLUTIONAL ENCODER FUNCTION)
    function [v,vr] = cncd(n,k,m,Gr,u,r)
    % CONVOLUTIONAL ENCODER
    % Paul H. Moose
    % Naval …
Time history solution program, L225 (TEV126). Volume 1: Engineering and usage
NASA Technical Reports Server (NTRS)
Kroll, R. I.; Tornallyay, A.; Clemmons, R. E.
1979-01-01
Volume 1 of a two-volume document is presented. The usage of the convolution program L225 (TEV126) is described. The program calculates the time response of a linear system by convolving the impulse response function with the time-dependent excitation function. The convolution is performed as a multiplication in the frequency domain. Fast Fourier transform techniques are used to transform the product back into the time domain to obtain response time histories. A brief description of the analysis used is presented.
Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m_0 binary memory cells and k_0 (> m_0) inputs, a state diagram of 2^(k_0) states was required for the transfer function bound. A reduced state diagram of (2^(m_0) + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.
Simulation of ICD-9 to ICD-10-CM Transition for Family Medicine: Simple or Convoluted?
Grief, Samuel N; Patel, Jesal; Kochendorfer, Karl M; Green, Lee A; Lussier, Yves A; Li, Jianrong; Burton, Michael; Boyd, Andrew D
2016-01-01
The objective of this study was to examine the impact of the transition from International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), to Interactional Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), on family medicine and to identify areas where additional training might be required. Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple transition is defined as 1 ICD-9-CM code mapping to 1 ICD-10-CM code, or 1 ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is where the transitions between coding systems is nonreciprocal and complex, with multiple codes for which definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Of the 1635 diagnosis codes used by family medicine physicians, 70% of the codes were categorized as simple, 27% of codes were convoluted, and 3% had no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and for which additional resources need to be invested to ensure a successful transition to ICD-10-CM. © Copyright 2016 by the American Board of Family Medicine.
Marcos, S; Tornow, R P; Elsner, A E; Navarro, R
1997-07-01
Foveal cone spacing was measured in vivo using an objective technique: ocular speckle interferometry. Cone packing density was computed from cone spacing data. Foveal cone photopigment density difference was measured in the same subjects using retinal densitometry with a scanning laser ophthalmoscope. Both the cone packing density and cone photopigment density difference decreased sharply with increasing retinal eccentricity. From the comparison of both sets of measurements, the computed amounts of photopigment per cone increased slightly with increasing retinal eccentricity. Consistent with previous results, decreases in cone outer segment length are over-compensated by an increase in the outer segment area, at least in retinal eccentricities up to 1 deg.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
Brief summaries of research in the following areas are presented: (1) construction of optimum geometrically uniform trellis codes; (2) a statistical approach to constructing convolutional code generators; and (3) calculating the exact performance of a convolutional code.
a Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data
NASA Astrophysics Data System (ADS)
Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.
2018-04-01
Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing. Joint extraction of this information from hyperspectral images is one of the most important approaches to hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed, which extracts spectral-spatial information of hyperspectral images effectively. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Though CNNs have shown robustness to distortion, they cannot extract features of different scales through the traditional pooling layer, which has only one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
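The role of SPP is to produce a fixed-length feature vector from inputs of varying spatial size by pooling over several grid resolutions. A minimal 2-D sketch of that idea (the paper applies it to three-dimensional filters; the levels and sizes here are illustrative):

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Pool a feature map at several grid sizes and concatenate, producing a
    fixed-length vector regardless of input spatial size (the SPP idea)."""
    n, c = fmap.shape[:2]
    pooled = [F.adaptive_max_pool2d(fmap, g).reshape(n, -1) for g in levels]
    return torch.cat(pooled, dim=1)      # length c * (1 + 4 + 16)

for size in [(9, 9), (13, 13)]:          # different spatial extents
    v = spatial_pyramid_pool(torch.randn(2, 8, *size))
    print(v.shape)                       # always (2, 168)
```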
Detection of prostate cancer on multiparametric MRI
NASA Astrophysics Data System (ADS)
Seah, Jarrel C. Y.; Tang, Jennifer S. N.; Kitchen, Andy
2017-03-01
In this manuscript, we describe our approach and methods to the ProstateX challenge, which achieved an overall AUC of 0.84 and the runner-up position. We train a deep convolutional neural network to classify lesions marked on multiparametric MRI of the prostate as clinically significant or not. We implement a novel addition to the standard convolutional architecture described as auto-windowing which is clinically inspired and designed to overcome some of the difficulties faced in MRI interpretation, where high dynamic ranges and low contrast edges may cause difficulty for traditional convolutional neural networks trained on high contrast natural imagery. We demonstrate that this system can be trained end to end and outperforms a similar architecture without such additions. Although a relatively small training set was provided, we use extensive data augmentation to prevent overfitting and transfer learning to improve convergence speed, showing that deep convolutional neural networks can be feasibly trained on small datasets.
No-reference image quality assessment based on statistics of convolution feature maps
NASA Astrophysics Data System (ADS)
Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo
2018-04-01
We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that Natural Scene Statistics (NSS) features computed on convolution feature maps are significantly sensitive to the degree of distortion in an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating CFM. We design a forward NSS layer which operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer are effective in describing the distortion type and degree an image has suffered. Finally, a Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict a subjective quality score for a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.
Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography
NASA Astrophysics Data System (ADS)
Menke, W. H.
2017-12-01
We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for sensitivity with respect to density, Lamé parameter and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field, pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolving kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.
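For reference, the cross-convolution misfit between two components i and j can be written (following Menke and Levin's two-component formulation; the normalization shown here is an assumption) as

    E_{ij} = \int \left[ u_i^{\mathrm{obs}}(t) * u_j^{\mathrm{ref}}(t) - u_j^{\mathrm{obs}}(t) * u_i^{\mathrm{ref}}(t) \right]^2 \, dt ,

where * denotes temporal convolution. E_{ij} vanishes when the observed and reference fields share a common source time function, which is what eliminates the source from the misfit.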
Bloemen-van Gurp, Esther J; Mijnheer, Ben J; Verschueren, Tom A M; Lambin, Philippe
2007-11-15
Our aim was to predict the three-dimensional dose distribution of our total body irradiation technique using a commercial treatment planning system (TPS). In vivo dosimetry, using metal oxide field effect transistors (MOSFETs) and thermoluminescence detectors (TLDs), was used to verify the calculated dose distributions. A total body computed tomography scan was performed and loaded into our TPS, and a three-dimensional dose distribution was generated. In vivo dosimetry was performed at five locations on the patient. Entrance and exit dose values were converted to midline doses using conversion factors, previously determined with phantom measurements. The TPS-predicted dose values were compared with the MOSFET and TLD in vivo dose values. The MOSFET and TLD dose values agreed within 3.0% and the MOSFET and TPS data within 0.5%. The convolution algorithm of the TPS, which is routinely applied in the clinic, overestimated the dose in the lung region. Using a superposition algorithm reduced the calculated lung dose by approximately 3%. The dose inhomogeneity, as predicted by the TPS, can be reduced using a simple intensity-modulated radiotherapy technique. The use of a TPS to calculate the dose distributions in individual patients during total body irradiation is strongly recommended. Using a TPS gives good insight into the over- and underdosage in a patient and the influence of patient positioning on dose homogeneity. MOSFETs are suitable for in vivo dosimetry purposes during total body irradiation, when using appropriate conversion factors. The MOSFET, TLD, and TPS results agreed within acceptable margins.
Stern, Robin L; Heaton, Robert; Fraser, Martin W; Goddu, S Murty; Kirby, Thomas H; Lam, Kwok Leung; Molineu, Andrea; Zhu, Timothy C
2011-01-01
The requirement of an independent verification of the monitor units (MU) or time calculated to deliver the prescribed dose to a patient has been a mainstay of radiation oncology quality assurance. The need for and value of such a verification was obvious when calculations were performed by hand using look-up tables, and the verification was achieved by a second person independently repeating the calculation. However, in a modern clinic using CT/MR/PET simulation, computerized 3D treatment planning, heterogeneity corrections, and complex calculation algorithms such as convolution/superposition and Monte Carlo, the purpose of and methodology for the MU verification have come into question. In addition, since the verification is often performed using a simpler geometrical model and calculation algorithm than the primary calculation, exact or almost exact agreement between the two can no longer be expected. Guidelines are needed to help the physicist set clinically reasonable action levels for agreement. This report addresses the following charges of the task group: (1) To re-evaluate the purpose and methods of the "independent second check" for monitor unit calculations for non-IMRT radiation treatment in light of the complexities of modern-day treatment planning. (2) To present recommendations on how to perform verification of monitor unit calculations in a modern clinic. (3) To provide recommendations on establishing action levels for agreement between primary calculations and verification, and to provide guidance in addressing discrepancies outside the action levels. These recommendations are to be used as guidelines only and shall not be interpreted as requirements.
Expression of the vesicular glutamate transporter vGluT2 in a subset of cones of the mouse retina.
Wässle, Heinz; Regus-Leidig, Hanna; Haverkamp, Silke
2006-06-01
Cone photoreceptors have a continuous release of glutamate that is modulated by light. Vesicular glutamate transporters (vGluT) play an essential role for sustaining this release by loading synaptic vesicles in the cone synapse, the so-called cone pedicle. In the present study mouse retinas were immunostained for vGluT1 and vGluT2. vGluT1 was localized to all cone pedicles and rod spherules, whereas vGluT2 was found in only 10% of the cone pedicles. The vGluT2-expressing cones were characterized in more detail. They are distributed in a regular array, suggesting they are a distinct type. Their proportion does not differ between dorsal (L-cone-dominated) and ventral (S-cone-dominated) retina, and they are not the genuine blue cones of the mouse retina. During development, vGluT1 and vGluT2 expression in cones starts at around P0 and right from the beginning vGluT2 is only expressed in a subset of cones. Bipolar cells contact the vGluT2-expressing cones and other cones nonselectively. The possible functional role of vGluT2 expression in a small fraction of cones is discussed.
NASA Astrophysics Data System (ADS)
Prado, F. O.; de Almeida, N. G.; Duzzioni, E. I.; Moussa, M. H. Y.; Villas-Boas, C. J.
2011-07-01
In this paper we detail some results advanced in a recent letter [Prado et al., Phys. Rev. Lett. 102, 073008 (2009)] showing how to engineer reservoirs for two-level systems at absolute zero by means of a time-dependent master equation leading to a nonstationary superposition equilibrium state. We also present a general recipe showing how to build nonadiabatic coherent evolutions of a fermionic system interacting with a bosonic mode, and investigate the influence of thermal reservoirs at finite temperature on the fidelity of the protected superposition state. Our analytical results are supported by numerical analysis of the full Hamiltonian model.
NASA Astrophysics Data System (ADS)
Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav
2016-09-01
In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.
Ding, Xi-Qin; Matveev, Alexander; Singh, Anil; Komori, Naoka; Matsumoto, Hiroyuki
2012-01-01
Cone vision mediated by photoreceptor cyclic nucleotide-gated (CNG) channel is essential for central and color vision and visual acuity. Cone CNG channel is composed of two structurally related subunit types, CNGA3 and CNGB3. Naturally occurring mutations in cone CNG channel are associated with a variety of cone diseases including achromatopsia, progressive cone dystrophy, and some maculopathies. Nevertheless, our understanding of the structure of cone CNG channel is quite limited. This is, in part, due to the challenge of studying cones in a rod-dominant mammalian retina. We have demonstrated a robust expression of cone CNG channel and lack of rod CNG channel in the cone-dominant Nrl−/− retina and shown that the Nrl−/− mouse line is a valuable model to study cone CNG channel. This work examined the complex structure of cone CNG channel using infrared fluorescence Western detection combined with chemical cross-linking and blue native-PAGE. Our results suggest that the native cone CNG channel is a heterotetrameric complex likely at a stoichiometry of three CNGA3 and one CNGB3. PMID:22183405
NASA Astrophysics Data System (ADS)
Liu, Wanjun; Liang, Xuejian; Qu, Haicheng
2017-11-01
Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community. Both traditional and deep learning-based classification methods have been proposed constantly in recent years. In order to improve classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) is proposed in this paper. DVCNN is a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN is a set of 3D patches selected from the HSI, which contain joint spectral-spatial information. In the feature extraction process that follows, each patch is transformed into several different 1D vectors by 3D convolution kernels, which are able to extract features from spectral-spatial data. The rest of DVCNN is much the same as a general CNN and processes the 2D matrix constituted by all the 1D data. DVCNN can therefore not only extract more accurate and richer features than a CNN, but also fuse spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands is enhanced by the 3D convolution in the spectral-spatial fusion process, and the calculation is simplified by the dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results show that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and by 19.63% on Pavia University scene relative to a spectral-only CNN. The maximum accuracy improvement achieved by DVCNN over other state-of-the-art HSI classification methods was 13.72%, and the robustness of DVCNN to noise on water-absorption bands was demonstrated.
NASA Astrophysics Data System (ADS)
Kereszturi, Gábor; Németh, Károly; Cronin, Shane J.; Agustín-Flores, Javier; Smith, Ian E. M.; Lindsay, Jan
2013-10-01
Monogenetic basaltic volcanism is characterised by a complex array of behaviours in the spatial distribution of magma output and also temporal variability in magma flux and eruptive frequency. Investigating this in detail is hindered by the difficulty in evaluating ages of volcanic events as well as volumes erupted in each volcano. Eruptive volumes are an important input parameter for volcanic hazard assessment and may control eruptive scenarios, especially transitions between explosive and effusive behaviour and the length of eruptions. Erosion, superposition and lack of exposure limit the accuracy of volume determination, even for very young volcanoes. In this study, a systematic volume estimation model is developed and applied to the Auckland Volcanic Field in New Zealand. In this model, a basaltic monogenetic volcano is categorised into six parts. Subsurface portions of volcanoes, such as diatremes beneath phreatomagmatic volcanoes, or crater infills, are approximated by geometrical considerations, based on exposed analogue volcanoes. Positive volcanic landforms, such as scoria/spatter cones, tephra rings and lava flows, were defined by using a Light Detection and Ranging (LiDAR) survey-based Digital Surface Model (DSM). Finally, the distal tephra associated with explosive eruptions was approximated using published relationships that relate original crater size to ejecta volumes. Considering only those parts with high reliability, the overall magma output (converted to Dense Rock Equivalent) for the post-250 ka active Auckland Volcanic Field in New Zealand is a minimum of 1.704 km3. This is made up of 1.329 km3 in lava flows, 0.067 km3 in phreatomagmatic crater lava infills, 0.090 km3 within tephra/tuff rings, 0.112 km3 inside crater lava infills, and 0.104 km3 within scoria cones. Using the minimum eruptive volumes, the spatial and temporal magma fluxes are estimated at 0.005 km3/km2 and 0.007 km3/ka. The temporal-volumetric evolution of Auckland is characterised by an increasing magma flux in the last 40 ky, which is inferred to be triggered by plate tectonic processes (e.g. increased asthenospheric shearing and backarc spreading beneath the Auckland region).
Earth Observations taken by the Expedition 17 Crew
2008-10-21
ISS017-E-020538 (21 Oct. 2008) --- Arkenu Craters 1 and 2 in Libya are featured in this image photographed by an Expedition 17 crewmember on the International Space Station. Geologists often study features on Earth, such as impact craters, to gain insight into processes that occur on other planets. On Earth, more than 150 impact craters have been identified on the continents, but only a few of these are classified as double impact craters. One such example, the Arkenu Craters in northern Africa, is shown in this image. Arkenu 1 and 2 are double impact structures located in eastern Libya (22.04 degrees north latitude and 23.45 degrees east longitude) in the Sahara desert, with diameters of approximately 6.8 kilometers and 10.3 kilometers, respectively. The craters are unusual in that they both exhibit concentric annular ridge structures (gray circles in the image indicate the position of the outermost visible ridges). In many terrestrial complex craters these features are highly eroded and no longer visible. While the circular structure of these features had been noted, the impact origin hypothesis was strengthened in December 2003 when a field team observed shatter cones -- conical-shaped features in rocks created by the high shock pressures generated during impact. Large outcrops of impact breccias -- a jumble of rock fragments generated at the impact site that are now cemented together into an identifiable rock layer -- were also observed by the field team. Two impactors, each approximately 500 meters in diameter, are thought to have created the craters. According to scientists, the age of the impact event has been dated as occurring less than 140 million years ago. While the presence of shatter cones and impact breccias is generally considered to be strong evidence for meteor impact, some scientists now question the interpretation of these features observed at the Arkenu structures and suggest that they were caused by erosive and volcanic processes. At present, both craters are being crossed by linear dunes extending northeast-southwest -- the superposition of the dunes across the annular ridges indicates that they are much younger than the craters.
ERIC Educational Resources Information Center
Bowen, J. Philip; Sorensen, Jennifer B.; Kirschner, Karl N.
2007-01-01
The analysis explains the basis set superposition error (BSSE) and the fragment relaxation involved in calculating interaction energies using various first-principles theories. Correlating the interacting fragments and increasing the size of the basis set can decrease the BSSE to a great extent.
NASA Astrophysics Data System (ADS)
An, Nguyen Ba
2009-04-01
Three novel probabilistic yet conclusive schemes are proposed to teleport a general two-mode coherent-state superposition via attenuated quantum channels with ideal and/or threshold detectors. The calculated total success probability is highest (lowest) when only ideal (threshold) detectors are used.
The principle of superposition in human prehension.
Zatsiorsky, Vladimir M; Latash, Mark L; Gao, Fan; Shim, Jae Kun
2004-03-01
The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: "Grasp the object stronger/weaker to prevent slipping" and "Maintain the rotational equilibrium of the object". The effects of the two commands are summed up.
Simulated Holograms: A Simple Introduction to Holography.
ERIC Educational Resources Information Center
Dittmann, H.; Schneider, W. B.
1992-01-01
Describes a project that uses a computer and a dot matrix printer to simulate the holographic recording process of simple object structures. The process' four steps are (1) superposition of waves; (2) representing the superposition of a plane reference wave on the monitor screen; (3) photographic reduction of the images; and (4) reconstruction of…
Measurement of the Mutual Interference Between Independent Bluetooth Devices
NASA Astrophysics Data System (ADS)
Schoof, Adrien; Ter Haseborg, Jan Luiken
In this paper the field superposition of commercial Bluetooth transmitters is examined. The superposition is measured for various analyzer filter bandwidths and for different combinations and numbers of transmitters. The frequency of collisions is also measured. Finally, the spatial field distributions of standalone and Bluetooth-equipped devices are measured, presented, and discussed.
Classification of ligand molecules in PDB with graph match-based structural superposition.
Shionyu-Mitsuyama, Clara; Hijikata, Atsushi; Tsuji, Toshiyuki; Shirai, Tsuyoshi
2016-12-01
The fast heuristic graph match algorithm for small molecules, COMPLIG, was improved by adding a structural superposition process to verify the atom-atom matching. The modified method was used to classify the small molecule ligands in the Protein Data Bank (PDB) by their three-dimensional structures, and 16,660 types of ligands in the PDB were classified into 7561 clusters. In contrast, a classification by a previous method (without structure superposition) generated 3371 clusters from the same ligand set. The characteristic feature in the current classification system is the increased number of singleton clusters, which contained only one ligand molecule in a cluster. Inspections of the singletons in the current classification system but not in the previous one implied that the major factors for the isolation were differences in chirality, cyclic conformations, separation of substructures, and bond length. Comparisons between current and previous classification systems revealed that the superposition-based classification was effective in clustering functionally related ligands, such as drugs targeted to specific biological processes, owing to the strictness of the atom-atom matching.
Multiple quantum coherence spectroscopy.
Mathew, Nathan A; Yurs, Lena A; Block, Stephen B; Pakoulev, Andrei V; Kornau, Kathryn M; Wright, John C
2009-08-20
Multiple quantum coherences provide a powerful approach for studies of complex systems because increasing the number of quantum states in a quantum mechanical superposition state increases the selectivity of a spectroscopic measurement. We show that frequency domain multiple quantum coherence multidimensional spectroscopy can create these superposition states using different frequency excitation pulses. The superposition state is created using two excitation frequencies to excite the symmetric and asymmetric stretch modes in a rhodium dicarbonyl chelate and the dynamic Stark effect to climb the vibrational ladders involving different overtone and combination band states. A monochromator resolves the free induction decay of different coherences comprising the superposition state. The three spectral dimensions provide the selectivity required to observe 19 different spectral features associated with fully coherent nonlinear processes involving up to 11 interactions with the excitation fields. The different features act as spectroscopic probes of the diagonal and off-diagonal parts of the molecular potential energy hypersurface. This approach can be considered as a coherent pump-probe spectroscopy where the pump is a series of excitation pulses that prepares a multiple quantum coherence and the probe is another series of pulses that creates the output coherence.
Sagnac interferometry with coherent vortex superposition states in exciton-polariton condensates
NASA Astrophysics Data System (ADS)
Moxley, Frederick Ira; Dowling, Jonathan P.; Dai, Weizhong; Byrnes, Tim
2016-05-01
We investigate prospects of using counter-rotating vortex superposition states in nonequilibrium exciton-polariton Bose-Einstein condensates for the purposes of Sagnac interferometry. We first investigate the stability of vortex-antivortex superposition states, and show that they survive at steady state in a variety of configurations. Counter-rotating vortex superpositions are of potential interest to gyroscope and seismometer applications for detecting rotations. Methods of improving the sensitivity are investigated by targeting high momentum states via metastable condensation, and the application of periodic lattices. The sensitivity of the polariton gyroscope is compared to its optical and atomic counterparts. Due to the large interferometer areas in optical systems and small de Broglie wavelengths for atomic BECs, the sensitivity per detected photon is found to be considerably less for the polariton gyroscope than with competing methods. However, polariton gyroscopes have an advantage over atomic BECs in a high signal-to-noise ratio, and have other practical advantages such as room-temperature operation, area independence, and robust design. We estimate that the final sensitivities including signal-to-noise aspects are competitive with existing methods.
Primate Short-Wavelength Cones Share Molecular Markers with Rods
Craft, Cheryl M.; Huang, Jing; Possin, Daniel E.; Hendrickson, Anita
2015-01-01
Macaca, Callithrix jacchus (marmoset), Pan troglodytes (chimpanzee) and human retinas were examined to determine whether short-wavelength (S) cones share molecular markers with L&M cone or rod photoreceptors. S cones showed consistent differences from L&M cones in their immunohistochemical staining and expression levels of “rod” Arrestin1 (S-Antigen), “cone” Arrestin4, cone alpha transducin, and Calbindin. Our data verify a similar pattern of expression in these primate retinas and provide clues to the structural divergence of rods and S cones versus L&M cones, suggesting S cone retinal function is “intermediate” between them. PMID:24664680
An Analysis Model for Water Cone Subsidence in Bottom Water Drive Reservoirs
NASA Astrophysics Data System (ADS)
Wang, Jianjun; Xu, Hui; Wu, Shucheng; Yang, Chao; Kong, lingxiao; Zeng, Baoquan; Xu, Haixia; Qu, Tailai
2017-12-01
Water coning in bottom water drive reservoirs, which results in earlier water breakthrough, a rapid increase in water cut, and a low recovery level, has drawn tremendous attention in the petroleum engineering field. As a simple and effective method to inhibit bottom water coning, shut-in coning control is usually preferred in the oilfield to control the water cone and thereby enhance economic performance. However, most water-coning research has investigated the behavior of the cone as it grows; reported studies of water cone subsidence are very scarce. The goal of this work is to present an analytical model for the subsidence of the water cone after the well is shut in. Based on the Dupuit critical oil production rate formula, an analytical model is developed to estimate the initial water cone shape at the point of critical drawdown. Then, with the initial cone-shape equation, we propose an analysis model for water cone subsidence in bottom water drive reservoirs. Model analysis and several sensitivity studies are conducted. This work presents an accurate and fast analytical model of water cone subsidence in bottom water drive reservoirs. Given the recent interest in developing such reservoirs, our approach provides a promising technique for better understanding the subsidence of the water cone.
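As a schematic illustration only: a Dupuit-type critical-rate estimate, below which the cone should remain stable. The prefactor and unit conventions vary between published derivations, so every symbol and constant here is an assumption rather than the authors' model.

    import math

    def dupuit_critical_rate(k, drho, h_o, h_p, mu, r_e, r_w, g=9.81):
        """Schematic Dupuit-type critical oil rate (m^3/s) for bottom-water
        coning. k: permeability (m^2); drho: water-oil density difference
        (kg/m^3); h_o: oil column (m); h_p: perforated interval (m);
        mu: oil viscosity (Pa*s); r_e, r_w: drainage and wellbore radii (m).
        The pi/ln(r_e/r_w) prefactor is one common form; published
        derivations differ in their constants."""
        return math.pi * k * drho * g * (h_o**2 - h_p**2) / (mu * math.log(r_e / r_w))

    q_c = dupuit_critical_rate(k=1e-13, drho=200.0, h_o=30.0, h_p=10.0,
                               mu=2e-3, r_e=300.0, r_w=0.1)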
Li, Xia; Li, Wensheng; Dai, Xufeng; Kong, Fansheng; Zheng, Qinxiang; Zhou, Xiangtian; Lü, Fan; Chang, Bo; Rohrer, Bärbel; Hauswirth, William. W.; Qu, Jia; Pang, Ji-jing
2011-01-01
Purpose. RPE65 function is necessary in the retinal pigment epithelium (RPE) to generate chromophore for all opsins. Its absence results in vision loss and rapid cone degeneration. Recent Leber congenital amaurosis type 2 (LCA with RPE65 mutations) phase I clinical trials demonstrated restoration of vision on RPE65 gene transfer into RPE cells overlying cones. In the rd12 mouse, a naturally occurring model of RPE65-LCA early cone degeneration was observed; however, some peripheral M-cones remained. A prior study showed that AAV-mediated RPE65 expression can prevent early cone degeneration. The present study was conducted to test whether the remaining cones in older rd12 mice can be rescued. Methods. Subretinal treatment with the scAAV5-smCBA-hRPE65 vector was initiated at postnatal day (P)14 and P90. After 2 months, electroretinograms were recorded, and cone morphology was analyzed by using cone-specific peanut agglutinin and cone opsin–specific antibodies. Results. Cone degeneration started centrally and spread ventrally, with cells losing cone-opsin staining before that for the PNA-lectin–positive cone sheath. Gene therapy starting at P14 resulted in almost wild-type M- and S-cone function and morphology. Delaying gene-replacement rescued the remaining M-cones, and most important, more M-cone opsin–positive cells were identified than were present at the onset of gene therapy, suggesting that opsin expression could be reinitiated in cells with cone sheaths. Conclusions. The results support and extend those of the previous study that gene therapy can stop early cone degeneration, and, more important, they provide proof that delayed treatment can restore the function and morphology of the remaining cones. These results have important implications for the ongoing LCA2 clinical trials. PMID:21169527
NASA Astrophysics Data System (ADS)
Kervyn, M.; Ernst, G. G. J.; Carracedo, J.-C.; Jacobs, P.
2012-01-01
Volcanic cones are the most common volcanic constructs on Earth. Their shape can be quantified using two morphometric ratios: the crater/cone base ratio (W_cr/W_co) and the cone height/width ratio (H_co/W_co). The average values for these ratios obtained over entire cone fields have been explained by the repose angle of loose granular material (i.e. scoria) controlling cone slopes. The observed variability in these ratios between individual cones has been attributed to the effect of erosional processes or contrasting eruptive conditions on cone morphometry. Using a GIS-based approach, high spatial resolution Digital Elevation Models and airphotos, two new geomorphometry datasets for cone fields at Mauna Kea (Hawaii, USA) and Lanzarote (Canary Islands, Spain) are extracted and analyzed here. The key observation in these datasets is the great variability in morphometric ratios, even for simple-shape and well-preserved cones. Simple analog experiments are presented to analyze factors influencing the morphometric ratios. The formation of a crater is simulated within an analog cone (i.e. a sand pile) by opening a drainage conduit at the cone base. Results from experiments show that variability in the morphometric ratios can be attributed to variations in the width, height and horizontal offset of the drainage point relative to the cone symmetry axis, to the dip of the underlying slope or to the influence of a small proportion of fine cohesive material. GIS analysis and analog experiments, together with specific examples of cones documented in the field, suggest that the morphometric ratios for well-preserved volcanic cones are controlled by a combination of 1) the intrinsic cone material properties, 2) time-dependent eruption conditions, 3) the local setting, and 4) the method used to estimate the cone height. Implications for interpreting cone morphometry solely as either an age or as an eruption condition indicator are highlighted.
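Once crater width, cone basal width, and cone height have been measured from a DEM, the two ratios are trivial to compute; a minimal helper, with made-up measurements:

    def cone_morphometry(w_crater, w_cone, h_cone):
        """Return the two standard scoria-cone morphometric ratios."""
        return {"Wcr/Wco": w_crater / w_cone, "Hco/Wco": h_cone / w_cone}

    # Fresh cones are often quoted near Wcr/Wco ~ 0.40 and Hco/Wco ~ 0.18;
    # the measurements below are illustrative values only.
    print(cone_morphometry(w_crater=240.0, w_cone=600.0, h_cone=110.0))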
Ahnelt, P K; Hokoç, J N; Röhlich, P
1995-01-01
The retinas of placental mammals appear to lack the large number and morphological diversity of cone subtypes found in diurnal reptiles. We have now studied the photoreceptor layer of a South American marsupial (Didelphis marsupialis aurita) by peanut agglutinin labeling of the cone sheath and by labeling of cone outer segments with monoclonal anti-visual pigment antibodies that have been proven to consistently label middle-to-long wavelength (COS-1) and short-wavelength (OS-2) cone subpopulations in placental mammals. Besides a dominant rod population (max. = 400,000/mm2) four subtypes of cones (max. = 3000/mm2) were identified. The outer segments of three cone subtypes were labeled by COS-1: a double cone with a principal cone containing a colorless oil droplet, a single cone with oil droplet, and another single cone. A second group of single cones lacking oil droplets was labeled by OS-2 antibody. The topography of these cone subtypes showed striking anisotropies. The COS-1 labeled single cones without oil droplets were found all over the retina and constituted the dominant population in the area centralis located in the temporal quadrant of the upper, tapetal hemisphere. The population of OS-2 labeled cones was also ubiquitous although slightly higher in the upper hemisphere (200/mm2). The COS-1 labeled cones bearing an oil droplet, including the principal member of double cones, were concentrated (800/mm2) in the inferior, non-tapetal half of the retina. The two spectral types of single cones resemble those of dichromatic photopic systems in most placental mammals. The additional set of COS-1 labeled cones is a distinct marsupial feature. The presence of oil droplets in this cone subpopulation, its absence in the area centralis, and the correlation with the non-tapetal inferior hemisphere suggest a functional specialization, possibly for mesopic conditions. Thus, sauropsid features have been retained but probably with a modified function.
UAS Collision Avoidance Algorithm that Minimizes the Impact on Route Surveillance
2009-03-01
Zou, Leilei; Zhu, Xiaoyu; Liu, Rui; Ma, Fei; Yu, Manrong
2018-01-01
Purpose To analyze the changes of refraction and metabolism of the retinal cones under monochromatic lights in guinea pigs. Methods Sixty guinea pigs were randomly divided into a short-wavelength light (SL) group, a middle-wavelength light (ML) group, and a white light (WL) group. Refraction and axial length were measured before and after 10-week illumination. The densities of S-cones and M-cones were determined by retinal cone immunocytochemistry, and the expressions of S-opsins and M-opsins were determined by real-time PCR and Western blot. Results After 10-week illumination, the guinea pigs developed relative hyperopia in the SL group and relative myopia in the ML group. Compared with the WL group, the density of S-cones and S-opsins increased while M-cones and M-opsins decreased in the SL group (all, p < 0.05); conversely, the density of S-cones and S-opsins decreased while M-cones and M-opsins increased in the ML group (all, p < 0.05). Increased S-cones/opsins and decreased M-cones/opsins were induced by short-wavelength lights. Decreased S-cones/opsins and increased M-cones/opsins were induced by middle-wavelength lights. Conclusions Altered retinal cones/opsins induced by monochromatic lights might be involved in the refractive development in guinea pigs. PMID:29675275
Toward quantum superposition of living organisms
NASA Astrophysics Data System (ADS)
Romero-Isart, Oriol; Juan, Mathieu L.; Quidant, Romain; Cirac, J. Ignacio
2010-03-01
The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deléglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6 Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schrödinger's cat 'gedanken' paradigm (Schrödinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.
Rapid scatter estimation for CBCT using the Boltzmann transport equation
NASA Astrophysics Data System (ADS)
Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh
2014-03-01
Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in a dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved visualization of low contrast objects. In total, 24 projections were simulated with an AcurosCTS execution time of 22 sec/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.
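A toy 1D version of the SKS idea, assuming a single symmetric kernel and a multiplicative perturbation of the scatter estimate; real SKS uses thickness-adaptive 2D kernels, and in this paper the perturbation comes from Boltzmann-solver reference data.

    import numpy as np
    from scipy.signal import fftconvolve

    def sks_correct(measured, kernel, n_iter=5, perturbation=1.0):
        """Iterative scatter deconvolution: primary = measured - K (*) primary."""
        primary = measured.copy()
        for _ in range(n_iter):
            scatter = perturbation * fftconvolve(primary, kernel, mode="same")
            primary = measured - scatter
        return primary, scatter

    signal = np.zeros(256)
    signal[100:150] = 1.0  # toy primary profile
    kernel = np.exp(-np.linspace(-3, 3, 51) ** 2)
    kernel *= 0.2 / kernel.sum()  # toy scatter amplitude of 20%
    measured = signal + fftconvolve(signal, kernel, mode="same")
    primary_est, scatter_est = sks_correct(measured, kernel)

The fixed-point iteration converges when the kernel's integrated scatter fraction is below one, which holds for the 20% amplitude assumed here.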
A national dosimetry audit for stereotactic ablative radiotherapy in lung.
Distefano, Gail; Lee, Jonny; Jafari, Shakardokht; Gouldstone, Clare; Baker, Colin; Mayles, Helen; Clark, Catharine H
2017-03-01
A UK national dosimetry audit was carried out to assess the accuracy of Stereotactic Ablative Body Radiotherapy (SABR) lung treatment delivery. This mail-based audit used an anthropomorphic thorax phantom containing nine alanine pellets positioned in the lung region for dosimetry, as well as EBT3 film in the axial plane for isodose comparison. Centres used their local planning protocol/technique, creating 27 SABR plans. A range of delivery techniques including conformal, volumetric modulated arc therapy (VMAT) and CyberKnife (CK) were used with six different calculation algorithms (collapsed cone, superposition, pencil beam (PB), AAA, Acuros and Monte Carlo). The mean difference between measured and calculated dose (excluding PB results) was 0.4±1.4% for alanine and 1.4±3.4% for film. PB differences were -6.1% and -12.9%, respectively. The median of the absolute maximum isodose-to-isodose distances was 3 mm (-6 mm to 7 mm) and 5 mm (-10 mm to +19 mm) for the 100% and 50% isodose lines, respectively. Alanine and film are an effective combination for verifying dosimetric and geometric accuracy. There were some differences across dose algorithms, and geometric accuracy was better for VMAT and CK than for conformal techniques. The alanine dosimetry results showed that planned and delivered doses were within ±3.0% for 25/27 SABR plans. Copyright © 2017 Elsevier B.V. All rights reserved.
Variability in human cone topography assessed by adaptive optics scanning laser ophthalmoscopy
Zhang, Tianjiao; Godara, Pooja; Blanco, Ernesto R.; Griffin, Russell L; Wang, Xiaolin; Curcio, Christine A.; Zhang, Yuhua
2015-01-01
Purpose To assess between- and within-individual variability of macular cone topography in the eyes of young adults. Design Observational case series. Methods Cone photoreceptors in 40 eyes of 20 subjects aged 19–29 years with normal maculae were imaged using a research adaptive optics scanning laser ophthalmoscope. Refractive errors ranged from −3.0 D to 0.63 D and differed by <0.50 D in fellow eyes. Cone density was assessed on a two-dimensional sampling grid over the central 2.4 mm × 2.4 mm. Between-individual variability was evaluated by the coefficient of variation (CV). Within-individual variability was quantified by the maximum difference and the root-mean-square (RMS) difference. Cones were cumulated over increasing eccentricity. Results Peak densities of foveal cones are 168,162 ± 23,529 cones/mm2 (mean ± SD) (CV = 0.14). The number of cones within the cone-dominated foveola (0.8–0.9 mm diameter) is 38,311 ± 2,319 (CV = 0.06). The RMS cone density difference between fellow eyes is 6.78%, and the maximum difference is 23.6%. Mixed-model statistical analysis found no difference in the association between eccentricity and cone density in the superior/nasal (p=0.8503), superior/temporal (p=0.1551), inferior/nasal (p=0.8609), and inferior/temporal (p=0.6662) quadrants of fellow eyes. Conclusions New instrumentation imaged the smallest foveal cones, thus allowing accurate assignment of foveal centers and assessment of variability in macular cone density in a large sample of eyes. Though cone densities vary significantly in the fovea, the total number of foveolar cones is very similar both between and within subjects. Thus, the total number of foveolar cones may be an important measure of cone degeneration and loss. PMID:25935100
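The two variability measures are straightforward to compute; in the sketch below the percent differences are normalized by the two-eye mean, which is an assumption since the paper's exact definition is not reproduced here, and the density values are made up.

    import numpy as np

    def coefficient_of_variation(x):
        return np.std(x, ddof=1) / np.mean(x)

    def rms_percent_difference(left, right):
        """RMS of paired percent differences between fellow eyes."""
        pct = 200.0 * (left - right) / (left + right)
        return np.sqrt(np.mean(pct ** 2))

    left = np.array([160e3, 175e3, 150e3])   # illustrative peak densities, cones/mm2
    right = np.array([155e3, 180e3, 162e3])
    print(coefficient_of_variation(np.r_[left, right]),
          rms_percent_difference(left, right))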
NASA Technical Reports Server (NTRS)
Clark, R. T.; Mccallister, R. D.
1982-01-01
The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
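For illustration, a rate-1/2 convolutional encoder of constraint length five; the generator polynomials (23, 35 in octal) are a commonly cited optimal pair for K = 5, though the report's exact generators are not stated in this abstract.

    def conv_encode(bits, g1=0o23, g2=0o35, k=5):
        """Rate-1/2 convolutional encoder: two output bits per input bit."""
        state, out = 0, []
        for b in bits:
            state = ((state << 1) | b) & ((1 << k) - 1)  # K-bit shift register
            out += [bin(state & g1).count("1") & 1,      # parity of taps, generator 1
                    bin(state & g2).count("1") & 1]      # parity of taps, generator 2
        return out

    # A zero tail flushes the register so a Viterbi decoder can terminate the trellis.
    print(conv_encode([1, 0, 1, 1, 0, 0, 0, 0]))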
A unitary convolution approximation for the impact-parameter dependent electronic energy loss
NASA Astrophysics Data System (ADS)
Schiwietz, G.; Grande, P. L.
1999-06-01
In this work, we propose a simple method to calculate the impact-parameter dependence of the electronic energy loss of bare ions for all impact parameters. This perturbative convolution approximation (PCA) is based on first-order perturbation theory, and thus, it is only valid for fast particles with low projectile charges. Using Bloch's stopping-power result and a simple scaling, we get rid of the restriction to low charge states and derive the unitary convolution approximation (UCA). Results of the UCA are then compared with full quantum-mechanical coupled-channel calculations for the impact-parameter dependent electronic energy loss.
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.
1976-01-01
The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
NASA Technical Reports Server (NTRS)
Truong, T. K.; Lipes, R.; Reed, I. S.; Wu, C.
1980-01-01
A fast algorithm is developed to compute two-dimensional convolutions of an array of d_1 x d_2 complex number points, where d_2 = 2^m and d_1 = 2^(m-r+1) for some 1 <= r <= m. This algorithm requires fewer multiplications and about the same number of additions as the conventional fast Fourier transform method for computing the two-dimensional convolution. It also has the advantage that the operation of transposing the matrix of data can be avoided.
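The conventional FFT baseline that the abstract compares against is pointwise multiplication of 2D transforms, which yields the circular convolution; the paper's contribution is an algorithm needing fewer multiplications than this.

    import numpy as np

    def conv2d_circular(a, b):
        """Circular 2D convolution of equal-size arrays via the FFT."""
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

    a = np.random.rand(8, 16)  # a d1 x d2 grid of samples
    b = np.random.rand(8, 16)
    c = conv2d_circular(a, b)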
Cascaded K-means convolutional feature learner and its application to face recognition
NASA Astrophysics Data System (ADS)
Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu
2017-09-01
Currently, considerable efforts have been devoted to devising image representations. However, handcrafted methods need strong domain knowledge and show low generalization ability, and conventional feature learning methods require enormous training data and rich parameter-tuning experience. A lightened feature learner, which shares a topology similar to that of a convolutional neural network, is presented to solve these problems, with application to face recognition. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn the convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture nonlinear features. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on extended Yale B and FERET. In addition, our method achieves the best identification performance on the AR and Labeled Faces in the Wild datasets among the comparative methods.
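A compact sketch of the first two stages, K-means filter learning followed by convolution and tanh, using scikit-learn and SciPy; the single-level mean pool at the end is a crude stand-in for the paper's multilevel second-order pyramid pooling, and all sizes and data are illustrative.

    import numpy as np
    from scipy.signal import fftconvolve
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.image import extract_patches_2d

    def learn_filters(images, n_filters=8, patch=7):
        patches = np.vstack([extract_patches_2d(im, (patch, patch), max_patches=200)
                             .reshape(-1, patch * patch) for im in images])
        patches -= patches.mean(1, keepdims=True)  # remove per-patch mean
        km = KMeans(n_clusters=n_filters, n_init=4).fit(patches)
        return km.cluster_centers_.reshape(n_filters, patch, patch)

    def encode(image, filters):
        maps = [np.tanh(fftconvolve(image, f, mode="valid")) for f in filters]
        return np.array([m.mean() for m in maps])  # stand-in for pyramid pooling

    faces = [np.random.rand(32, 32) for _ in range(10)]  # placeholder images
    feature_vec = encode(faces[0], learn_filters(faces))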
NASA Astrophysics Data System (ADS)
Wu, Leyuan
2018-01-01
We present a brief review of gravity forward algorithms in Cartesian coordinate system, including both space-domain and Fourier-domain approaches, after which we introduce a truly general and efficient algorithm, namely the convolution-type Gauss fast Fourier transform (Conv-Gauss-FFT) algorithm, for 2D and 3D modeling of gravity potential and its derivatives due to sources with arbitrary geometry and arbitrary density distribution which are defined either by discrete or by continuous functions. The Conv-Gauss-FFT algorithm is based on the combined use of a hybrid rectangle-Gaussian grid and the fast Fourier transform (FFT) algorithm. Since the gravity forward problem in Cartesian coordinate system can be expressed as continuous convolution-type integrals, we first approximate the continuous convolution by a weighted sum of a series of shifted discrete convolutions, and then each shifted discrete convolution, which is essentially a Toeplitz system, is calculated efficiently and accurately by combining circulant embedding with the FFT algorithm. Synthetic and real model tests show that the Conv-Gauss-FFT algorithm can obtain high-precision forward results very efficiently for almost any practical model, and it works especially well for complex 3D models when gravity fields on large 3D regular grids are needed.
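The core trick, evaluating a Toeplitz (discrete convolution) system by circulant embedding plus the FFT, can be shown in a few lines of 1D NumPy; the 2D/3D gravity case applies the same idea dimension by dimension, with the Gaussian-grid weighting handled separately.

    import numpy as np

    def toeplitz_convolve(kernel, signal):
        """Linear convolution via circulant embedding plus the FFT."""
        n = len(kernel) + len(signal) - 1        # size of the circulant embedding
        K = np.fft.rfft(kernel, n)               # eigenvalues of the circulant
        S = np.fft.rfft(signal, n)
        return np.fft.irfft(K * S, n)

    k = np.array([1.0, 0.5, 0.25])
    s = np.random.rand(100)
    assert np.allclose(toeplitz_convolve(k, s), np.convolve(k, s))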
A convolutional neural network to filter artifacts in spectroscopic MRI.
Gurbani, Saumya S; Schreibmann, Eduard; Maudsley, Andrew A; Cordova, James Scott; Soher, Brian J; Poptani, Harish; Verma, Gaurav; Barker, Peter B; Shim, Hyunsuk; Cooper, Lee A D
2018-03-09
Proton MRSI is a noninvasive modality capable of generating volumetric maps of in vivo tissue metabolism without the need for ionizing radiation or injected contrast agent. Magnetic resonance spectroscopic imaging has been shown to be a viable imaging modality for studying several neuropathologies. However, a key hurdle in the routine clinical adoption of MRSI is the presence of spectral artifacts that can arise from a number of sources, possibly leading to false information. A deep learning model was developed that was capable of identifying and filtering out poor quality spectra. The core of the model used a tiled convolutional neural network that analyzed frequency-domain spectra to detect artifacts. When compared with a panel of MRS experts, our convolutional neural network achieved high sensitivity and specificity with an area under the curve of 0.95. A visualization scheme was implemented to better understand how the convolutional neural network made its judgement on single-voxel or multivoxel MRSI, and the convolutional neural network was embedded into a pipeline capable of producing whole-brain spectroscopic MRI volumes in real time. The fully automated method for assessment of spectral quality provides a valuable tool to support clinical MRSI or spectroscopic MRI studies for use in fields such as adaptive radiation therapy planning. © 2018 International Society for Magnetic Resonance in Medicine.
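In outline, the classifier is a small CNN over frequency-domain spectra; a minimal PyTorch stand-in follows, in which the layer counts and sizes are guesses and the paper's tiled variant is not reproduced.

    import torch
    import torch.nn as nn

    quality_net = nn.Sequential(  # input: (batch, 1, n_frequency_points)
        nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
        nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(32, 1), nn.Sigmoid(),  # probability the spectrum is usable
    )
    spectra = torch.randn(8, 1, 512)  # eight toy frequency-domain spectra
    keep_probability = quality_net(spectra)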
Baczewski, Andrew David; Vikram, Melapudi; Shanker, Balasubramaniam; ...
2010-08-27
Diffusion, lossy wave, and Klein–Gordon equations find numerous applications in practical problems across a range of diverse disciplines. The temporal dependence of all three Green’s functions is characterized by an infinite tail. This implies that the cost complexity of the spatio-temporal convolutions, associated with evaluating the potentials, scales as O(N_s^2 N_t^2), where N_s and N_t are the number of spatial and temporal degrees of freedom, respectively. In this paper, we discuss two new methods to rapidly evaluate these spatio-temporal convolutions by exploiting their block-Toeplitz nature within the framework of accelerated Cartesian expansions (ACE). The first scheme identifies a convolution relation in time amongst ACE harmonics, and the fast Fourier transform (FFT) is used for efficient evaluation of these convolutions. The second method exploits the rank deficiency of the ACE translation operators with respect to time and develops a recursive numerical compression scheme for the efficient representation and evaluation of temporal convolutions. It is shown that the cost of both methods scales as O(N_s N_t log^2 N_t). Furthermore, several numerical results are presented for the diffusion equation to validate the accuracy and efficacy of the fast algorithms developed here.
Ding, Xi-Qin; Thapa, Arjun; Ma, Hongwei; Xu, Jianhua; Elliott, Michael H.; Rodgers, Karla K.; Smith, Marci L.; Wang, Jin-Shan; Pittler, Steven J.; Kefalov, Vladimir J.
2016-01-01
Cone photoreceptor cyclic nucleotide-gated (CNG) channels play a pivotal role in cone phototransduction, which is a process essential for daylight vision, color vision, and visual acuity. Mutations in the cone channel subunits CNGA3 and CNGB3 are associated with human cone diseases, including achromatopsia, cone dystrophies, and early onset macular degeneration. Mutations in CNGB3 alone account for 50% of reported cases of achromatopsia. This work investigated the role of CNGB3 in cone light response and cone channel structural stability. As cones comprise only 2–3% of the total photoreceptor population in the wild-type mouse retina, we used Cngb3−/−/Nrl−/− mice with CNGB3 deficiency on a cone-dominant background in our study. We found that, in the absence of CNGB3, CNGA3 was able to travel to the outer segments, co-localize with cone opsin, and form tetrameric complexes. Electroretinogram analyses revealed reduced cone light response amplitude/sensitivity and slower response recovery in Cngb3−/−/Nrl−/− mice compared with Nrl−/− mice. Absence of CNGB3 expression altered the adaptation capacity of cones and severely compromised function in bright light. Biochemical analysis demonstrated that CNGA3 channels lacking CNGB3 were more resilient to proteolysis than CNGA3/CNGB3 channels, suggesting a hindered structural flexibility. Thus, CNGB3 regulates cone light response kinetics and the channel structural flexibility. This work advances our understanding of the biochemical and functional role of CNGB3 in cone photoreceptors. PMID:26893377
Faizan, Ahmad; Bhowmik-Stoker, Manoshi; Alipit, Vincent; Kirk, Amanda E; Krebs, Viktor E; Harwin, Steven F; Meneghini, R Michael
2017-06-01
Porous metaphyseal cones are widely used in revision knee arthroplasty. A new system of porous titanium metaphyseal cones has been designed based on the femoral and tibial morphology derived from a computed tomography-based anatomical database. The purpose of this study is to evaluate the initial mechanical stability of the new porous titanium revision cone system by measuring micromotion under physiologic loading, compared with a widely used existing porous tantalum metaphyseal cone system. The new cones were designed to precisely fit the femoral and tibial anatomy, and 3D printing technology was used to manufacture these porous titanium cones. The stability of the new titanium cones and of the widely used tantalum cones was compared under physiologic loading conditions in a benchtop test model. The stability of the new titanium cones was either equivalent to or better than that of the tantalum cones. The new titanium femoral cone construct had significantly less micromotion than the traditional femoral cone construct in 5 of the 12 directions measured (P < .05), whereas no statistical difference was found in 7 directions. The new porous titanium metaphyseal tibial cones demonstrated less micromotion in medial varus/valgus (P = .004) and posterior compressive micromotion (P = .002) compared with the traditional porous tantalum system. The findings of this biomechanical study demonstrate satisfactory mechanical stability of an anatomically based porous titanium metaphyseal cone system for femoral and tibial bone loss, as measured by micromotion under physiologic loading. The new cone design, in combination with instrumentation that facilitates surgical efficiency, is encouraging. Long-term clinical follow-up is warranted. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Németh, Karoly; Risso, Corina; Nullo, Francisco; Kereszturi, Gabor
2011-06-01
Payún Matru Volcanic Field is a Quaternary monogenetic volcanic field that hosts scoria cones with perfect to breached morphologies. Los Morados complex is a group of at least four closely spaced scoria cones (Los Morados main cone and the older Cones A, B, and C). Los Morados main cone was formed by a long-lived eruption lasting months to years. After an initial Hawaiian-style stage, the eruption changed to a normal Strombolian, cone-building style, forming a cone over 150 metres high on a northward-dipping (~4°) surface. An initial cone gradually grew until a lava flow breached the cone's base and rafted away an estimated 10% of the total volume. A sudden sector collapse initiated a dramatic decompression in the upper part of the feeding conduit and triggered a violent Strombolian-style eruptive stage. Subsequently, the eruption became more stable and changed to a regular Strombolian style that partially rebuilt the cone. A likely increase in magma flux, coupled with the gradual growth of a new cone, caused another lava flow outbreak at the structurally weakened earlier breach site. For a second time, the unstable flank of the cone was rafted, triggering a second violent Strombolian eruptive stage, which was followed by a Hawaiian-style lava fountain stage. The lava fountaining was accompanied by a steady outpouring of voluminous lava and constant rafting of the cone flank, preventing the healing of the cone. Santa Maria is another scoria cone, built on a nearly flat pre-eruption surface. Despite this, it went through similar stages to Los Morados main cone, though probably not in as dramatic a manner. In contrast to these examples of large breached cones, volumetrically smaller cones, associated with less extensive lava flows, were able to heal raft/collapse events, owing to their smaller magma output and flux rates. Our evidence shows that scoria cone growth is a complex process, the outcome of internal magma parameters (e.g. volatile content, magma flux, recharge, output volume) and external conditions such as the inclination of the pre-eruptive surface on which the cones grew and the resulting gravitational instability.
Enhanced line integral convolution with flow feature detection
DOT National Transportation Integrated Search
1995-01-01
Prepared ca. 1995. The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain [Cabral & Leedom '93]. The method produces a flow texture image...
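A bare-bones LIC in NumPy; streamlines are traced with fixed unit-speed Euler steps and the noise texture is averaged along them, whereas production implementations filter with a smooth kernel and integrate more carefully.

    import numpy as np

    def lic(vx, vy, length=10, texture=None):
        """Average a white-noise texture along streamlines of (vx, vy)."""
        h, w = vx.shape
        tex = np.random.rand(h, w) if texture is None else texture
        out = np.zeros_like(tex)
        ys, xs = np.mgrid[0:h, 0:w].astype(float)
        for sign in (+1.0, -1.0):                # trace both directions
            x, y = xs.copy(), ys.copy()
            for _ in range(length):
                i, j = y.astype(int) % h, x.astype(int) % w
                out += tex[i, j]
                norm = np.hypot(vx[i, j], vy[i, j]) + 1e-9
                x += sign * vx[i, j] / norm      # unit-speed Euler step
                y += sign * vy[i, j] / norm
        return out / (2 * length)

    yy, xx = np.mgrid[0:128, 0:128]
    image = lic(-(yy - 64.0), xx - 64.0)         # circulating flow about the center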
Commissioning results of an automated treatment planning verification system
Mason, Bryan E.; Robinson, Ronald C.; Kisling, Kelly D.; Kirsner, Steven M.
2014-01-01
A dose calculation verification system (VS) was acquired and commissioned as a second check on the treatment planning system (TPS). This system reads DICOM CT datasets, RT plans, RT structures, and RT dose from the TPS and automatically, using its own collapsed cone superposition/convolution algorithm, computes dose on the same CT dataset. The system was commissioned by extracting basic beam parameters for simple field geometries and dose verification for complex treatments. Percent depth doses (PDD) and profiles were extracted for field sizes using jaw settings 3 × 3 cm2 ‐ 40 × 40 cm2 and compared to measured data, as well as our TPS model. Smaller fields of 1 × 1 cm2 and 2 × 2 cm2 generated using the multileaf collimator (MLC) were analyzed in the same fashion as the open fields. In addition, 40 patient plans consisting of both IMRT and VMAT were computed and the following comparisons were made: 1) TPS to the VS, 2) VS to measured data, and 3) TPS to measured data where measured data is both ion chamber (IC) and film measurements. Our results indicated for all field sizes using jaw settings PDD errors for the VS on average were less than 0.87%, 1.38%, and 1.07% for 6x, 15x, and 18x, respectively, relative to measured data. PDD errors for MLC field sizes were less than 2.28%, 1.02%, and 2.23% for 6x, 15x, and 18x, respectively. The infield profile analysis yielded results less than 0.58% for 6x, 0.61% for 15x, and 0.77% for 18x for the VS relative to measured data. Analysis of the penumbra region yields results ranging from 66.5% points, meeting the DTA criteria to 100% of the points for smaller field sizes for all energies. Analysis of profile data for field sizes generated using the MLC saw agreement with infield DTA analysis ranging from 68.8%–100% points passing the 1.5%/1.5 mm criteria. Results from the dose verification for IMRT and VMAT beams indicated that, on average, the ratio of TPS to IC and VS to IC measurements was 100.5 ± 1.9% and 100.4 ± 1.3%, respectively, while our TPS to VS was 100.1 ± 1.0%. When comparing the TPS and VS to film measurements, the average percentage pixels passing a 3%/3 mm criteria based gamma analysis were 96.6 ± 4.2% and 97 ± 5.6%, respectively. When the VS was compared to the TPS, on average 98.1 ± 5.3% of pixels passed the gamma analysis. Based upon these preliminary results, the VS system should be able to calculate dose adequately as a verification tool of our TPS. PACS number: 87.55.km PMID:25207567
The principle of superposition in human prehension
Zatsiorsky, Vladimir M.; Latash, Mark L.; Gao, Fan; Shim, Jae Kun
2010-01-01
SUMMARY The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: “Grasp the object stronger/weaker to prevent slipping” and “Maintain the rotational equilibrium of the object”. The effects of the two commands are summed up. PMID:20186284
Are Cloned Quantum States Macroscopic?
NASA Astrophysics Data System (ADS)
Fröwis, F.; Dür, W.
2012-10-01
We study quantum states produced by optimal phase covariant quantum cloners. We argue that cloned quantum superpositions are not macroscopic superpositions in the spirit of Schrödinger’s cat, despite their large particle number. This is indicated by calculating several measures for macroscopic superpositions from the literature, as well as by investigating the distinguishability of the two superposed cloned states. The latter rapidly diminishes when considering imperfect detectors or noisy states and does not increase with the system size. In contrast, we find that cloned quantum states themselves are macroscopic, in the sense of both proposed measures and their usefulness in quantum metrology with an optimal scaling in system size. We investigate the applicability of cloned states for parameter estimation in the presence of different kinds of noise.
Superposition and detection of two helical beams for optical orbital angular momentum communication
NASA Astrophysics Data System (ADS)
Liu, Yi-Dong; Gao, Chunqing; Gao, Mingwei; Qi, Xiaoqing; Weber, Horst
2008-07-01
A loop-like system with a Dove prism is used in this manuscript to generate a collinear superposition of two helical beams with different azimuthal quantum numbers. After helical beams distributed on a circle centered on the optical axis are generated using a binary amplitude grating, the diffracted field is separated into two polarized fields with the same distribution. Rotated in opposite directions by the Dove prism in the loop-like system and then recombined, the two fields generate a collinear superposition of two helical beams in a given direction. The experimental results are consistent with the theoretical analysis. This method has potential applications in optical communication using the orbital angular momentum of laser beams (optical vortices).
Optical information encryption based on incoherent superposition with the help of the QR code
NASA Astrophysics Data System (ADS)
Qin, Yi; Gong, Qiong
2014-01-01
In this paper, a novel optical information encryption approach based on the QR code is proposed. The method rests on the concept of incoherent superposition, which we introduce here for the first time. The information to be encrypted is first transformed into the corresponding QR code, and the QR code is then encrypted analytically into two phase-only masks by use of the intensity superposition of two diffraction wave fields. The proposed method has several advantages over previous interference-based methods, such as a higher security level, better robustness against noise attack, and more relaxed working conditions. Numerical simulation results and results captured with an actual smartphone validate our proposal.
NASA Technical Reports Server (NTRS)
Barnett, Alan R.; Ibrahim, Omar M.; Abdallah, Ayman A.; Sullivan, Timothy L.
1993-01-01
By utilizing MSC/NASTRAN DMAP (Direct Matrix Abstraction Program) in an existing NASA Lewis Research Center coupled loads methodology, solving modal equations of motion with initial conditions is possible using either coupled (Newmark-Beta) or uncoupled (exact mode superposition) integration available within module TRD1. Both the coupled and newly developed exact mode superposition methods have been used to perform transient analyses of various space systems. However, experience has shown that in most cases, significant time savings are realized when the equations of motion are integrated using the uncoupled solver instead of the coupled solver. Through the results of a real-world engineering analysis, advantages of using the exact mode superposition methodology are illustrated.
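The contrast drawn above between coupled (Newmark-Beta) and uncoupled (exact mode superposition) integration can be made concrete with a small sketch. The following is a generic illustration, not the MSC/NASTRAN TRD1 module: an undamped 2-DOF system is decoupled into modal coordinates and each single-degree-of-freedom equation is integrated exactly in closed form for a step load; all matrix and load values are hypothetical.

```python
# Minimal sketch (not NASTRAN TRD1): transient response of an undamped 2-DOF
# system by exact mode superposition. Matrices and forcing are illustrative.
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 1.0])                      # mass matrix
K = np.array([[600.0, -200.0],
              [-200.0, 200.0]])              # stiffness matrix
w2, Phi = eigh(K, M)                         # generalized eigenproblem K v = w^2 M v
wn = np.sqrt(w2)                             # natural frequencies (rad/s)

f = np.array([1.0, 0.0])                     # step load applied at t = 0
p = Phi.T @ f                                # modal forces (modes are mass-normalized)

# Exact closed-form modal response to a step load, zero initial conditions:
# q_i(t) = p_i / w_i^2 * (1 - cos(w_i t)); superpose to recover x(t) = Phi q(t).
t = np.linspace(0.0, 2.0, 500)
q = (p / w2)[:, None] * (1.0 - np.cos(wn[:, None] * t))
x = Phi @ q                                  # physical displacements, shape (2, len(t))
print(x[:, -1])
```

Because each modal equation is solved in closed form rather than stepped through time, the uncoupled approach avoids the per-step cost of an implicit scheme, which is consistent with the time savings reported above.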
Near-field interferometry of a free-falling nanoparticle from a point-like source
NASA Astrophysics Data System (ADS)
Bateman, James; Nimmrichter, Stefan; Hornberger, Klaus; Ulbricht, Hendrik
2014-09-01
Matter-wave interferometry performed with massive objects elucidates their wave nature and thus tests the quantum superposition principle at large scales. Whereas standard quantum theory places no limit on particle size, alternative, yet untested theories—conceived to explain the apparent quantum to classical transition—forbid macroscopic superpositions. Here we propose an interferometer with a levitated, optically cooled and then free-falling silicon nanoparticle in the mass range of one million atomic mass units, delocalized over >150 nm. The scheme employs the near-field Talbot effect with a single standing-wave laser pulse as a phase grating. Our analysis, which accounts for all relevant sources of decoherence, indicates that this is a viable route towards macroscopic high-mass superpositions using available technology.
Slowing Quantum Decoherence by Squeezing in Phase Space
NASA Astrophysics Data System (ADS)
Le Jeannic, H.; Cavaillès, A.; Huang, K.; Filip, R.; Laurat, J.
2018-02-01
Non-Gaussian states, and specifically the paradigmatic cat state, are well known to be very sensitive to losses. When propagating through damping channels, these states quickly lose their nonclassical features and the associated negative oscillations of their Wigner function. However, by squeezing the superposition states, the decoherence process can be qualitatively changed and substantially slowed down. Here, as a first example, we experimentally observe the reduced decoherence of squeezed optical coherent-state superpositions through a lossy channel. To quantify the robustness of states, we introduce a combination of a decaying value and a rate of decay of the Wigner function negativity. This work, which uses squeezing as an ancillary Gaussian resource, opens new possibilities to protect and manipulate quantum superpositions in phase space.
Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; LeCun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2012-01-01
Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers, 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process frame-by-frame video information in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out. Thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, hardware should be modular, reconfigurable, and expansible. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGA have already been used to demonstrate the performance of these systems. In this paper we present a comparative study of these two neuro-inspired solutions. A brief description of both systems is presented, along with discussions of their differences, pros, and cons. PMID:22518097
Linear diffusion model dating of cinder cones in Central Anatolia, Turkey
NASA Astrophysics Data System (ADS)
O'Sadnick, L. G.; Reid, M. R.; Cline, M. L.; Cosca, M. A.; Kuscu, G.
2013-12-01
The progressive decrease in slope angle, cone height and cone height/width ratio over time provides the basis for geomorphic dating of cinder cones using linear diffusion models. Previous research using diffusion models to date cinder cones has focused on the cone height/width ratio as the basis for dating cones of unknown age [1,2]. Here we apply linear diffusion models to dating cinder cones. A suite of 16 cinder cones from the Hasandağ volcano area of the Neogene-Quaternary Central Anatolian Volcanic Zone, for which samples are available, were selected for morphologic dating analysis. New 40Ar/39Ar dates for five of these cones range from 62 ± 4 to 517 ± 9 ka. Linear diffusion models were used to model the erosional degradation of each cone. Diffusion coefficients (κ) for the 5 cinder cones with known ages were constrained by comparing various modeled slope profiles to the current slope profile. The resulting κ is 7.5 ± 0.5 m2 kyr-1. Using this κ value, eruption ages were modeled for the remaining 11 cinder cones and range from 53 ± 3 to 455 ± 30 ka. These ages are within the range of ages previously reported for cinder cones in the Hasandağ region. The linear diffusion model-derived ages are being compared to additional new 40Ar/39Ar dates in order to further assess the applicability of morphological dating to constrain the ages of cinder cones. The relatively well-constrained κ value we obtained by applying the linear diffusion model to cinder cones that range in age by nearly 500 ka suggests that this model can be used to date cinder cones. This κ value is higher than the well-established value of κ = 3.9 for a cinder cone in a similar climate [3]. Therefore our work confirms the importance of determining appropriate κ values from nearby cones with known ages. References 1. C.A. Wood, J. Volcanol. Geotherm. Res. 8, 137 (1980) 2. D.M. Wood, M.F. Sheridan, J. Volcanol. Geotherm. Res. 83, 241 (1998) 3. J.D. Pelletier, M.L. Cline, Geology 35, 1067 (2007)
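The linear diffusion model named above, dh/dt = κ ∇²h, is easy to illustrate numerically. The sketch below evolves a hypothetical fresh-cone profile with an explicit finite-difference scheme and tracks the height/width ratio used for dating; the cone geometry is invented for illustration, and only the order of magnitude of κ is taken from the study.

```python
# Minimal sketch of linear-diffusion degradation of a cinder-cone profile,
# dh/dt = kappa * d2h/dx2, solved with an explicit finite-difference scheme.
# Cone geometry is illustrative; kappa is only order-of-magnitude realistic.
import numpy as np

kappa = 7.5e-3          # m^2/yr (i.e., 7.5 m^2/kyr)
dx, dt = 2.0, 10.0      # grid spacing (m), time step (yr)
assert kappa * dt / dx**2 < 0.5              # explicit-scheme stability limit

x = np.arange(-400.0, 400.0, dx)
h = np.maximum(0.0, 150.0 - np.abs(x) * np.tan(np.radians(30.0)))  # fresh cone

def evolve(h, years):
    for _ in range(int(years / dt)):
        h = h + kappa * dt / dx**2 * (np.roll(h, 1) - 2 * h + np.roll(h, -1))
    return h

for age_ka in (50, 200, 500):
    profile = evolve(h.copy(), age_ka * 1e3)
    print(age_ka, "ka: height/width ratio =",
          profile.max() / x[profile > 1.0].ptp())
```

Running the loop shows the monotonic decline of the height/width ratio with age, which is exactly the observable that the calibrated κ converts into an eruption age.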
Whiskers, cones and pyramids created in sputtering by ion bombardment
NASA Technical Reports Server (NTRS)
Wehner, G. K.
1979-01-01
A thorough study of the role which foreign atoms play in cone formation during sputtering of metals revealed many experimental facts. Two types of cone formation were distinguished, deposit cones and seed cones. Twenty-six combinations of metals for seed cone formation were tested. The sputtering yield variations with composition for combinations which form seed cones were measured. It was demonstrated that whisker growth becomes a common occurrence when low melting point material is sputter deposited on a hot nonsputtered high melting point electrode.
Litts, Katie M; Messinger, Jeffrey D; Freund, K Bailey; Zhang, Yuhua; Curcio, Christine A
2015-04-01
To quantify impressions of mitochondrial translocation in degenerating cones and to determine the nature of accumulated material in the subretinal space with apparent inner segment (IS)-like features by examining cone IS ultrastructure. Human donor eyes with advanced age-related macular degeneration (AMD) were screened for outer retinal tubulation (ORT) in macula-wide, high-resolution digital sections. Degenerating cones inside ORT (ORT cones) and outside ORT (non-ORT cones) from AMD eyes and unaffected cones in age-matched control eyes were imaged using transmission electron microscopy. The distances of mitochondria to the external limiting membrane (ELM), cone IS length, and cone IS width at the ELM were measured. Outer retinal tubulation and non-ORT cones lose outer segments (OS), followed by shortening of IS and mitochondria. In non-ORT cones, IS broaden. Outer retinal tubulation and non-ORT cone IS myoids become undetectable due to mitochondria redistribution toward the nucleus. Some ORT cones were found lacking IS and containing mitochondria in the outer fiber (between soma and ELM). Unlike long, thin IS mitochondria in control cones, ORT and non-ORT IS mitochondria are ovoid or reniform. Shed IS, some containing mitochondria, were found in the subretinal space. In AMD, macula cones exhibit loss of detectable myoid due to IS shortening in addition to OS loss, as described. Mitochondria shrink and translocate toward the nucleus. As reflectivity sources, translocating mitochondria may be detectable using in vivo imaging to monitor photoreceptor degeneration in retinal disorders. These results improve the knowledge basis for interpreting high-resolution clinical retinal imaging.
Determination of HCME 3-D parameters using a full ice-cream cone model
NASA Astrophysics Data System (ADS)
Na, Hyeonock; Moon, Yong-Jae; Lee, Harim
2016-05-01
It is essential to determine the three-dimensional parameters (e.g., radial speed, angular width, source location) of Coronal Mass Ejections (CMEs) for space weather forecasting. Several cone models (e.g., an elliptical cone model, an ice-cream cone model, an asymmetric cone model) have been examined to estimate these parameters. In this study, we investigate which cone type is closest to a halo CME morphology using 26 CMEs observed as halo CMEs by one spacecraft (SOHO or STEREO-A or B) and as limb CMEs by the other ones. From the cone shape parameters of these CMEs, such as their front curvature, we find that near-full ice-cream cone CMEs match the observations much more closely than shallow ice-cream cone CMEs. Thus we develop a new cone model in which a full ice-cream cone consists of many flat cones with different heights and angular widths. The model proceeds by the following steps: (1) construct a cone for a given height and angular width, (2) project the cone onto the sky plane, (3) select points comprising the outer boundary, and (4) minimize the difference between the estimated projection speeds and the observed ones. By applying this model to 12 SOHO/LASCO halo CMEs, we find that the 3-D parameters from our method are similar to those from other stereoscopic methods (a geometrical triangulation method and a Graduated Cylindrical Shell model) based on multi-spacecraft data. We are developing a general ice-cream cone model whose front shape is a free parameter determined by observations.
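Steps (1)-(3) of the procedure above are purely geometric and can be sketched directly. The code below is a loose illustration under assumed conventions (axis initially along the observer's line of sight, arbitrary source longitude/latitude, hemispherical "ice-cream" cap); it is not the authors' implementation, and the speed-fitting step (4) is omitted.

```python
# Hedged geometric sketch of a "full ice-cream cone": stack circles along the
# cone axis, cap with a hemisphere, orient toward a hypothetical source
# location, project onto the sky plane, and keep the outer boundary.
import numpy as np
from scipy.spatial import ConvexHull

def cone_outline(height=10.0, half_width_deg=40.0, lon_deg=30.0, lat_deg=20.0):
    alpha = np.radians(half_width_deg)
    phi = np.linspace(0.0, 2.0 * np.pi, 120)
    pts = []
    # conical flank: circles of radius r = s * tan(alpha) at distance s
    for s in np.linspace(0.1, height, 40):
        r = s * np.tan(alpha)
        pts.append(np.c_[np.full_like(phi, s), r * np.cos(phi), r * np.sin(phi)])
    # ice-cream cap: hemisphere of radius height * tan(alpha) on the cone front
    R = height * np.tan(alpha)
    for th in np.linspace(0.0, np.pi / 2, 20):
        pts.append(np.c_[np.full_like(phi, height + R * np.sin(th)),
                         R * np.cos(th) * np.cos(phi), R * np.cos(th) * np.sin(phi)])
    xyz = np.vstack(pts)
    # rotate the axis (initially the x axis, pointing at the observer)
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    Ry = np.array([[np.cos(lat), 0, -np.sin(lat)], [0, 1, 0], [np.sin(lat), 0, np.cos(lat)]])
    Rz = np.array([[np.cos(lon), -np.sin(lon), 0], [np.sin(lon), np.cos(lon), 0], [0, 0, 1]])
    xyz = xyz @ (Rz @ Ry).T
    # project onto the sky plane (y, z) and keep only the outer boundary
    sky = xyz[:, 1:]
    return sky[ConvexHull(sky).vertices]

print(cone_outline().shape)   # boundary points of the projected halo outline
```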
The decoding of majority-multiplexed signals by means of dyadic convolution
NASA Astrophysics Data System (ADS)
Losev, V. V.
1980-09-01
The maximum-likelihood method often cannot be used for the decoding of majority-multiplexed signals because of the large number of computations required. This paper describes a fast dyadic convolution transform that can be used to reduce the number of computations.
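The speed-up behind a fast dyadic convolution comes from the fact that the Walsh-Hadamard transform diagonalizes convolution over the XOR group, just as the FFT diagonalizes ordinary cyclic convolution. The sketch below (a generic illustration, not the paper's decoder) computes c[k] = Σᵢ a[i]·b[k⊕i] in O(N log N) and checks it against brute force.

```python
# Fast dyadic (XOR) convolution via the Walsh-Hadamard transform.
import numpy as np

def fwht(x):
    """Iterative fast Walsh-Hadamard transform (length must be a power of 2)."""
    x = x.astype(float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

def dyadic_convolve(a, b):
    n = len(a)
    # WHT turns dyadic convolution into pointwise products; inverse WHT = WHT / N
    return fwht(fwht(a) * fwht(b)) / n

a = np.array([1.0, 2.0, 0.0, 1.0])
b = np.array([0.5, 1.0, 1.0, 0.0])
brute = np.array([sum(a[i] * b[k ^ i] for i in range(4)) for k in range(4)])
assert np.allclose(dyadic_convolve(a, b), brute)
```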
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.
2014-01-01
This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.
The uniqueness of the solution of cone-like inversion models for halo CMEs
NASA Astrophysics Data System (ADS)
Zhao, X. P.
2006-12-01
Most elliptic halo CMEs are believed to be formed by Thomson scattering of photospheric light by the 3-D cone-like shell of CME plasma. To obtain the true propagation direction and angular width of halo CMEs, cone-like inversion models such as the circular cone, the elliptic cone, and the ice-cream cone models have recently been suggested. Because the number of given parameters characterizing the 2-D elliptic halo CMEs observed by a single spacecraft is smaller than the number of unknown parameters characterizing the 3-D elliptic cone model, the solution of the elliptic cone model is not unique. Since it is difficult to determine whether an observed halo CME is formed by a circular or an elliptic cone shell, the solution of the circular cone model may often not be unique either. To address the uniqueness problem of these 3-D cone-like inversion models, this work develops an algorithm that uses data from multiple spacecraft, such as STEREO A and B and the Solar Sentinels.
[Application of numerical convolution in in vivo/in vitro correlation research].
Yue, Peng
2009-01-01
This paper introduces the concept and principle of in vivo/in vitro correlation (IVIVC) and of convolution/deconvolution methods, and elucidates in detail a convolution strategy for calculating the in vivo absorption performance of a pharmaceutical from its pharmacokinetic data in Excel, carrying the results forward to IVIVC research. First, the pharmacokinetic data were fitted with mathematical software to fill in missing data points. Second, the parameters of the optimal fitted input function were determined by trial and error according to the convolution principle in Excel, under the hypothesis that all input functions follow Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. In the examples, the application of the method is shown in detail, and its simplicity and effectiveness are proved by comparison with the compartment-model method and the deconvolution method. It proves to be a powerful tool for IVIVC research.
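The convolution step described above (Weibull input convolved with a disposition function) is straightforward to reproduce outside Excel. The sketch below uses a one-compartment unit impulse response; all parameter values (td, beta, ke, V) are invented for illustration.

```python
# Minimal sketch of the convolution step in IVIVC: predict a plasma profile by
# convolving a Weibull-shaped in vivo input rate with a one-compartment unit
# impulse response. All parameter values are illustrative.
import numpy as np

t = np.arange(0.0, 24.0, 0.1)                # time grid (h)
dt = t[1] - t[0]

# cumulative fraction absorbed, Weibull: F(t) = 1 - exp(-(t/td)^beta)
td, beta = 3.0, 1.4
F = 1.0 - np.exp(-(t / td) ** beta)
rate = np.gradient(F, dt)                    # input rate dF/dt (1/h)

# unit impulse response of a one-compartment model: c_delta(t) = exp(-ke*t)/V
ke, V = 0.2, 30.0                            # elimination rate (1/h), volume (L)
c_delta = np.exp(-ke * t) / V

C = np.convolve(rate, c_delta)[: len(t)] * dt   # predicted concentration profile
print("Cmax = %.4f at t = %.1f h" % (C.max(), t[C.argmax()]))
```

Varying td and beta until the predicted profile matches the observed one is the trial-and-error parameter search the abstract describes.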
DeepFix: A Fully Convolutional Neural Network for Predicting Human Eye Fixations.
Kruthiventi, Srinivas S S; Ayush, Kumar; Babu, R Venkatesh
2017-09-01
Understanding and predicting the human visual attention mechanism is an active area of research in the fields of neuroscience and computer vision. In this paper, we propose DeepFix, a fully convolutional neural network, which models the bottom-up mechanism of visual attention via saliency prediction. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account, by using network layers with very large receptive fields. Generally, fully convolutional nets are spatially invariant, which prevents them from modeling location-dependent patterns (e.g., centre bias). Our network handles this by incorporating a novel location-biased convolutional layer. We evaluate our model on multiple challenging saliency data sets and show that it achieves state-of-the-art results.
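One plausible way to break spatial invariance in the manner the abstract describes is to concatenate fixed location maps to a layer's input channels, so the following convolution can learn location-dependent responses such as centre bias. The sketch below is an assumption-laden reading, not the authors' released code; the layer name, map count, and Gaussian scales are all invented.

```python
# Hypothetical "location-biased" convolution: fixed Gaussian centre-bias maps
# are concatenated to the input so the conv can learn position-dependent
# patterns. A sketch under assumed design choices, not the paper's code.
import torch
import torch.nn as nn

class LocationBiasedConv(nn.Module):
    def __init__(self, in_ch, out_ch, size=64, n_maps=4):
        super().__init__()
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                                torch.linspace(-1, 1, size), indexing="ij")
        r2 = xs ** 2 + ys ** 2
        # fixed Gaussian centre-bias maps at several (assumed) spatial scales
        maps = torch.stack([torch.exp(-r2 / (2 * s ** 2))
                            for s in (0.25, 0.5, 1.0, 2.0)[:n_maps]])
        self.register_buffer("maps", maps.unsqueeze(0))   # (1, n_maps, H, W)
        self.conv = nn.Conv2d(in_ch + n_maps, out_ch, 3, padding=1)

    def forward(self, x):
        bias = self.maps.expand(x.shape[0], -1, -1, -1)
        return self.conv(torch.cat([x, bias], dim=1))

y = LocationBiasedConv(16, 32)(torch.randn(2, 16, 64, 64))
print(y.shape)    # torch.Size([2, 32, 64, 64])
```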
Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network.
Yoon, Jaehong; Lee, Jungnyun; Whang, Mincheol
2018-01-01
The features of event-related potentials (ERPs) are not completely understood, and the BCI illiteracy problem remains unsolved. To date, the P300 peak has been used as the ERP feature in most brain-computer interface applications, but subjects who do not show such a peak are common. Recent developments in convolutional neural networks provide a way to analyze the spatial and temporal features of ERPs. Here, we train a convolutional neural network with 2 convolutional layers whose feature maps represent the spatial and temporal features of the event-related potential. We found that nonilliterate subjects' ERPs show high correlation between the occipital and parietal lobes, whereas illiterate subjects show correlation only between neural activities from the frontal and central lobes. The nonilliterates showed peaks at P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We conclude that the P700 peak may be a key feature of the ERP, as it appears in both illiterate and nonilliterate subjects.
NASA Astrophysics Data System (ADS)
Liu, Miaofeng
2017-07-01
In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most earlier methods, which require the locations of corrupted pixels to be known beforehand, we propose a 20-layer fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, achieving non-local blind inpainting and super-resolution. Because images may suffer large corruptions, or inpainting may be required on a low-resolution image, conditions under which existing approaches perform poorly, we also share parameters within local areas of layers to achieve spatial recursion and enlarge the receptive field. To ease the training of this deep network, skip connections are designed between symmetric convolutional layers. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and it works well when performing super-resolution and image inpainting simultaneously.
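The "skip connections between symmetric convolutional layers" idea is the key trainability trick here, and a toy version makes it concrete. The following is an illustrative miniature, not the paper's 20-layer network; depth, channel counts, and the additive-skip choice are assumptions.

```python
# Minimal encoder-decoder sketch with skip connections between symmetric
# convolutional layers (illustrative; not the paper's architecture).
import torch
import torch.nn as nn

class SymmetricSkipNet(nn.Module):
    def __init__(self, ch=64, depth=5):
        super().__init__()
        self.enc = nn.ModuleList(
            [nn.Conv2d(3 if i == 0 else ch, ch, 3, padding=1) for i in range(depth)])
        self.dec = nn.ModuleList(
            [nn.Conv2d(ch, 3 if i == depth - 1 else ch, 3, padding=1) for i in range(depth)])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = []
        for conv in self.enc:
            x = self.act(conv(x))
            feats.append(x)
        for i, conv in enumerate(self.dec):
            x = conv(x + feats[-(i + 1)])    # skip from the symmetric encoder layer
            if i < len(self.dec) - 1:
                x = self.act(x)
        return x

print(SymmetricSkipNet()(torch.randn(1, 3, 32, 32)).shape)
```

The additive skips give gradients a short path to early layers, which is why such designs train stably at depths where plain stacks of convolutions struggle.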
Convolutional encoding of self-dual codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1994-01-01
There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w ≡ 0 (mod 4). The codes are of length 8m with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two (4m-1) length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48, 24;12) Code is lowered here to K = 8.
NASA Astrophysics Data System (ADS)
Mukherjee, Nandini; Dong, Wenrui; Perreault, William; Zare, Richard
2017-04-01
We prepare a large ensemble of rovibrationally excited (v = 1, J = 2) H2 molecules in a coherent superposition of M-states using Stark-induced adiabatic Raman passage (SARP) with linearly polarized single mode pump (532 nm) and Stokes (699 nm) laser pulses of duration 6 ns and 4 ns. A biaxial superposition state, |ψ〉 = (1/√2)[|v = 1, J = 2, M = -2〉 - |v = 1, J = 2, M = +2〉], is prepared using SARP with a sequence of a pump laser pulse partially overlapping with a cross-polarized Stokes laser pulse co-propagating along the quantization z-axis. The degree of phase coherence is measured by recording interference fringes in the ion signal produced using the O(2) line of 2+1 resonance-enhanced multiphoton ionization (REMPI) from the rovibrationally excited (v = 1, J = 2) level as a function of REMPI laser polarization angle. The ion signal is measured using a time-of-flight mass spectrometer. Nearly 60% population transfer from the H2 (v = 0, J = 0) ground state to the superposition state in H2 (v = 1, J = 2) is measured from the depletion of the Q(0) REMPI signal of the (v = 0, J = 0) ground state. The M-state superposition behaves much like a multi-slit interferometer where the number of slits, i.e. the number of M-states, and their separations, i.e. the relative phase, can be varied experimentally. This work has been supported by the U.S. Army Research Office.
Han, Xinhai; Wang, Guanzhong; Jie, Jiansheng; Choy, Wallace C H; Luo, Yi; Yuk, T I; Hou, J G
2005-02-24
Novel ZnO cone arrays with controllable morphologies have been synthesized on silicon (100) substrates by thermal evaporation of metal Zn powder at a low temperature of 570 degrees C without a metal catalyst. Clear structure evolutions were observed using scanning electron microscopy: well-aligned ZnO nanocones, double-cones with growing head cones attached by stem cones, and cones with straight hexagonal pillar were obtained as the distance between the source and the substrates was increased. X-ray diffraction shows that all cone arrays grow along the c-axis. Raman and photoluminescence spectra reveal that the optical properties of the buffer layer between the ZnO cone arrays and the silicon substrates are better than those of the ZnO cone arrays due to high concentration of Zn in the heads of the ZnO cone arrays and higher growth temperature of the buffer layer. The growth of ZnO arrays reveals that the cone arrays are synthesized through a self-catalyzed vapor-liquid-solid (VLS) process.
Variability in bleach kinetics and amount of photopigment between individual foveal cones.
Bedggood, Phillip; Metha, Andrew
2012-06-20
To study the bleaching dynamics of individual foveal cone photoreceptors using an adaptive optics ophthalmoscope. After dark adaptation, cones were progressively bleached and imaged by a series of flashes of 545-nm to 570-nm light at 12 Hz. Intensity measurements were made within the foveal avascular zone (FAZ) to avoid confounding signals from the inner retinal blood supply. Over 1300 cones in this region were identified and tracked through the imaging sequences. A single subject was used who demonstrated the necessary steady fixation, wide FAZ, and resolvability of cones close to the foveal center. The mean intensity of all cones was well-described by first-order kinetics. Individual cones showed marked differences from the mean, both in rate of bleach and amount of photopigment; there was an inverse correlation between these two parameters. A subset of the cones showed large oscillations in intensity consistent with interference from light scattered within the cone outer segment. These cones also bleached more quickly, implying that rapid bleaching induces greater amounts of scatter. Neighboring cones in the fovea display high variability in their optical properties.
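The "first-order kinetics" that describe the mean cone intensity can be written as dp/dt = -p/τ for the unbleached pigment fraction p under steady light, so reflectance rises as 1 - exp(-t/τ). A per-cone fit would proceed roughly as in the sketch below; the model form, parameter names, and synthetic data are assumptions for illustration, not the study's pipeline.

```python
# Hedged sketch: fit a first-order bleaching model to a per-cone intensity
# trace. Synthetic data with made-up parameters stands in for real traces.
import numpy as np
from scipy.optimize import curve_fit

def intensity(t, I0, I1, tau):
    # reflected intensity rises as pigment bleaches: I0 + I1 * (1 - exp(-t/tau))
    return I0 + I1 * (1.0 - np.exp(-t / tau))

t = np.arange(0, 5.0, 1.0 / 12.0)            # 12 Hz flash series, seconds
rng = np.random.default_rng(0)
trace = intensity(t, 0.2, 0.6, 1.5) + 0.02 * rng.standard_normal(t.size)

(I0, I1, tau), _ = curve_fit(intensity, t, trace, p0=(0.1, 0.5, 1.0))
print("bleach time constant tau = %.2f s, pigment signal I1 = %.2f" % (tau, I1))
```

In this picture, the per-cone variability reported above corresponds to cone-to-cone spread in the fitted τ (bleach rate) and I1 (amount of photopigment), with the observed inverse correlation between the two.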
Spectral interpolation - Zero fill or convolution. [image processing
NASA Technical Reports Server (NTRS)
Forman, M. L.
1977-01-01
Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
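For readers unfamiliar with the baseline method being compared against, zero fill works by padding a signal's spectrum with zeros before the inverse transform, which interpolates the original samples onto a finer grid. A minimal sketch (array sizes are illustrative):

```python
# Zero-fill (FFT zero-padding) interpolation: band-limited interpolation of a
# real signal onto a grid `factor` times finer.
import numpy as np

def zero_fill_interpolate(x, factor):
    X = np.fft.rfft(x)
    X_padded = np.zeros(factor * (len(X) - 1) + 1, dtype=complex)
    X_padded[: len(X)] = X
    # rescale so amplitudes are preserved by the longer inverse transform
    return np.fft.irfft(X_padded, n=factor * len(x)) * factor

x = np.cos(2 * np.pi * 3 * np.arange(16) / 16)
y = zero_fill_interpolate(x, 4)
# the original samples reappear at every 4th point of the dense grid
assert np.allclose(y[::4], x)
```

The paper's point is that repeated convolution with a short interpolation kernel reaches plotting-grade accuracy with fewer operations and less memory than this transform-pad-inverse round trip.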
NASA Technical Reports Server (NTRS)
Mccallister, R. D.; Crawford, J. J.
1981-01-01
It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation, it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs for a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.
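Maximum-likelihood convolutional decoding of the kind identified above is classically realized with the Viterbi algorithm. The toy below is a generic stand-in, not the satellite MCD: a hard-decision decoder for a small rate-1/2, constraint-length-3 code with generators (7, 5) octal, shown correcting a single channel bit error.

```python
# Toy hard-decision Viterbi decoder for a rate-1/2, K=3 convolutional code.
import numpy as np

G = (0b111, 0b101)                            # generator polynomials (7, 5 octal)
K = 3                                         # constraint length
N_STATES = 1 << (K - 1)

def branch_output(state, bit):
    reg = (bit << (K - 1)) | state            # shift register, newest bit on top
    return [bin(reg & g).count("1") & 1 for g in G], reg >> 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        symbols, state = branch_output(state, b)
        out += symbols
    return out

def viterbi(received):
    metric = np.full(N_STATES, np.inf); metric[0] = 0.0
    paths = [[] for _ in range(N_STATES)]
    for i in range(0, len(received), 2):      # one trellis stage per symbol pair
        new_metric = np.full(N_STATES, np.inf)
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            if not np.isfinite(metric[s]):
                continue
            for b in (0, 1):
                expect, ns = branch_output(s, b)
                m = metric[s] + sum(r != e for r, e in zip(received[i:i + 2], expect))
                if m < new_metric[ns]:        # keep the survivor into state ns
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[0]                           # trellis is terminated in state 0

msg = [1, 0, 1, 1, 0, 0, 1]
rx = encode(msg + [0, 0])                     # K-1 zero bits terminate the trellis
rx[3] ^= 1                                    # inject a single channel bit error
assert viterbi(rx)[: len(msg)] == msg
```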
Langenbucher, Frieder
2003-11-01
Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools for describing the relationship between input and weighting/response in a linear system, where the input represents the drug release in vitro and the weighting/response any body response in vivo. While functional treatment, e.g. in terms of polyexponential or Weibull distributions, is more appropriate for general surveys or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm in its own right, but rather the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
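The statement that deconvolution is "the inversion of a corresponding convolution" can be shown concretely: on a uniform grid, the sampled response r = c (*) w satisfies a lower-triangular Toeplitz system, which forward substitution inverts exactly. A minimal sketch with illustrative functions:

```python
# Numerical deconvolution as the inversion of a discrete convolution, solved
# by forward substitution on the lower-triangular system. Values illustrative.
import numpy as np

def convolve(c, w, dt):
    return np.convolve(c, w)[: len(c)] * dt

def deconvolve(r, w, dt):
    """Recover c from r = convolve(c, w, dt)."""
    c = np.zeros_like(r)
    for i in range(len(r)):
        acc = np.dot(c[:i], w[i:0:-1]) * dt   # contribution of earlier samples
        c[i] = (r[i] - acc) / (w[0] * dt)
    return c

dt = 0.25
t = np.arange(0.0, 12.0, dt)
w = np.exp(-0.3 * t)                          # weighting (impulse response)
c = np.exp(-((t - 3.0) ** 2))                 # hypothetical input function
r = convolve(c, w, dt)
assert np.allclose(deconvolve(r, w, dt), c)
```

With noisy experimental data the forward substitution amplifies errors, which is why the functional (polyexponential/Weibull) treatment mentioned above is preferred for prediction.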
ERIC Educational Resources Information Center
Sengoren, Serap Kaya; Tanel, Rabia; Kavcar, Nevzat
2006-01-01
The superposition principle is used to explain many phenomena in physics. Incomplete knowledge about this topic at a basic level leads to physics students having problems in the future. As long as prospective physics teachers have difficulties in the subject, it is inevitable that high school students will have the same difficulties. The aim of…
Optimal Superpositioning of Flexible Molecule Ensembles
Gapsys, Vytautas; de Groot, Bert L.
2013-01-01
Analysis of the internal dynamics of a biological molecule requires the successful removal of overall translation and rotation. Particularly for flexible or intrinsically disordered peptides, this is a challenging task due to the absence of a well-defined reference structure that could be used for superpositioning. In this work, we started the analysis with a widely known formulation of an objective for the problem of superimposing a set of multiple molecules as variance minimization over an ensemble. A negative effect of this superpositioning method is the introduction of ambiguous rotations, where different rotation matrices may be applied to structurally similar molecules. We developed two algorithms to resolve the suboptimal rotations. The first approach minimizes the variance together with the distance of a structure to a preceding molecule in the ensemble. The second algorithm seeks for minimal variance together with the distance to the nearest neighbors of each structure. The newly developed methods were applied to molecular-dynamics trajectories and normal-mode ensembles of the Aβ peptide, RS peptide, and lysozyme. These new (to our knowledge) superpositioning methods combine the benefits of variance and distance between nearest-neighbor(s) minimization, providing a solution for the analysis of intrinsic motions of flexible molecules and resolving ambiguous rotations. PMID:23332072
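As background to the ensemble objectives discussed above, the elementary building block of any such method is the optimal rigid-body superposition of one conformation onto a reference, classically solved by the Kabsch (SVD) algorithm. The sketch below iterates Kabsch alignment against the ensemble mean; it illustrates plain variance minimization only, not the paper's nearest-neighbor-augmented objectives, and the data are random placeholders.

```python
# Kabsch superposition iterated against the ensemble mean (a sketch of plain
# variance minimization, not the paper's augmented objectives).
import numpy as np

def kabsch(P, Q):
    """Rotation matrix R minimizing ||P @ R - Q|| for centered coordinates."""
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against improper rotations
    return U @ np.diag([1.0, 1.0, d]) @ Vt

def superpose_ensemble(ensemble, n_iter=10):
    """Iteratively align an (n_structures, n_atoms, 3) ensemble to its mean."""
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # remove translation
    for _ in range(n_iter):
        ref = X.mean(axis=0)
        X = np.stack([x @ kabsch(x, ref) for x in X])
    return X

rng = np.random.default_rng(1)
ens = rng.standard_normal((20, 50, 3))        # placeholder "trajectory"
aligned = superpose_ensemble(ens)
print("ensemble variance:", aligned.var(axis=0).sum())
```

For flexible molecules this plain objective is exactly where the ambiguous rotations arise, since structurally similar frames can receive different rotation matrices; the paper's distance-to-neighbor terms are designed to break those ties.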
Nano confinement effects on dynamic and viscoelastic properties of Selenium Films
NASA Astrophysics Data System (ADS)
Yoon, Heedong; McKenna, Gregory
2015-03-01
In the current study, we use a novel nanobubble inflation technique to study nanoconfinement effects on the dynamic and viscoelastic properties of physical-vapor-deposited selenium films. Film thicknesses ranged from 60 to 260 nm. Creep experiments were performed at temperatures ranging from Tg,macroscopic - 14 °C to Tg,macroscopic + 19 °C. Time-temperature superposition and time-thickness superposition were applied to create reduced creep curves, which were compared with macroscopic data [J. Non-Cryst. Solids. 2002, 307, 790-801]. The results showed that time-temperature superposition was applicable from the glassy relaxation regime to the steady-state plateau regime. However, in the long-time response of the creep compliance, time-thickness superposition failed because of the thickness dependence of the steady-state plateau. The steady-state compliance was observed to increase with film thickness. The thickness dependence of the plateau stiffening followed a power law of D_Plateau ~ h^2.46, which is greater than observed in organic polymers, where the exponents range from 0.83 to 2.0 [Macromolecules. 2012, 45 (5), 2453-2459]. National Science Foundation Grant No. CHE 1112416 and John R. Bradford Endowment at Texas Tech.
Acral melanoma detection using a convolutional neural network for dermoscopy images.
Yu, Chanki; Yang, Sejung; Kim, Wonoh; Jung, Jinwoong; Chung, Kee-Yang; Lee, Sang Wook; Oh, Byungho
2018-01-01
Acral melanoma is the most common type of melanoma in Asians, and usually results in a poor prognosis due to late diagnosis. We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. A total of 724 dermoscopy images comprising acral melanoma (350 images from 81 patients) and benign nevi (374 images from 194 patients), all confirmed by histopathological examination, were analyzed in this study. To perform 2-fold cross-validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the accuracy of diagnosis, comparing it with evaluations by a dermatologist and by a non-expert. The accuracy (percentage of true positives and true negatives among all images) of the convolutional neural network was 83.51% and 80.23%, which was higher than the non-expert's evaluation (67.84%, 62.71%) and close to that of the expert (81.08%, 81.64%). Moreover, the convolutional neural network showed area-under-the-curve values of 0.8 and 0.84 and Youden's index values of 0.6795 and 0.6073, scores similar to those of the expert. Although further data analysis is necessary to improve accuracy, convolutional neural networks should be helpful for detecting acral melanoma in dermoscopy images of the hands and feet.
Annunziata, Roberto; Trucco, Emanuele
2016-11-01
Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method reduces significantly the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation, when used as input to a random forest classifier.
Tanikake, Yohei; Hayashi, Koji; Ogawa, Munehiro; Inagaki, Yusuke; Kawate, Kenji; Tomita, Tetsuya; Tanaka, Yasuhito
2016-12-01
A 72-year-old male patient underwent mobile-bearing posterior-stabilized total knee arthroplasty for osteoarthritis. He experienced a nontraumatic polyethylene tibial insert cone fracture 27 months after surgery. Scanning electron microscopy of the fracture surface suggested that ductile failure progressed from the posterior toward the anterior of the cone under repeated longitudinal bending stress, leading to fatigue failure at the anterior side of the cone and ultimately to fracture at its base. This analysis highlights the risk of tibial insert cone fracture due to longitudinal stress in mobile-bearing posterior-stabilized total knee arthroplasty, in which the insert is designed to conform closely to the femoral component.
Spatiochromatic Interactions between Individual Cone Photoreceptors in the Human Retina
Sabesan, Ramkumar; Sincich, Lawrence C.
2017-01-01
A remarkable feature of human vision is that the retina and brain have evolved circuitry to extract useful spatial and spectral information from signals originating in a photoreceptor mosaic with trichromatic constituents that vary widely in their relative numbers and local spatial configurations. A critical early transformation applied to cone signals is horizontal-cell-mediated lateral inhibition, which imparts a spatially antagonistic surround to individual cone receptive fields, a signature inherited by downstream neurons and implicated in color signaling. In the peripheral retina, the functional connectivity of cone inputs to the circuitry that mediates lateral inhibition is not cone-type specific, but whether these wiring schemes are maintained closer to the fovea remains unsettled, in part because central retinal anatomy is not easily amenable to direct physiological assessment. Here, we demonstrate how the precise topography of the long (L)-, middle (M)-, and short (S)-wavelength-sensitive cones in the human parafovea (1.5° eccentricity) shapes perceptual sensitivity. We used adaptive optics microstimulation to measure psychophysical detection thresholds from individual cones with spectral types that had been classified independently by absorptance imaging. Measured against chromatic adapting backgrounds, the sensitivities of L and M cones were, on average, receptor-type specific, but individual cone thresholds varied systematically with the number of preferentially activated cones in the immediate neighborhood. The spatial and spectral patterns of these interactions suggest that interneurons mediating lateral inhibition in the central retina, likely horizontal cells, establish functional connections with L and M cones indiscriminately, implying that the cone-selective circuitry supporting red–green color vision emerges after the first retinal synapse. SIGNIFICANCE STATEMENT We present evidence for spatially antagonistic interactions between individual, spectrally typed cones in the central retina of human observers using adaptive optics. Using chromatic adapting fields to modulate the relative steady-state activity of long (L)- and middle (M)-wavelength-sensitive cones, we found that single-cone detection thresholds varied predictably with the spectral demographics of the surrounding cones. The spatial scale and spectral pattern of these photoreceptor interactions were consistent with lateral inhibition mediated by retinal horizontal cells that receive nonselective input from L and M cones. These results demonstrate a clear link between the neural architecture of the visual system inputs—cone photoreceptors—and visual perception and have implications for the neural locus of the cone-specific circuitry supporting color vision. PMID:28871030
The Na+/Ca2+, K+ exchanger NCKX4 is required for efficient cone-mediated vision.
Vinberg, Frans; Wang, Tian; De Maria, Alicia; Zhao, Haiqing; Bassnett, Steven; Chen, Jeannie; Kefalov, Vladimir J
2017-06-26
Calcium (Ca2+) plays an important role in the function and health of neurons. In vertebrate cone photoreceptors, Ca2+ controls photoresponse sensitivity, kinetics, and light adaptation. Despite the critical role of Ca2+ in supporting the function and survival of cones, the mechanism for its extrusion from cone outer segments is not well understood. Here, we show that the Na+/Ca2+, K+ exchanger NCKX4 is expressed in zebrafish, mouse, and primate cones. Functional analysis of NCKX4-deficient mouse cones revealed that this exchanger is essential for the wide operating range and high temporal resolution of cone-mediated vision. We show that NCKX4 shapes the cone photoresponse together with the cone-specific NCKX2: NCKX4 acts early to limit response amplitude, while NCKX2 acts late to further accelerate response recovery. The regulation of Ca2+ by NCKX4 in cones is a novel mechanism that supports their ability to function as daytime photoreceptors and promotes their survival.
Activated mTORC1 promotes long-term cone survival in retinitis pigmentosa mice
Venkatesh, Aditya; Ma, Shan; Le, Yun Z.; Hall, Michael N.; Rüegg, Markus A.; Punzo, Claudio
2015-01-01
Retinitis pigmentosa (RP) is an inherited photoreceptor degenerative disorder that results in blindness. The disease is often caused by mutations in genes that are specific to rod photoreceptors; however, blindness results from the secondary loss of cones by a still unknown mechanism. Here, we demonstrated that the mammalian target of rapamycin complex 1 (mTORC1) is required to slow the progression of cone death during disease and that constitutive activation of mTORC1 in cones is sufficient to maintain cone function and promote long-term cone survival. Activation of mTORC1 in cones enhanced glucose uptake, retention, and utilization, leading to increased levels of the key metabolite NADPH. Moreover, cone death was delayed in the absence of the NADPH-sensitive cell death protease caspase 2, supporting the contribution of reduced NADPH in promoting cone death. Constitutive activation of mTORC1 preserved cones in 2 mouse models of RP, suggesting that the secondary loss of cones is caused mainly by metabolic deficits and is independent of a specific rod-associated mutation. Together, the results of this study address a longstanding question in the field and suggest that activating mTORC1 in cones has therapeutic potential to prolong vision in RP. PMID:25798619
Xu, Jianhua; Morris, Lynsie; Fliesler, Steven J; Sherry, David M; Ding, Xi-Qin
2011-06-01
To investigate the progression of cone dysfunction and degeneration in CNG channel subunit CNGB3 deficiency. Retinal structure and function in CNGB3(-/-) and wild-type (WT) mice were evaluated by electroretinography (ERG), lectin cytochemistry, and correlative Western blot analysis of cone-specific proteins. Cone and rod terminal integrity was assessed by electron microscopy and synaptic protein immunohistochemical distribution. Cone ERG amplitudes (photopic b-wave) in CNGB3(-/-) mice were reduced to approximately 50% of WT levels by postnatal day 15, decreasing further to approximately 30% of WT levels by 1 month and to approximately 20% by 12 months of age. Rod ERG responses (scotopic a-wave) were not affected in CNGB3(-/-) mice. Average CNGB3(-/-) cone densities were approximately 80% of WT levels at 1 month and declined slowly thereafter to only approximately 50% of WT levels by 12 months. Expression levels of M-opsin, cone transducin α-subunit, and cone arrestin in CNGB3(-/-) mice were reduced by 50% to 60% by 1 month and declined to 35% to 45% of WT levels by 9 months. In addition, cone opsin mislocalized to the outer nuclear layer and the outer plexiform layer in the CNGB3(-/-) retina. Cone and rod synaptic marker expression and terminal ultrastructure were normal in the CNGB3(-/-) retina. These findings are consistent with an early-onset, slow progression of cone functional defects and cone loss in CNGB3(-/-) mice, with the cone signaling deficits arising from disrupted phototransduction and cone loss rather than from synaptic defects.
On basis set superposition error corrected stabilization energies for large n-body clusters.
Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael
2011-10-07
In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections. © 2011 American Institute of Physics
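For background, the elementary counterpoise correction that the site-site and Valiron-Mayer function counterpoise schemes generalize to n bodies is the two-fragment Boys-Bernardi scheme, which can be written (notation varies across the literature) as:

```latex
% Two-fragment (Boys-Bernardi) counterpoise-corrected interaction energy:
% superscripts denote the basis, subscripts the system; each monomer is
% evaluated in the full dimer basis alpha U beta.
\Delta E_{\mathrm{int}}^{\mathrm{CP}}
  = E_{AB}^{\alpha\cup\beta} - E_{A}^{\alpha\cup\beta} - E_{B}^{\alpha\cup\beta}
```

The n-body generalizations sum such corrections over pairs (site-site) or over all subclusters at successive orders (Valiron-Mayer), which is what makes exact evaluation expensive for large clusters and motivates the approximate scheme proposed above.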
NASA Astrophysics Data System (ADS)
Singh, Manu Pratap; Radhey, Kishori; Kumar, Sandeep
2017-08-01
In the present paper, simultaneous classification of Orange and Apple patterns has been carried out using both Grover's iterative algorithm (Grover 1996) and Ventura's model (Ventura and Martinez, Inf. Sci. 124, 273-296, 2000), taking as search states different superpositions: a two-pattern start state containing both Orange and Apple, a one-pattern start state containing Apple, and another one-pattern start state containing Orange. It is shown that the exclusion superposition is the most suitable two-pattern search state for the simultaneous classification of patterns associated with Apples and Oranges, and that the phase-invariant superpositions are the best choice for the respective one-pattern-based search states, in both Grover's and Ventura's methods of pattern classification.
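As background to the comparison above, the standard Grover iteration can be simulated directly on a state vector: repeatedly apply the oracle sign flip and the inversion-about-the-mean diffusion operator. The sketch below simulates plain Grover search; Ventura's associative-memory variant differs mainly in the initial superposition, and the marked index here is an arbitrary stand-in for a stored pattern.

```python
# State-vector simulation of the standard Grover iteration (not Ventura's
# variant): oracle sign flip followed by inversion about the mean.
import numpy as np

n_qubits = 4
N = 2 ** n_qubits
marked = 5                                    # hypothetical pattern to retrieve

psi = np.full(N, 1.0 / np.sqrt(N))            # uniform superposition start state
n_iter = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(n_iter):
    psi[marked] *= -1.0                       # oracle: flip the marked amplitude
    psi = 2.0 * psi.mean() - psi              # diffusion: invert about the mean
print("P(marked) after %d iterations: %.3f" % (n_iter, psi[marked] ** 2))
```

After the optimal ~(π/4)√N iterations the marked amplitude dominates (about 0.96 probability for N = 16), which is the amplification both classification schemes exploit through their choice of start state.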
Photonic microwave waveforms generation based on pulse carving and superposition in time-domain
NASA Astrophysics Data System (ADS)
Xia, Yi; Jiang, Yang; Zi, Yuejiao; He, Yutong; Tian, Jing; Zhang, Xiaoyu; Luo, Hao; Dong, Ruyang
2018-05-01
A novel photonic approach for generating various microwave waveforms based on time-domain synthesis is theoretically analyzed and experimentally investigated. In this scheme, two single-drive Mach-Zehnder modulators are used for pulse shaping. After shifting the phase and superposing the pulse envelopes, the desired waveforms can be achieved in the time domain. The theoretical analysis and simulations are presented. In the experimental demonstrations, a triangular waveform, a square waveform, and a half-duty-cycle sawtooth (or reversed-sawtooth) waveform are generated successfully. By utilizing a time-multiplexing technique, a frequency-doubled sawtooth (or reversed-sawtooth) waveform with 100% duty cycle can be obtained. In addition, a fundamental-frequency sawtooth (or reversed-sawtooth) waveform with 100% duty cycle can also be achieved by the superposition of the square waveform and the frequency-doubled sawtooth waveform.
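The time-multiplexing step can be illustrated with plain signal arithmetic, leaving the photonics aside: superposing a half-duty-cycle sawtooth train with a copy delayed by half a period fills the gaps and yields a frequency-doubled, 100% duty cycle sawtooth. A toy numerical check (parameters arbitrary):

```python
# Toy illustration of time-multiplexed envelope superposition: half-duty
# sawtooth + its half-period-delayed copy = frequency-doubled full-duty sawtooth.
import numpy as np

T = 1.0                                       # fundamental period (a.u.)
t = np.linspace(0.0, 2.0 * T, 2000, endpoint=False)   # exactly two periods

# half-duty-cycle sawtooth train: ramps during the first half of each period
half_saw = np.where((t % T) < T / 2, (t % T) / (T / 2), 0.0)
delayed = np.roll(half_saw, len(t) // 4)      # circular shift = delay by T/2 here

doubled = half_saw + delayed                  # 100% duty cycle, period T/2
assert np.allclose(doubled, (t % (T / 2)) / (T / 2))
```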
Superposition-model analysis of rare-earth doped BaY2F8
NASA Astrophysics Data System (ADS)
Magnani, N.; Amoretti, G.; Baraldi, A.; Capelletti, R.
The energy level schemes of four rare-earth dopants (Ce3+, Nd3+, Dy3+, and Er3+) in BaY2F8, as determined by optical absorption spectra, were fitted with a single-ion Hamiltonian and analysed within Newman's Superposition Model for the crystal field. A unified picture for the four dopants was obtained by assuming a distortion of the F- ligand cage around the RE site; within the framework of the Superposition Model, this distortion is found to have a marked anisotropic behaviour for heavy rare earths, while it turns into an isotropic expansion of the nearest-neighbour polyhedron for light rare earths. It is also inferred that the substituting ion may occupy an off-centre position with respect to the original Y3+ site in the crystal.
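For reference, the superposition-model ansatz used in such analyses takes each crystal-field parameter as a sum of single-ligand contributions with a power-law distance dependence; a standard form (conventions for the coordination factors vary across the literature) is:

```latex
% Newman superposition model: crystal-field parameters as sums of
% single-ligand contributions with power-law distance dependence.
B_{l}^{m} = \sum_{j} \bar{B}_{l}(R_j)\, K_{l}^{m}(\theta_j,\varphi_j),
\qquad
\bar{B}_{l}(R_j) = \bar{B}_{l}(R_0)\left(\frac{R_0}{R_j}\right)^{t_l}
% (R_j, theta_j, phi_j): ligand coordinates; K_l^m: coordination factors;
% B-bar_l: intrinsic parameters; t_l: adjustable power-law exponents.
```

It is this ligand-by-ligand decomposition that lets a fitted distortion of the F- cage be read off directly from the intrinsic parameters, as done above.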
NASA Astrophysics Data System (ADS)
Zaima, Kazunori; Sasaki, Koichi
2016-08-01
We investigated transient phenomena in a premixed burner flame with the superposition of a pulsed dielectric barrier discharge (DBD). The length of the flame was shortened by the superposition of the DBD, indicating the activation of combustion chemical reactions with the help of the plasma. In addition, we observed modulation of the top position of the unburned gas region and the formation of local minima in the axial distribution of the optical emission intensity of OH. These experimental results reveal an oscillation in the rates of combustion chemical reactions in response to the activation by the pulsed DBD. The period of the oscillation was 0.18-0.2 ms, which corresponds to an eigenfrequency of the plasma-assisted combustion reaction system.
Coherent inflation for large quantum superpositions of levitated microspheres
NASA Astrophysics Data System (ADS)
Romero-Isart, Oriol
2017-12-01
We show that coherent inflation (CI), namely quantum dynamics generated by inverted conservative potentials acting on the center of mass of a massive object, is an enabling tool to prepare large spatial quantum superpositions in a double-slit experiment. Combined with cryogenic, extreme high vacuum, and low-vibration environments, we argue that it is experimentally feasible to exploit CI to prepare the center of mass of a micrometer-sized object in a spatial quantum superposition comparable to its size. In such a hitherto unexplored parameter regime gravitationally-induced decoherence could be unambiguously falsified. We present a protocol to implement CI in a double-slit experiment by letting a levitated microsphere traverse a static potential landscape. Such a protocol could be experimentally implemented with an all-magnetic scheme using superconducting microspheres.
Optical threshold secret sharing scheme based on basic vector operations and coherence superposition
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen
2015-04-01
We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with a (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is first divided into n shadow images by means of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherence superposition of any two shadow images. Compared with published encryption techniques that focus narrowly on information encryption, the proposed method realizes information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents power from being kept centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.
An Interactive Graphics Program for Assistance in Learning Convolution.
ERIC Educational Resources Information Center
Frederick, Dean K.; Waag, Gary L.
1980-01-01
A program has been written for the interactive computer graphics facility at Rensselaer Polytechnic Institute that is designed to assist the user in learning the mathematical technique of convolving two functions. Because convolution can be represented graphically by a sequence of steps involving folding, shifting, multiplying, and integration, it…
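The fold-shift-multiply-sum picture that the program teaches translates directly into the discrete convolution sum y[n] = Σₖ x[k] h[n-k]. A minimal sketch that mirrors those graphical steps literally, checked against a library routine:

```python
# Discrete convolution spelled out as the fold-shift-multiply-sum steps.
import numpy as np

def convolve_by_steps(x, h):
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):                    # shift the folded h by n
        for k in range(len(x)):
            if 0 <= n - k < len(h):            # overlap region only
                y[n] += x[k] * h[n - k]        # multiply overlapping samples, sum
    return y

x = np.array([1.0, 2.0, 3.0])
h = np.array([0.5, 0.5])
assert np.allclose(convolve_by_steps(x, h), np.convolve(x, h))
```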
Distribution and specificity of S-cone ("blue cone") signals in subcortical visual pathways.
Martin, Paul R; Lee, Barry B
2014-03-01
We review here the distribution of S-cone signals and properties of S-cone recipient receptive fields in subcortical pathways. Nearly everything we know about S-cone signals in the subcortical visual system comes from the study of visual systems in cats and primates (monkeys); in this review, we concentrate on results from macaque and marmoset monkeys. We discuss segregation of S-cone recipient (blue-on and blue-off) receptive fields in the dorsal lateral geniculate nucleus and describe their receptive field properties. We treat in some detail the question of detecting weak S-cone signals as an introduction for newcomers to the field. Finally, we briefly consider the question on how S-cone signals are distributed among nongeniculate targets.
The organization of the cone photoreceptor mosaic measured in the living human retina
Sawides, Lucie; de Castro, Alberto; Burns, Stephen A.
2016-01-01
The cone photoreceptors represent the initial fundamental sampling step in the acquisition of visual information. While recent advances in adaptive optics have provided increasingly precise estimates of the packing density and spacing of the cone photoreceptors in the living human retina, little is known about the local cone arrangement beyond a tendency towards hexagonal packing. We analyzed the cone mosaic in data from 10 normal subjects. A technique was applied to calculate the local average cone mosaic structure which allowed us to determine the hexagonality, spacing and orientation of local regions. Using cone spacing estimates, we find the expected decrease in cone density with retinal eccentricity and higher densities along the horizontal meridians as opposed to the vertical meridians. Orientation analysis reveals an asymmetry in the local cone spacing of the hexagonal packing, with cones having a larger local spacing along the horizontal direction. This horizontal/vertical asymmetry is altered at eccentricities larger than 2 degrees in the superior meridian and 2.5 degrees in the inferior meridian. Analysis of hexagon orientations in the central 1.4° of the retina show a tendency for orientation to be locally coherent, with orientation patches consisting of between 35 and 240 cones. PMID:27353225
Temporal and spatial characteristics of cone degeneration in RCS rats.
Huang, Yan Ming; Yin, Zheng Qin; Liu, Kang; Huo, Shu Jia
2011-03-01
The temporal and spatial characteristics of cone degeneration in the Royal College of Surgeons (RCS) rat were studied to provide information for treatment strategies of retinitis pigmentosa. Nonpigmented dystrophic RCS rats (RCS) and pigmented nondystrophic RCS rats (controls) were used. Cone processes were visualized with peanut agglutinin (PNA). Cone development appears to have been completed by postnatal day 21 (P21) in both the RCS and control rats. Signs of cone degeneration were obvious by P30, with shorter outer segments (OSs) and enlarged inner segments (ISs). At that time, 81.7% of the cones retained stained ISs. The rate of IS density decline was slower in the peripheral, nasal, and superior retina, and only 43.6% of the cones with ISs were present at P45. By P60, PNA-labeled cone ISs were distorted and restricted to the peripheral retina, and by P90, few cone pedicles were detected. Our findings indicate that therapeutic strategies aimed at rescuing cones in the degenerating retina should be applied before P21 and no later than P45 while substantial numbers of cones retain their ISs. Either the middle or peripheral regions of the nasal and superior retina are the best locations for transplantation strategies.
NASA Astrophysics Data System (ADS)
Hersch, Roger David; Crété, Frédérique
2004-12-01
Dot gain is different when dots are printed alone, printed in superposition with one ink or printed in superposition with two inks. In addition, the dot gain may also differ depending on the solid ink on which the considered halftone layer is superposed. In a previous research project, we developed a model for computing the effective surface coverage of a dot according to its superposition conditions. In the present contribution, we improve the Yule-Nielsen modified Neugebauer model by integrating into it our effective dot surface coverage computation model. Calibration of the reproduction curves mapping nominal to effective surface coverages in every superposition condition is carried out by fitting effective dot surfaces which minimize the sum of squared differences between the measured reflection density spectra and the reflection density spectra predicted according to the Yule-Nielsen modified Neugebauer model. In order to predict the reflection spectrum of a patch, its known nominal surface coverage values are converted into effective coverage values by weighting the contributions from different reproduction curves according to the weights of the contributing superposition conditions. We analyze the colorimetric prediction improvement brought by our extended dot surface coverage model for clustered-dot offset prints, thermal transfer prints and ink-jet prints. The color differences induced by the differences between measured reflection spectra and reflection spectra predicted according to the new dot surface estimation model are quantified on 729 different cyan, magenta, yellow patches covering the full color gamut. As a reference, these differences are also computed for the classical Yule-Nielsen modified spectral Neugebauer model incorporating a single halftone reproduction curve for each ink. Taking into account dot surface coverages according to different superposition conditions considerably improves the predictions of the Yule-Nielsen modified Neugebauer model. In the case of offset prints, the mean difference between predictions and measurements expressed in CIE-LAB CIE-94 ΔE94 values is reduced at 100 lpi from 1.54 to 0.90 (accuracy improvement factor: 1.7) and at 150 lpi from 1.87 to 1.00 (accuracy improvement factor: 1.8). Similar improvements have been observed for a thermal transfer printer at 600 dpi, at lineatures of 50 and 75 lpi. In the case of an ink-jet printer at 600 dpi, the mean ΔE94 value is reduced at 75 lpi from 3.03 to 0.90 (accuracy improvement factor: 3.4) and at 100 lpi from 3.08 to 0.91 (accuracy improvement factor: 3.4).
Rapid Recovery of Visual Function Associated with Blue Cone Ablation in Zebrafish
Hagerman, Gordon F.; Noel, Nicole C. L.; Cao, Sylvia Y.; DuVal, Michèle G.; Oel, A. Phillip; Allison, W. Ted
2016-01-01
Hurdles in the treatment of retinal degeneration include managing the functional rewiring of surviving photoreceptors and integration of any newly added cells into the remaining second-order retinal neurons. Zebrafish are the premier genetic model for such questions, and we present two new transgenic lines allowing us to contrast vision loss and recovery following conditional ablation of specific cone types: UV or blue cones. The ablation of each cone type proved to be thorough (killing 80% of cells in each intended cone class), specific, and cell-autonomous. We assessed the loss and recovery of vision in larvae via the optomotor behavioural response (OMR). This visually mediated behaviour decreased to about 5% or 20% of control levels following ablation of UV or blue cones, respectively (p<0.05). We further assessed ocular photoreception by measuring the effects of UV light on body pigmentation, and observed that photoreceptor deficits and recovery occurred (p<0.01) with a timeline coincident with the OMR results. This corroborated and extended previous conclusions that UV cones are required photoreceptors for modulating body pigmentation, addressing assumptions that were unavoidable in previous experiments. Functional vision recovery following UV cone ablation was robust, as measured by both assays, returning to control levels within four days. In contrast, robust functional recovery following blue cone ablation was unexpectedly rapid, returning to normal levels within 24 hours after ablation. Ablation of cones led to increased proliferation in the retina, though the rapid recovery of vision following blue cone ablation was demonstrated not to be mediated by blue cone regeneration. Thus, rapid visual recovery occurs following ablation of some, but not all, cone subtypes, suggesting an opportunity to contrast and dissect the sources and mechanisms of outer retinal recovery during cone photoreceptor death and regeneration. PMID:27893779
Twomey, Megan C.; Wolfenbarger, Sierra N.; Woods, Joanna L.; Gent, David H.
2015-01-01
Knowledge of processes leading to crop damage is central to devising rational approaches to disease management. Multiple experiments established that infection of hop cones by Podosphaera macularis was most severe if inoculation occurred within 15 to 21 days after bloom. This period of infection was associated with the most pronounced reductions in alpha acids and cone color, and accelerated maturation of cones. Susceptibility of cones to powdery mildew decreased progressively after the transition from bloom to cone development, although complete immunity to the disease failed to develop. Maturation of cone tissues was associated with multiple significant effects on the pathogen, manifested as reduced germination of conidia, diminished frequency of penetration of bracts, lengthening of the latent period, and decreased sporulation. Challenge of cones with P. macularis at juvenile developmental stages also led to a greater frequency of colonization by a complex of saprophytic, secondary fungi. Since no developmental stage of cones was immune to powdery mildew, the incidence of powdery mildew continued to increase over time and exceeded 86% by late summer. In field experiments with a moderately susceptible cultivar, the incidence of cones with powdery mildew was statistically similar whether fungicide applications were made season-long or targeted only to the juvenile stages of cone development. These studies establish that partial ontogenic resistance develops in hop cones and may influence multiple phases of the infection process and pathogen reproduction. The results further reinforce the concept that the efficacy of a fungicide program may depend largely on the timing of a small number of sprays during a relatively brief period of cone development. However, in practice, targeting fungicide and other management tactics to periods of enhanced juvenile susceptibility may be complicated by a high degree of asynchrony in cone development and other factors that are situation-dependent. PMID:25811173
Xu, Jianhua; Morris, Lynsie; Fliesler, Steven J.; Sherry, David M.
2011-01-01
Purpose. To investigate the progression of cone dysfunction and degeneration in CNG channel subunit CNGB3 deficiency. Methods. Retinal structure and function in CNGB3−/− and wild-type (WT) mice were evaluated by electroretinography (ERG), lectin cytochemistry, and correlative Western blot analysis of cone-specific proteins. Cone and rod terminal integrity was assessed by electron microscopy and synaptic protein immunohistochemical distribution. Results. Cone ERG amplitudes (photopic b-wave) in CNGB3−/− mice were reduced to approximately 50% of WT levels by postnatal day 15, decreasing further to approximately 30% of WT levels by 1 month and to approximately 20% by 12 months of age. Rod ERG responses (scotopic a-wave) were not affected in CNGB3−/− mice. Average CNGB3−/− cone densities were approximately 80% of WT levels at 1 month and declined slowly thereafter to only approximately 50% of WT levels by 12 months. Expression levels of M-opsin, cone transducin α-subunit, and cone arrestin in CNGB3−/− mice were reduced by 50% to 60% by 1 month and declined to 35% to 45% of WT levels by 9 months. In addition, cone opsin mislocalized to the outer nuclear layer and the outer plexiform layer in the CNGB3−/− retina. Cone and rod synaptic marker expression and terminal ultrastructure were normal in the CNGB3−/− retina. Conclusions. These findings are consistent with an early-onset, slow progression of cone functional defects and cone loss in CNGB3−/− mice, with the cone signaling deficits arising from disrupted phototransduction and cone loss rather than from synaptic defects. PMID:21273547
Zukoshi, Reo; Savelli, Ilaria; Novales Flamarique, Iñigo
2018-04-01
Many vertebrates have cone photoreceptors, termed UV cones, that are most sensitive to ultraviolet (UV) light. The ecological functions that these cones contribute to are seldom known, though they are suspected of improving foraging and communication in a variety of fishes. In this study, we used several spectral backgrounds to assess the contribution of UV and violet cones, or long wavelength (L) cones, to the foraging performance of juvenile Cumaná guppy, Poecilia reticulata, and marine stickleback, Gasterosteus aculeatus. Regardless of whether or not the light spectrum contained wavelengths below 450 nm (the limiting wavelength for UV cone stimulation), the foraging performance of both species was statistically the same, as judged by the mean distance and angle associated with attacks on prey (Daphnia magna). Our experiments also showed that the foraging performance of sticklebacks when only the double cones (and, almost exclusively, the L cones) were active was similar to that when all cones were functional, demonstrating that the double cone was sufficient for prey detection. This result indicates that foraging potentially relied on an achromatic channel serving prey motion detection, as the two spectral cone types that make up the double cone [maximally sensitive to middle (M) and long (L) wavelengths, respectively] form the input to the achromatic channel in cyprinid fishes, and double cones are widely associated with achromatic tasks in other vertebrates including reptiles and birds. Stickleback performance was also substantially better when foraging under a 100% linearly polarized light field than under an unpolarized light field. Together, our results suggest that in some teleost species UV cones exert visually-mediated ecological functions different from foraging, and furthermore that polarization sensitivity could improve the foraging performance of sticklebacks. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Winters, Andrew C.
Careful observational work has demonstrated that the tropopause is typically characterized by a three-step pole-to-equator structure, with each break between steps in the tropopause height associated with a jet stream. While the two jet streams, the polar and subtropical jets, typically occupy different latitude bands, their separation can occasionally vanish, resulting in a vertical superposition of the two jets. A cursory examination of a number of historical and recent high-impact weather events over North America and the North Atlantic indicates that superposed jets can be an important component of their evolution. Consequently, this dissertation examines two recent jet superposition cases, the 18--20 December 2009 Mid-Atlantic Blizzard and the 1--3 May 2010 Nashville Flood, in an effort (1) to determine the specific influence that a superposed jet can have on the development of a high-impact weather event and (2) to illuminate the processes that facilitated the production of a superposition in each case. An examination of these cases from a basic-state variable and PV inversion perspective demonstrates that elements of both the remote and local synoptic environment are important to consider while diagnosing the development of a jet superposition. Specifically, the process of jet superposition begins with the remote production of a cyclonic (anticyclonic) tropopause disturbance at high (low) latitudes. The cyclonic circulation typically originates at polar latitudes, while organized tropical convection can encourage the development of an anticyclonic circulation anomaly within the tropical upper-troposphere. The concurrent advection of both anomalies towards middle latitudes subsequently allows their individual circulations to laterally displace the location of the individual tropopause breaks. Once the two circulation anomalies position the polar and subtropical tropopause breaks in close proximity to one another, elements within the local environment, such as proximate convection or transverse vertical circulations, can work to further deform the tropopause and to aid in the production of the two-step tropopause structure characteristic of a superposed jet. The analysis also demonstrates that the intensified transverse vertical circulation that accompanies a superposed jet serves as the primary mechanism through which it can influence the evolution of a high-impact weather event.
Cone opsins, colour blindness and cone dystrophy: Genotype-phenotype correlations.
Gardner, J C; Michaelides, M; Hardcastle, A J
2016-05-25
X-linked cone photoreceptor disorders caused by mutations in the OPN1LW (L) and OPN1MW (M) cone opsin genes on chromosome Xq28 include a range of conditions from mild stable red-green colour vision deficiencies to severe cone dystrophies causing progressive loss of vision and blindness. Advances in molecular genotyping and functional analyses of causative variants, combined with deep retinal phenotyping, are unravelling genetic mechanisms underlying the variability of cone opsin disorders.
Development of a full ice-cream cone model for halo CME structures
NASA Astrophysics Data System (ADS)
Na, Hyeonock; Moon, Yong-Jae
2015-04-01
The determination of three-dimensional parameters (e.g., radial speed, angular width, source location) of Coronal Mass Ejections (CMEs) is very important for space weather forecasting. To estimate these parameters, several cone models based on a flat cone or a shallow ice-cream cone with a spherical front have been suggested. In this study, we investigate which cone model is appropriate for halo CME morphology using 33 CMEs that are identified as halo CMEs by one spacecraft (SOHO or STEREO-A or B) and as limb CMEs by the others. From geometrical parameters of these CMEs such as their front curvature, we find that near full ice-cream cone CMEs (28 events) are dominant over shallow ice-cream cone CMEs (5 events). We therefore develop a new full ice-cream cone model by assuming that a full ice-cream cone consists of many flat cones with different heights and angular widths. This model is carried out in the following steps: (1) construct a cone for a given height and angular width, (2) project the cone onto the sky plane, (3) select the points comprising the outer boundary, (4) minimize the difference between the estimated projection points and the observed ones. We apply this model to several halo CMEs and compare the results with those from other methods such as a Graduated Cylindrical Shell model and a geometrical triangulation method.
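The four fitting steps lend themselves to a compact numerical sketch. The toy below follows the same construct-project-outline-minimize loop; the data, parameter names, and the choice to fit only height and half-angle (holding the source direction fixed) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import ConvexHull, cKDTree

def cone_points(h, alpha, lon, lat, n=40):
    # step 1: cone of height h and half-angle alpha, apex at Sun centre,
    # axis along +x, then rotated toward the source direction (lon, lat)
    s, phi = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 2*np.pi, n))
    pts = np.stack([h*s, h*s*np.tan(alpha)*np.cos(phi),
                    h*s*np.tan(alpha)*np.sin(phi)], axis=-1).reshape(-1, 3)
    cl, sl, cb, sb = np.cos(lon), np.sin(lon), np.cos(lat), np.sin(lat)
    Rz = np.array([[cl, -sl, 0], [sl, cl, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, -sb], [0, 1, 0], [sb, 0, cb]])
    return pts @ (Rz @ Ry).T

def outline(h, alpha, lon, lat):
    # steps 2-3: project onto the sky plane (drop the line-of-sight x)
    # and keep only the points on the outer boundary
    yz = cone_points(h, alpha, lon, lat)[:, 1:]
    return yz[ConvexHull(yz).vertices]

def cost(params, observed_yz, lon, lat):
    # step 4: mean squared distance from the observed boundary points
    # to the model outline
    h, alpha = params
    d, _ = cKDTree(outline(h, alpha, lon, lat)).query(observed_yz)
    return np.mean(d**2)

# observed_yz would come from the coronagraph difference image; the call
# would then look like:
# fit = minimize(cost, x0=[5.0, 0.5], args=(observed_yz, lon, lat),
#                method="Nelder-Mead")
```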
Loss and gain of cone types in vertebrate ciliary photoreceptor evolution.
Musser, Jacob M; Arendt, Detlev
2017-11-01
Ciliary photoreceptors are a diverse cell type family that comprises the rods and cones of the retina and other related cell types such as pineal photoreceptors. Ciliary photoreceptor evolution has been dynamic during vertebrate evolution, with numerous gains and losses of opsin and phototransduction genes, and changes in their expression. For example, early mammals lost all but two cone opsins, indicating loss of cone receptor types in response to a nocturnal lifestyle. Our review focuses on the comparison of specifying transcription factors and cell type-specific transcriptome data in vertebrate retinae to build and test hypotheses on ciliary photoreceptor evolution. Regarding cones, recent data reveal that a combination of factors specific for long-wavelength sensitive opsin (Lws)-cones in non-mammalian vertebrates (Thrb and Rxrg) is found across all differentiating cone photoreceptors in mice. This suggests that mammalian ancestors lost all but one ancestral cone type, the Lws-cone. We test this hypothesis by a correlation analysis of cone transcriptomes in mouse and chick, and find that, indeed, transcriptomes of all mouse cones are most highly correlated to avian Lws-cones. These findings underscore the importance of specifying transcription factors in tracking cell type evolution, and shed new light on the mechanisms of cell type loss and gain in retina evolution. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Cone and Seed Maturation of Southern Pines
James P. Barnett
1976-01-01
If slightly reduced yields and viability are acceptable, loblolly and slash pine cone collections can begin 2 to 3 weeks before maturity if the cones are stored before processing. Longleaf (P. palustris Mill.) pine cones should be collected only when mature, as storage decreased germination of seeds from immature cones. Biochemical analyses to determine reducing sugar...
NASA Astrophysics Data System (ADS)
Jang, S.; Moon, Y.; Na, H.
2012-12-01
We have compared CME-associated shock arrival times at Earth based on the WSA-ENLIL model with three cone models, using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters from Michalek et al. (2007) as well as associated interplanetary (IP) shocks. For this study we consider three different cone models (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine CME cone parameters (radial velocity, angular width and source location), which are used as input parameters for the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the elliptical cone model is 10 hours, which is about 2 hours smaller than those of the other models. However, this value is still larger than that (8.7 hours) of an empirical model by Kim et al. (2007). We are investigating several possible causes of the relatively large errors of the WSA-ENLIL cone model, such as CME-CME interaction, background solar wind speed, and/or CME density enhancement.
The Na+/Ca2+, K+ exchanger NCKX4 is required for efficient cone-mediated vision
Vinberg, Frans; Wang, Tian; De Maria, Alicia; Zhao, Haiqing; Bassnett, Steven; Chen, Jeannie; Kefalov, Vladimir J
2017-01-01
Calcium (Ca2+) plays an important role in the function and health of neurons. In vertebrate cone photoreceptors, Ca2+ controls photoresponse sensitivity, kinetics, and light adaptation. Despite the critical role of Ca2+ in supporting the function and survival of cones, the mechanism for its extrusion from cone outer segments is not well understood. Here, we show that the Na+/Ca2+, K+ exchanger NCKX4 is expressed in zebrafish, mouse, and primate cones. Functional analysis of NCKX4-deficient mouse cones revealed that this exchanger is essential for the wide operating range and high temporal resolution of cone-mediated vision. We show that NCKX4 shapes the cone photoresponse together with the cone-specific NCKX2: NCKX4 acts early to limit response amplitude, while NCKX2 acts late to further accelerate response recovery. The regulation of Ca2+ by NCKX4 in cones is a novel mechanism that supports their ability to function as daytime photoreceptors and promotes their survival. DOI: http://dx.doi.org/10.7554/eLife.24550.001 PMID:28650316
Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber
NASA Astrophysics Data System (ADS)
Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.
2017-03-01
We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.
Rock images classification by using deep convolution neural network
NASA Astrophysics Data System (ADS)
Cheng, Guojian; Guo, Wenhui
2017-08-01
Granularity analysis is one of the most essential issues in rock authentication under the microscope. To improve the efficiency and accuracy of traditional manual work, a convolutional neural network based method is proposed for granularity analysis of thin section images, which chooses and extracts features from image samples while building a classifier to recognize the granularity of input image samples. 4800 samples from the Ordos basin are used for experiments in the HSV, YCbCr and RGB colour spaces, respectively. On the test dataset, the correct rate in the RGB colour space is 98.5%, and the results in the HSV and YCbCr colour spaces are similarly reliable. The results show that the convolutional neural network can classify the rock images with high reliability.
Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akabani, G.; Hawkins, W.G.; Eckblade, M.B.
1999-01-01
The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
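In this spirit, a minimal sketch of kernel-convolution dosimetry is to FFT-convolve the quantitative SPECT cumulated-activity map with a dose-point kernel. The arrays and the stand-in Gaussian kernel below are hypothetical, not the study's I-131 kernel:

```python
import numpy as np
from scipy.signal import fftconvolve

def dose_from_activity(activity, kernel):
    """Absorbed-dose map as the 3-D convolution of a cumulated-activity
    map (e.g., Bq s per voxel) with a dose-point kernel (Gy per decay),
    evaluated with FFTs as in a 3D-DFT approach. Valid for homogeneous
    soft tissue, where the kernel is spatially invariant."""
    return fftconvolve(activity, kernel, mode="same")

# toy check: uniform sphere of activity, stand-in Gaussian kernel
x, y, z = np.indices((64, 64, 64)) - 32
activity = (x**2 + y**2 + z**2 <= 10**2).astype(float)
kernel = np.exp(-(x**2 + y**2 + z**2) / 8.0)
dose = dose_from_activity(activity, kernel)
```

The shift-invariance assumption is exactly what the Monte Carlo comparison tests: it breaks down near tissue inhomogeneities, where a single kernel no longer applies.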
Convolute laminations — a theoretical analysis: example of a Pennsylvanian sandstone
NASA Astrophysics Data System (ADS)
Visher, Glenn S.; Cunningham, Russ D.
1981-03-01
Data from an outcropping laminated interval were collected and analyzed to test the applicability of a theoretical model describing instability of layered systems. Rayleigh-Taylor wave perturbations result at the interface between fluids of contrasting density, viscosity, and thickness. In the special case where reverse density and viscosity interlaminations are developed, the deformation response produces a single wave with predictable amplitudes, wavelengths, and amplification rates. Physical measurements from both the outcropping section and modern sediments suggest the usefulness of the model for the interpretation of convolute laminations. Internal characteristics of the stratigraphic interval, and the developmental sequence of convoluted beds, are used to document the developmental history of these structures.
Detecting of foreign object debris on airfield pavement using convolution neural network
NASA Astrophysics Data System (ADS)
Cao, Xiaoguang; Gu, Yufeng; Bai, Xiangzhi
2017-11-01
It is of great practical significance to detect foreign object debris (FOD) on airfield pavement in a timely and accurate manner, because FOD is a fatal threat to runway safety in airports. In this paper, a new FOD detection framework based on the Single Shot MultiBox Detector (SSD) is proposed. Two strategies are proposed to better solve the FOD detection problem: making the detection network lighter and using dilated convolution. The advantages mainly include: (i) the network structure becomes lighter, which speeds up the detection task and enhances detection accuracy; (ii) dilated convolution is applied in the network structure to handle smaller FOD. Thus, we obtain a faster and more accurate detection system.
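For readers unfamiliar with the second strategy: dilated convolution enlarges the receptive field without adding weights, which helps a detector see context around small targets. A minimal PyTorch sketch follows; the layer sizes are illustrative, not the paper's network:

```python
import torch
import torch.nn as nn

# A 3x3 kernel with dilation=2 samples a 5x5 neighbourhood with the same
# 9 weights; padding=2 keeps the spatial size unchanged.
standard = nn.Conv2d(32, 32, kernel_size=3, padding=1)
dilated = nn.Conv2d(32, 32, kernel_size=3, padding=2, dilation=2)

x = torch.randn(1, 32, 128, 128)
assert standard(x).shape == dilated(x).shape == x.shape
```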
Rods and cones contain antigenically distinctive S-antigens.
Nork, T M; Mangini, N J; Millecchia, L L
1993-09-01
S-antigen (48 kDa protein or arrestin) is known to be present in rod photoreceptors. Its localization in cones is less clear with several conflicting reports among various species examined. This study employed three different anti-S-antigen antibodies (a48K, a polyclonal antiserum and two monoclonal antibodies, MAb A9-C6 and MAb 5c6.47) and examined their localization in rods and cones of human and cat retinas. To identify the respective cone types, an enzyme histochemical technique for carbonic anhydrase (CA) was employed to distinguish blue cones (CA-negative) from red or green cones (CA-positive). S-antigen localization was then examined by immunocytochemical staining of adjacent sections. In human retinas, a similar labeling pattern was seen with both a48K and MAb A9-C6, i.e., the rods and blue-sensitive cones were strongly positive, whereas the red- or green-sensitive cones showed little immunoreactivity. All human photoreceptors showed reactivity to MAb 5c6.47. In the cat retina, only CA-positive cones could be found. As in the human retina, both rods and cones of the cat were positive for MAb 5c6.47. A difference from the labeling pattern in human retina was noted for the other S-antigen antibodies; a48K labeled rods and all of the cones, whereas MAb A9-C6 reacted strongly with the rods but showed no cone staining. These results suggest that both rods and cones contain S-antigen but that they are antigenically distinctive.
Sensing Super-position: Visual Instrument Sensor Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2006-01-01
The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates signals from visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
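A toy version of such an image-to-sound mapping sweeps the image columns over time while mapping rows to sinusoid frequencies weighted by pixel brightness. All parameter choices here are illustrative assumptions, not the VISOR implementation:

```python
import numpy as np

def image_to_audio(img, duration=1.0, fs=22050, fmin=200.0, fmax=8000.0):
    """Distribute a 2-D brightness image in time: columns map to time
    (left to right), rows map to frequency (top = high), and brightness
    sets the amplitude of each sinusoidal component."""
    n_rows, n_cols = img.shape
    t = np.linspace(0.0, duration, int(fs * duration), endpoint=False)
    freqs = np.geomspace(fmax, fmin, n_rows)             # one per image row
    col = np.minimum((t / duration * n_cols).astype(int), n_cols - 1)
    audio = sum(img[r, col] * np.sin(2*np.pi*freqs[r]*t) for r in range(n_rows))
    return audio / np.max(np.abs(audio))                 # normalise to [-1, 1]
```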
NASA Astrophysics Data System (ADS)
Ferhatoglu, Erhan; Cigeroglu, Ender; Özgüven, H. Nevzat
2018-07-01
In this paper, a new modal superposition method based on a hybrid mode shape concept is developed for the determination of steady state vibration response of nonlinear structures. The method is developed specifically for systems having nonlinearities where the stiffness of the system may take different limiting values. Stiffness variation of these nonlinear systems enables one to define different linear systems corresponding to each value of the limiting equivalent stiffness. Moreover, the response of the nonlinear system is bounded by the confinement of these linear systems. In this study, a modal superposition method utilizing novel hybrid mode shapes, which are defined as linear combinations of the modal vectors of the limiting linear systems, is proposed to determine the periodic response of nonlinear systems. In this method the response of the nonlinear system is written in terms of hybrid modes instead of the modes of the underlying linear system. This decreases the number of modes that must be retained for an accurate solution, which in turn reduces the number of nonlinear equations to be solved. In this way, computational time for response calculation is directly curtailed. In the solution, the equations of motion are converted to a set of nonlinear algebraic equations by using the describing function approach, and the numerical solution is obtained by using Newton's method with arc-length continuation. The method developed is applied to two different systems: a lumped parameter model and a finite element model. Several case studies are performed and the accuracy and computational efficiency of the proposed modal superposition method with hybrid mode shapes are compared with those of the classical modal superposition method which utilizes the mode shapes of the underlying linear system.
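A minimal sketch of the basis construction is given below for generic matrices; the describing-function harmonic balance and the arc-length continuation solver are omitted, and the function name and interface are assumptions for illustration:

```python
import numpy as np
from scipy.linalg import eigh

def hybrid_modal_basis(M, K_low, K_high, n_modes):
    """Hybrid modal basis spanning the modes of the two limiting linear
    systems between which the nonlinear stiffness varies. The nonlinear
    response is then expanded in this basis instead of the modes of a
    single underlying linear system."""
    _, phi_low = eigh(K_low, M, subset_by_index=[0, n_modes - 1])
    _, phi_high = eigh(K_high, M, subset_by_index=[0, n_modes - 1])
    basis = np.hstack([phi_low, phi_high])
    q, _ = np.linalg.qr(basis)   # orthonormalise the combined mode set
    return q
```

Projecting the equations of motion onto this reduced basis shrinks the set of nonlinear algebraic equations handed to the Newton/arc-length solver, which is where the reported computational saving comes from.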
Quantum inertia stops superposition: Scan Quantum Mechanics
NASA Astrophysics Data System (ADS)
Gato-Rivera, Beatriz
2017-08-01
Scan Quantum Mechanics is a novel interpretation of some aspects of quantum mechanics in which the superposition of states is only an approximate effective concept. Quantum systems scan all possible states in the superposition and switch randomly and very rapidly among them. A crucial property that we postulate is quantum inertia, which increases whenever a constituent is added, or the system is perturbed with all kinds of interactions. Once the quantum inertia Iq reaches a critical value Icr for an observable, the switching among its different eigenvalues stops and the corresponding superposition comes to an end, leaving behind a system with a well defined value of that observable. Consequently, increasing the mass, temperature, gravitational strength, etc. of a quantum system increases its quantum inertia until the superposition of states disappears for all the observables and the system transmutes into a classical one. Moreover, the process could be reversible. Entanglement can only occur between quantum systems because an exact synchronization between the switchings of the systems involved must be established in the first place, and classical systems do not have any switchings to start with. Future experiments might determine the critical inertia Icr corresponding to different observables, which translates into a critical mass Mcr for fixed environmental conditions as well as critical temperatures, critical electric and magnetic fields, etc. In addition, this proposal implies a new radiation mechanism from astrophysical objects with strong gravitational fields, giving rise to non-thermal synchrotron emission, that could contribute to neutron star formation. Superconductivity, superfluidity, Bose-Einstein condensates, and any other physical phenomena at very low temperatures must be reanalyzed in the light of this interpretation, as well as mesoscopic systems in general.
Directional imaging of the retinal cone mosaic
NASA Astrophysics Data System (ADS)
Vohnsen, Brian; Iglesias, Ignacio; Artal, Pablo
2004-05-01
We describe a near-IR scanning laser ophthalmoscope that allows the retinal cone mosaic to be imaged in the human eye in vivo without the use of wave-front correction techniques. The method takes advantage of the highly directional quality of cone photoreceptors that permits efficient coupling of light to individual cones and subsequent detection of most directional components of the backscattered light produced by the light-guiding effect of the cones. We discuss details of the system and describe cone-mosaic images obtained under different conditions.
Superficial dose evaluation of four dose calculation algorithms
NASA Astrophysics Data System (ADS)
Cao, Ying; Yang, Xiaoyu; Yang, Zhen; Qiu, Xiaoping; Lv, Zhiping; Lei, Mingjun; Liu, Gui; Zhang, Zijian; Hu, Yongmei
2017-08-01
Accurate superficial dose calculation is of major importance because of skin toxicity in radiotherapy, especially within the initial 2 mm depth, which is considered more clinically relevant. The aim of this study is to evaluate the superficial dose calculation accuracy of four commonly used algorithms in commercially available treatment planning systems (TPS) by Monte Carlo (MC) simulation and film measurements. The superficial dose in a simple geometrical phantom of size 30 cm×30 cm×30 cm was calculated by PBC (Pencil Beam Convolution), AAA (Analytical Anisotropic Algorithm) and AXB (Acuros XB) in the Eclipse system and by CCC (Collapsed Cone Convolution) in the Raystation system, under the conditions of a source to surface distance (SSD) of 100 cm and a field size (FS) of 10×10 cm2. The EGSnrc (BEAMnrc/DOSXYZnrc) program was used to simulate the central axis dose distribution of a Varian Trilogy accelerator, combined with measurements of the superficial dose distribution by an extrapolation method using multilayer radiochromic films, to estimate the dose calculation accuracy of the four algorithms in the superficial region recommended in detail by the ICRU (International Commission on Radiation Units and Measurements) and the ICRP (International Commission on Radiological Protection). In the superficial region, good agreement was achieved between MC simulation and the film extrapolation method, with mean differences less than 1%, 2% and 5% for 0°, 30° and 60°, respectively. The relative skin dose errors were 0.84%, 1.88% and 3.90%; the mean dose discrepancies (0°, 30° and 60°) between each of the four algorithms and MC simulation were (2.41±1.55%, 3.11±2.40%, and 1.53±1.05%), (3.09±3.00%, 3.10±3.01%, and 3.77±3.59%), (3.16±1.50%, 8.70±2.84%, and 18.20±4.10%) and (14.45±4.66%, 10.74±4.54%, and 3.34±3.26%) for AXB, CCC, AAA and PBC, respectively. Monte Carlo simulation verified the feasibility of superficial dose measurements with multilayer Gafchromic films, and the superficial dose calculation accuracy of the four algorithms ranked AXB > CCC > AAA > PBC. Care should be taken when using the AAA and PBC algorithms for superficial dose calculation.
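The film extrapolation method referenced above can be illustrated with a short sketch: fit the dose read from a stack of films against each film's effective depth, then evaluate the fit at the surface or at the 70 µm skin depth. All numbers below are hypothetical placeholders, not the study's data:

```python
import numpy as np

# per-film doses from a multilayer radiochromic stack; the effective depth
# of each film is taken at its centre (values are illustrative only)
depths = np.array([0.14, 0.42, 0.70, 0.98])   # mm
doses = np.array([55.2, 63.8, 70.1, 74.9])    # % of Dmax, measured

coeffs = np.polyfit(depths, doses, deg=2)     # low-order fit near the surface
surface_dose = np.polyval(coeffs, 0.0)        # extrapolated surface dose
skin_dose = np.polyval(coeffs, 0.07)          # 70 um, the ICRP skin depth
```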
SU-F-T-151: Measurement Evaluation of Skin Dose in Scanning Proton Beam Therapy for Breast Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, J; Nichols, E; Strauss, D
Purpose: To measure the skin dose and compare it with the calculated dose from a treatment planning system (TPS) for breast cancer treatment using scanning proton beam therapy (SPBT). Methods: A single en-face-beam SPBT plan was generated by a commercial TPS for two breast cancer patients. The treatment volumes were the entire breasts (218 cc and 1500 cc) prescribed to 50.4 Gy (RBE) in 28 fractions. A range shifter of 5 cm water equivalent thickness was used. The organ at risk (skin) was defined to be 5 mm thick from the surface. The skin doses were measured in water with an ADCL-calibrated parallel plate (PP) chamber. The measured data were compared with the values calculated in the TPS. Skin dose calculations can be subject to uncertainties created by the definition of the external contour and the limitations of correction-based algorithms, such as proton convolution superposition. Hence, the external contours were expanded by 0, 3 mm and 1 cm to include additional pixels for dose calculation. In addition, to examine the effects of the cloth gown on the skin dose, the skin dose measurements were conducted with and without the gown. Results: On average the measured skin dose was 4% higher than the calculated values. At deeper depths, the measured and calculated doses were in better agreement (< 2%). Large discrepancies occurred for the dose calculated without external expansion, due to volume averaging. The addition of the gown increased the measured skin dose by only 0.4%. Conclusion: The implemented TPS underestimated the skin dose for breast treatments. Superficial dose calculation without external expansion would result in large errors for SPBT for breast cancer.
Saenz, Daniel L.; Paliwal, Bhudatt R.; Bayouth, John E.
2014-01-01
ViewRay, a novel technology providing soft-tissue imaging during radiotherapy, is investigated for treatment planning capabilities by assessing treatment plan dose homogeneity and conformity compared with linear accelerator plans. ViewRay offers both adaptive radiotherapy and image guidance. The combination of cobalt-60 (Co-60) with 0.35 Tesla magnetic resonance imaging (MRI) allows for magnetic resonance (MR)-guided intensity-modulated radiation therapy (IMRT) delivery with multiple beams. This study investigated head and neck, lung, and prostate treatment plans to understand what is possible on ViewRay and to narrow focus toward sites with optimal dosimetry. The goal is not to provide a rigorous assessment of planning capabilities, but rather a first-order demonstration of ViewRay planning abilities. Images, structure sets, points, and dose from treatment plans created in Pinnacle for patients in our clinic were imported into ViewRay. The same objectives were used to assess plan quality, and all critical structures were treated as similarly as possible. Homogeneity index (HI), conformity index (CI), and volume receiving <20% of the prescription dose (DRx) were calculated to assess the plans. The 95% confidence intervals were recorded for all measurements and presented with the associated bars in graphs. The homogeneity index (D5/D95) had a 1-5% inhomogeneity increase for head and neck, 3-8% for lung, and 4-16% for prostate. CI revealed a modest conformity increase for lung. The volume receiving 20% of the prescription dose increased 2-8% for head and neck and up to 4% for lung and prostate. Overall, head and neck Co-60 ViewRay treatments planned with its Monte Carlo treatment planning software were comparable with 6 MV plans computed with the convolution superposition algorithm on the Pinnacle treatment planning system. PMID:24872603
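For reference, the two plan-quality metrics used above can be computed directly from a dose grid and a target mask. In the sketch below, HI follows the study's D5/D95 definition; the CI shown is the common prescription-isodose-volume over target-volume ratio, which is an assumption, since the abstract does not spell out its formula:

```python
import numpy as np

def plan_indices(dose, target_mask, rx):
    """Homogeneity index (D5/D95) and a conformity index from a 3-D dose
    grid (Gy), a boolean target mask, and the prescription dose rx."""
    target_dose = dose[target_mask]
    d5 = np.percentile(target_dose, 95)    # minimum dose to the hottest 5%
    d95 = np.percentile(target_dose, 5)    # dose covering 95% of the target
    hi = d5 / d95
    ci = np.count_nonzero(dose >= rx) / np.count_nonzero(target_mask)
    return hi, ci
```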
Coding performance of the Probe-Orbiter-Earth communication link
NASA Technical Reports Server (NTRS)
Divsalar, D.; Dolinar, S.; Pollara, F.
1993-01-01
The coding performance of the Probe-Orbiter-Earth communication link is analyzed and compared for several cases. It is assumed that the coding system consists of a convolutional code at the Probe, a quantizer and another convolutional code at the Orbiter, and two cascaded Viterbi decoders or a combined decoder on the ground.
Sequential Syndrome Decoding of Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to these codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.
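To make the stack algorithm concrete, here is a self-contained toy decoder for a rate-1/2, constraint-length-3 code (octal generators 7 and 5) over a binary symmetric channel, using the standard Fano metric. This illustrates the generic stack search, not the paper's syndrome-based formulation:

```python
import heapq
from math import log2

G = (0b111, 0b101)  # rate-1/2, constraint length 3 (octal generators 7, 5)

def encode_bit(u, state):
    reg = (u << 2) | state                           # bits: u, s1, s0
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return out, (reg >> 1) & 0b11                    # new state: (u, s1)

def encode(bits):
    state, out = 0, []
    for u in bits:
        o, state = encode_bit(u, state)
        out.append(o)
    return out

def stack_decode(received, msg_len, p=0.05, R=0.5):
    """Sequential decoding with the stack algorithm: repeatedly extend the
    partial path with the best accumulated Fano metric."""
    good = log2(2 * (1 - p)) - R     # per-bit Fano metric when bits agree
    bad = log2(2 * p) - R            # per-bit Fano metric when bits differ
    stack = [(0.0, (), 0)]           # (-metric, decoded bits, encoder state)
    while stack:
        neg_metric, path, state = heapq.heappop(stack)   # best path first
        if len(path) == msg_len:
            return list(path)
        r = received[len(path)]
        for u in (0, 1):
            out, nxt = encode_bit(u, state)
            branch = sum(good if o == rb else bad for o, rb in zip(out, r))
            heapq.heappush(stack, (neg_metric - branch, path + (u,), nxt))

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[3] = (1 - rx[3][0], rx[3][1])     # flip one channel bit
assert stack_decode(rx, len(msg)) == msg
```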
NASA Astrophysics Data System (ADS)
Zeng, X. G.; Liu, J. J.; Zuo, W.; Chen, W. L.; Liu, Y. X.
2018-04-01
Circular structures are widely distributed across the lunar surface. The most typical of these are lunar impact craters and lunar domes. In this approach, we use a convolutional neural network to classify lunar circular structures from lunar images.
Cone-Specific Promoters for Gene Therapy of Achromatopsia and Other Retinal Diseases
Ye, Guo-Jie; Budzynski, Ewa; Sonnentag, Peter; Nork, T. Michael; Sheibani, Nader; Gurel, Zafer; Boye, Sanford L.; Peterson, James J.; Boye, Shannon E.; Hauswirth, William W.; Chulay, Jeffrey D.
2016-01-01
Adeno-associated viral (AAV) vectors containing cone-specific promoters have rescued cone photoreceptor function in mouse and dog models of achromatopsia, but cone-specific promoters have not been optimized for use in primates. Using AAV vectors administered by subretinal injection, we evaluated a series of promoters based on the human L-opsin promoter, or a chimeric human cone transducin promoter, for their ability to drive gene expression of green fluorescent protein (GFP) in mice and nonhuman primates. Each of these promoters directed high-level GFP expression in mouse photoreceptors. In primates, subretinal injection of an AAV-GFP vector containing a 1.7-kb L-opsin promoter (PR1.7) achieved strong and specific GFP expression in all cone photoreceptors and was more efficient than a vector containing the 2.1-kb L-opsin promoter that was used in AAV vectors that rescued cone function in mouse and dog models of achromatopsia. A chimeric cone transducin promoter that directed strong GFP expression in mouse and dog cone photoreceptors was unable to drive GFP expression in primate cones. An AAV vector expressing a human CNGB3 gene driven by the PR1.7 promoter rescued cone function in the mouse model of achromatopsia. These results have informed the design of an AAV vector for treatment of patients with achromatopsia. PMID:26603570
Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification
Pang, Shan; Yang, Xinyi
2016-01-01
In recent years, some deep learning methods have been developed and applied to image classification applications, such as convolutional neuron network (CNN) and deep belief network (DBN). However they are suffering from some problems like local minima, slow convergence rate, and intensive human intervention. In this paper, we propose a rapid learning method, namely, deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and fast training of ELM. It uses multiple alternate convolution layers and pooling layers to effectively abstract high level features from input images. Then the abstracted features are fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to reduce dimensionality of features greatly, thus saving much training time and computation resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods. PMID:27610128
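Since stochastic pooling is the less familiar ingredient here, a numpy sketch of the training-time forward rule follows; at test time, the original stochastic pooling scheme replaces sampling with a probability-weighted average. The function and its parameters are illustrative, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_pool(fmap, size=2):
    """Training-time stochastic pooling over non-overlapping size x size
    regions: sample one activation per region with probability
    proportional to its (non-negative, post-ReLU) value."""
    h, w = fmap.shape
    out = np.empty((h // size, w // size))
    for i in range(0, h - h % size, size):
        for j in range(0, w - w % size, size):
            region = fmap[i:i+size, j:j+size].ravel()
            total = region.sum()
            p = region / total if total > 0 else np.full(region.size, 1.0 / region.size)
            out[i // size, j // size] = rng.choice(region, p=p)
    return out
```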
A pre-trained convolutional neural network based method for thyroid nodule diagnosis.
Ma, Jinlian; Wu, Fa; Zhu, Jiang; Xu, Dong; Kong, Dexing
2017-01-01
In ultrasound images, most thyroid nodules have heterogeneous appearances with various internal components and vague boundaries, so it is difficult for physicians to discriminate malignant thyroid nodules from benign ones. In this study, we propose a hybrid method for thyroid nodule diagnosis, which is a fusion of two pre-trained convolutional neural networks (CNNs) with different convolutional layers and fully-connected layers. Firstly, the two networks pre-trained on the ImageNet database are separately trained. Secondly, we fuse the feature maps learned by the trained convolutional filters, pooling and normalization operations of the two CNNs. Finally, with the fused feature maps, a softmax classifier is used to diagnose thyroid nodules. The proposed method is validated on 15,000 ultrasound images collected from two local hospitals. Experimental results show that the proposed CNN based methods can accurately and effectively diagnose thyroid nodules. In addition, the fusion of the two CNN based models leads to significant performance improvement, with an accuracy of 83.02%±0.72%. These results demonstrate the potential clinical applications of this method. Copyright © 2016 Elsevier B.V. All rights reserved.
Enhancement of digital radiography image quality using a convolutional neural network.
Sun, Yuewen; Li, Litao; Cong, Peng; Wang, Zhentao; Guo, Xiaojing
2017-01-01
Digital radiography systems are widely used for noninvasive security checks and medical imaging examinations. However, such systems are limited by lower image quality in terms of spatial resolution and signal to noise ratio. In this study, we explored whether the image quality acquired by a digital radiography system can be improved with a modified convolutional neural network that generates high-resolution images with reduced noise from the original low-quality images. An experiment on a test dataset containing 5 X-ray images showed that the proposed method outperformed the traditional methods (i.e., bicubic interpolation and the 3D block-matching approach) by about 1.3 dB in peak signal to noise ratio (PSNR), while keeping processing time within one second. Experimental results demonstrated that a residual-to-residual (RTR) convolutional neural network remarkably improved the image quality of object structural details by increasing the image resolution and reducing image noise. Thus, this study indicated that applying this RTR convolutional neural network system is useful for improving the image quality acquired by digital radiography systems.
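PSNR, the figure of merit quoted above, is quick to compute; note that a 1.3 dB gain corresponds to roughly a 26% reduction in mean squared error. The helper below is a generic sketch, not the paper's code:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images
    (assumes the images differ, i.e., nonzero MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 20 * np.log10(max_val) - 10 * np.log10(mse)
```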
Alcoholism Detection by Data Augmentation and Convolutional Neural Network with Stochastic Pooling.
Wang, Shui-Hua; Lv, Yi-Ding; Sui, Yuxiu; Liu, Shuai; Wang, Su-Jing; Zhang, Yu-Dong
2017-11-17
Alcohol use disorder (AUD) is an important brain disease. It alters the brain structure. Recently, scholars have tended to use computer vision based techniques to detect AUD. We collected 235 subjects, 114 alcoholic and 121 non-alcoholic. Among the 235 images, 100 images were used as the training set, and a data augmentation method was used. The remaining 135 images were used as the test set. Further, we chose the latest powerful technique, the convolutional neural network (CNN), based on a convolutional layer, rectified linear unit layer, pooling layer, fully connected layer, and softmax layer. We also compared three different pooling techniques: max pooling, average pooling, and stochastic pooling. The results showed that our method achieved a sensitivity of 96.88%, a specificity of 97.18%, and an accuracy of 97.04%. Our method was better than three state-of-the-art approaches. Besides, stochastic pooling performed better than max pooling and average pooling. We validated that a CNN with five convolution layers and two fully connected layers performed the best. The GPU yielded a 149× acceleration in training and a 166× acceleration in testing, compared to the CPU.
NASA Technical Reports Server (NTRS)
Callier, F. M.; Desoer, C. A.
1973-01-01
A class of multivariable, nonlinear time-varying feedback systems with an unstable convolution subsystem as feedforward and a time-varying nonlinear gain as feedback was considered. The impulse response of the convolution subsystem is the sum of a finite number of increasing exponentials multiplied by nonnegative powers of the time t, a term that is absolutely integrable, and an infinite series of delayed impulses. The main result is a theorem. It essentially states that if the unstable convolution subsystem can be stabilized by a constant feedback gain F and if the incremental gain of the difference between the nonlinear gain function and F is sufficiently small, then the nonlinear system is L(p)-stable for any p between one and infinity. Furthermore, the solutions of the nonlinear system depend continuously on the inputs in any L(p)-norm. A fixed point theorem is crucial in deriving the above theorem.
Forecasting Shortleaf Pine Seed Crops in the Ouachita Mountains
Michael G. Shelton; Robert F. Wittwer
2004-01-01
We field tested a cone-rating system to forecast seed crops from 1993 to 1996 in 28 shortleaf pine (Pinus echinata Mill.) stands, which represented a wide range of stand conditions. Sample trees were visually assigned to one of three cone-density classes based on cone spacing, occurrence of cones in clusters, and distribution of cones within the...
Are seed and cone pathogens causing significant losses in Pacific Northwest seed orchards?
E.E. Nelson; W.G. Thies; C.Y. Li
1986-01-01
Cones systematically collected in 1983 from eight Douglas-fir seed orchards in western Washington and Oregon yielded large numbers of common molds. Fungi isolated from apparently healthy, developing cones were similar to those from necrotic cones. Necrosis in cones aborted in early stages of development was apparently not associated with pathogenic fungi or bacteria....
Nonclassical thermal-state superpositions: Analytical evolution law and decoherence behavior
NASA Astrophysics Data System (ADS)
Meng, Xiang-guo; Goan, Hsi-Sheng; Wang, Ji-suo; Zhang, Ran
2018-03-01
Employing the integration technique within normal products of bosonic operators, we present normal product representations of thermal-state superpositions and investigate their nonclassical features, such as quadrature squeezing, sub-Poissonian distribution, and partial negativity of the Wigner function. We also analytically and numerically investigate their evolution law and decoherence characteristics in an amplitude-decay model via the variations of the probability distributions and the negative volumes of Wigner functions in phase space. The results indicate that the evolution formulas of the two thermal component states for amplitude decay can be viewed as having the same integral form as a displaced thermal state ρ(V, d), but governed by the combined action of photon loss and thermal noise. In addition, larger values of the displacement d and noise V lead to faster decoherence of thermal-state superpositions.
Automated identification of cone photoreceptors in adaptive optics retinal images.
Li, Kaccie Y; Roorda, Austin
2007-05-01
In making noninvasive measurements of the human cone mosaic, the task of labeling each individual cone is unavoidable. Manual labeling is a time-consuming process, setting the motivation for the development of an automated method. An automated algorithm for labeling cones in adaptive optics (AO) retinal images is implemented and tested on real data. The optical fiber properties of cones aided the design of the algorithm. Out of 2153 manually labeled cones from six different images, the automated method correctly identified 94.1% of them. The agreement between the automated and the manual labeling methods varied from 92.7% to 96.2% across the six images. Results between the two methods disagreed for 1.2% to 9.1% of the cones. Voronoi analysis of large montages of AO retinal images confirmed the general hexagonal-packing structure of retinal cones as well as the general cone density variability across portions of the retina. The consistency of our measurements demonstrates the reliability and practicality of having an automated solution to this problem.
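A simplified version of such automated labeling is smoothing followed by local-maximum detection with a minimum-separation window tied to the expected cone spacing. The sketch below is generic, not Li and Roorda's published algorithm, and the parameter values are placeholders:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def label_cones(image, sigma=1.0, min_separation=5, rel_threshold=0.1):
    """Toy cone labeler for an AO retinal image: smooth, then keep pixels
    that are local maxima within a (min_separation x min_separation) window
    and exceed a relative intensity threshold."""
    smoothed = gaussian_filter(np.asarray(image, float), sigma)
    peaks = smoothed == maximum_filter(smoothed, size=min_separation)
    peaks &= smoothed > rel_threshold * smoothed.max()
    return np.argwhere(peaks)  # (row, col) coordinates of candidate cones
```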
Micro-cones on a liquid interface in high electric field: Ionization effects
NASA Astrophysics Data System (ADS)
Subbotin, Andrey V.; Semenov, Alexander N.
2018-02-01
We formulate and explore electrohydrodynamic equations for conductive liquids taking dissociation/recombination processes into account and discover a novel type of liquid cones which carry both surface and net bulk charge and can be formed on a liquid interface in an electric field. The bulk charge is generated by the corona discharge due to a high electric field at the cone apex. We establish correlation between the cone angle and physical parameters of the liquid on the one hand and the electric current passing through the cone on the other hand. It is shown that the current strongly increases when the cone angle tends to a critical value which is a function of the dielectric permittivity of the liquid. The cone stability with respect to axially symmetric perturbations is analyzed. It is shown that the cones with apex angles close to the critical angle are likely to be stable. The effect of the imposed flow on the cone apex stability is also discussed.
Filopodial dynamics and growth cone stabilization in Drosophila visual circuit development
Özel, Mehmet Neset; Langen, Marion; Hassan, Bassem A; Hiesinger, P Robin
2015-01-01
Filopodial dynamics are thought to control growth cone guidance, but the types and roles of growth cone dynamics underlying neural circuit assembly in a living brain are largely unknown. To address this issue, we have developed long-term, continuous, fast and high-resolution imaging of growth cone dynamics from axon growth to synapse formation in cultured Drosophila brains. Using R7 photoreceptor neurons as a model we show that >90% of the growth cone filopodia exhibit fast, stochastic dynamics that persist despite ongoing stepwise layer formation. Correspondingly, R7 growth cones stabilize early and change their final position by passive dislocation. N-Cadherin controls both fast filopodial dynamics and growth cone stabilization. Surprisingly, loss of N-Cadherin causes no primary targeting defects, but destabilizes R7 growth cones to jump between correct and incorrect layers. Hence, growth cone dynamics can influence wiring specificity without a direct role in target recognition and implement simple rules during circuit assembly. DOI: http://dx.doi.org/10.7554/eLife.10721.001 PMID:26512889
If Lava Mingled with Ground Ice on Mars
NASA Astrophysics Data System (ADS)
Martel, L. M. V.
2001-06-01
Clusters of small cones on the lava plains of Mars have caught the attention of planetary geologists for years for a simple and compelling reason: ground ice. These cones look like volcanic rootless cones found on Earth where hot lava flows over wet surfaces such as marshes, shallow lakes or shallow aquifers. Steam explosions fragment the lava into small pieces that fall into cone-shaped debris piles. Peter Lanagan, Alfred McEwen, Laszlo Keszthelyi (University of Arizona), and Thorvaldur Thordarson (University of Hawaii) recently identified groups of cones in the equatorial region of Mars using new high-resolution Mars Orbiter Camera (MOC) images. They report that the Martian cones have the same appearance, size, and geologic setting as rootless cones found in Iceland. If the Martian and terrestrial cones formed in the same way, then the Martian cones mark places where ground ice or groundwater existed at the time the lavas surged across the surface, estimated to be less than 10 million years ago, and where ground ice may still be today.
Evolutionary image simplification for lung nodule classification with convolutional neural networks.
Lückehe, Daniel; von Voigt, Gabriele
2018-05-29
Understanding the decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute the relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn the structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the structures learned by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified without changing the meaning of the images with respect to the structures learned by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides examples of simplified images, we analyze how the run time develops. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm with a trained convolutional neural network is well suited to the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.
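One way to picture the method is a (1+1)-style evolutionary loop that keeps simplifying pixels as long as the network's decision is unchanged. The sketch below is a schematic reading of the approach, not the authors' implementation; `predict` stands for any trained classifier returning a label:

```python
import numpy as np

def simplify(image, predict, n_iter=2000, rng=np.random.default_rng(0)):
    """Greedily grow a set of pixels replaced by the image mean, accepting a
    mutation only if the classifier's label stays the same."""
    target = predict(image)
    mask = np.zeros(image.shape, dtype=bool)
    fill = image.mean()
    for _ in range(n_iter):
        trial = mask.copy()
        idx = tuple(rng.integers(0, s) for s in image.shape)
        trial[idx] = True                         # mutate: simplify one pixel
        candidate = np.where(trial, fill, image)
        if predict(candidate) == target:          # selection criterion
            mask = trial
    return np.where(mask, fill, image), mask
```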
Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.
NASA Astrophysics Data System (ADS)
Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.
2016-12-01
Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlation of seismic signals recorded at a pair of locations. In the case of ambient noise sources, convergence towards the surface wave Green's functions is obtained under the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources offers more possibilities for interferometry: the use of controlled sources makes it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is the same in both the X and Y directions (30 m), an arrangement known as "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from the dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher mode from the fundamental mode for surface waves. Potential application to surface wave cancellation is also envisaged.
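At its core, the correlational branch of this workflow reduces to cross-correlating two receiver traces; in interferometry the result approximates the inter-receiver Green's function up to amplitude and filtering factors. A minimal sketch, with trace names illustrative:

```python
import numpy as np

def trace_correlation(u_a, u_b):
    """Full cross-correlation of two equal-rate traces, returning lags
    (in samples) and correlation values."""
    c = np.correlate(u_a, u_b, mode="full")
    lags = np.arange(-len(u_b) + 1, len(u_a))
    return lags, c
```

The convolutional counterpart swaps `np.correlate` for `np.convolve`, which is the option opened up by having controlled, repeatable sources.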
Dual energy approach for cone beam artifacts correction
NASA Astrophysics Data System (ADS)
Han, Chulhee; Choi, Shinkook; Lee, Changwoo; Baek, Jongduk
2017-03-01
Cone beam computed tomography systems generate 3D volumetric images, which provide further morphological information compared to radiography and tomosynthesis systems. However, images reconstructed by the FDK algorithm contain cone beam artifacts when the cone angle is large. To reduce these artifacts, a two-pass algorithm has been proposed. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by high density materials, and provides an effective method to estimate the error images (i.e., cone beam artifact images) they produce. While this approach is simple and effective at small cone angles (i.e., 5-7 degrees), its correction performance degrades as the cone angle increases. In this work, we propose a new method to reduce the cone beam artifacts using a dual energy technique. The basic idea of the proposed method is to estimate the error images generated by the high density materials more reliably. To do this, projection data of the high density materials are extracted from dual energy CT projection data using a material decomposition technique, and then reconstructed by iterative reconstruction with total-variation regularization. The reconstructed high density materials are used to estimate the error images from the original FDK images. The performance of the proposed method is compared with the two-pass algorithm using root mean square errors. The results show that the proposed method reduces the cone beam artifacts more effectively, especially at large cone angles.
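The material decomposition step can be illustrated under a simplified monoenergetic two-material model: each pixel's low- and high-kVp line integrals form a 2×2 linear system in the two material thicknesses. The coefficients below are placeholders, and a real implementation must handle polychromatic spectra and beam hardening:

```python
import numpy as np

# Illustrative effective attenuation coefficients (1/cm) of the two basis
# materials (soft tissue, dense material) at the low- and high-kVp settings.
A = np.array([[0.20, 0.50],
              [0.17, 0.30]])

def decompose(p_low, p_high):
    """Per-pixel decomposition: solve A @ [t_soft, t_dense] = [p_low, p_high]
    for the material path lengths."""
    P = np.stack([p_low.ravel(), p_high.ravel()])  # shape (2, n_pixels)
    T = np.linalg.solve(A, P)
    return T[0].reshape(p_low.shape), T[1].reshape(p_low.shape)
```

The dense-material projections recovered this way are what feed the iterative, total-variation-regularized reconstruction described above.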
Growth cones are actively influenced by substrate-bound adhesion molecules.
Burden-Gulley, S M; Payne, H R; Lemmon, V
1995-06-01
As axons advance to appropriate target tissues during development, their growth cones encounter a variety of cell adhesion molecules (CAMs) and extracellular matrix molecules (ECM molecules). Purified CAMs and ECM molecules influence neurite outgrowth in vitro and are thought to have a similar function in vivo. For example, when retinal ganglion cell (RGC) neurons are grown on different CAM and ECM molecule substrates in vitro, their growth cones display distinctive morphologies (Payne et al., 1992). Similarly, RGC growth cones in vivo have distinctive shapes at different points in the pathway from the eye to the tectum, suggesting the presence of localized cues that determine growth cone behaviors such as pathway selection at choice points. In this report, time-lapse video microscopy was utilized to examine dynamic transformations of RGC growth cones as they progressed from L1/8D9, N-cadherin, or laminin onto a different substrate. Contact made by the leading edge of a growth cone with a new substrate resulted in a rapid and dramatic alteration in growth cone morphology. In some cases, the changes encompassed the entire growth cone including those regions not in direct contact with the new substrate. In addition, the growth cones displayed a variety of behavioral responses that were dependent upon the order of substrate contact. These studies demonstrate that growth cones are actively affected by the substrate, and suggest that abrupt changes in the molecular composition of the growth cone environment are influential during axonal pathfinding.
Fyk-Kolodziej, Bozena; Qin, Pu; Pourcho, Roberta G
2003-09-08
It has been generally accepted that rod photoreceptor cells in the mammalian retina make synaptic contact with only a single population of rod bipolar cells, whereas cone photoreceptors contact a variety of cone bipolar cells. This assumption has been challenged in rodents by reports of a type of cone bipolar cell which receives input from both rods and cones. Questions remained as to whether similar pathways are present in other mammals. We have used an antiserum against the glutamate transporter GLT1-B to visualize a population of cone bipolar cells in the cat retina which make flat contacts with axon terminals of both rod and cone photoreceptor cells. These cells are identified as OFF-cone bipolar cells and correspond morphologically to type cb1 (CBa2) cone bipolar cells which are a major source of input to OFF-beta ganglion cells in the cat retina. The GLT1-B transporter was also localized to processes making flat contacts with photoreceptor terminals in rat and rabbit retinas. Examination of tissue processed for the GluR1 glutamate receptor subunit showed that cb1 cone bipolar cells, like their rodent counterparts, express this alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA)-selective receptor at their contacts with rod spherules. Thus, a direct excitatory pathway from rod photoreceptors to OFF-cone bipolar cells appears to be a common feature of mammalian retinas. Copyright 2003 Wiley-Liss, Inc.
Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method.
Li, Haisen S; Chetty, Indrin J; Solberg, Timothy D
2008-05-01
The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of that segment, whereas in the conventional convolution method ("average-based convolution"), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of the population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) were used to generate motion PDFs, which were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared with the planned dose evaluated using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (> 30 fractions) regimen, the discrepancy in total dose due to the interplay effect was negligible.
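The contrast between the two convolution methods can be sketched directly: the segment-based method blurs each segment's static dose with the motion PDF of that segment before summing, instead of blurring the summed dose with one fraction-averaged PDF. A schematic version assuming 1D motion along one grid axis; the names and discretization are illustrative:

```python
import numpy as np
from scipy.ndimage import convolve1d

def segment_based_dose(static_doses, motion_pdfs, axis=0):
    """Sum over segments of (static segment dose convolved with that
    segment's motion PDF), both sampled on the same 1D motion grid."""
    total = np.zeros_like(static_doses[0], dtype=float)
    for dose, pdf in zip(static_doses, motion_pdfs):
        w = np.asarray(pdf, float)
        total += convolve1d(dose, w / w.sum(), axis=axis, mode="nearest")
    return total
```

The average-based method corresponds to calling the same routine with every `pdf` replaced by the fraction-averaged PDF, which is exactly what washes out the interplay effect.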
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cates, J; Drzymala, R
2015-06-15
Purpose: The purpose of this study was to develop and use a novel phantom to evaluate the accuracy and usefulness of the Leksell GammaPlan convolution-based dose calculation algorithm compared with the current TMR10 algorithm. Methods: A novel phantom was designed to fit the Leksell Gamma Knife G Frame and could accommodate various materials in the form of one-inch-diameter cylindrical plugs. The plugs were split axially to allow EBT2 film placement. Film measurements were made during two experiments. The first utilized plans generated on a homogeneous acrylic phantom setup using the TMR10 algorithm, with various materials inserted into the phantom during film irradiation to assess the effect on delivered dose of unplanned heterogeneities upstream in the beam path. The second experiment utilized plans made on CT scans of different heterogeneous setups, with one plan using the TMR10 dose calculation algorithm and the second using the convolution-based algorithm. Materials used to introduce heterogeneities included air, LDPE, polystyrene, Delrin, Teflon, and aluminum. Results: The data show that, as would be expected, heterogeneities in the beam path do induce dose delivery errors when using the TMR10 algorithm, with the largest errors due to the heterogeneities with electron densities most different from that of water, i.e., air, Teflon, and aluminum. Additionally, the convolution algorithm did account for the heterogeneous material and provided a more accurate predicted dose, in extreme cases up to a 7-12% improvement over the TMR10 algorithm. The convolution algorithm's expected dose was accurate to within 3% in all cases. Conclusion: This study shows that the convolution algorithm is an improvement over the TMR10 algorithm when heterogeneities are present. More work is needed to determine the heterogeneity size/volume limits within which this improvement exists, and in what clinical and/or research cases it is relevant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spagnolo, Nicolo; Consorzio Interuniversitario per le Scienze Fisiche della Materia, piazzale Aldo Moro 5, I-00185 Roma; Sciarrino, Fabio
We show that the quantum states generated by universal optimal quantum cloning of a single photon represent a universal set of quantum superpositions resilient to decoherence. We adopt the Bures distance as a tool to investigate the persistence of quantum coherence of these quantum states. According to this analysis, the process of universal cloning realizes a class of quantum superpositions that exhibits a covariance property in lossy configuration over the complete set of polarization states in the Bloch sphere.
Mehdi Tajvidi; Robert H. Falk; John C. Hermanson
2005-01-01
The time-temperature superposition principle was applied to the viscoelastic properties of a kenaf-fiber/high-density polyethylene (HDPE) composite, and its validity was tested. With a composite of 50% kenaf fibers, 48% HDPE, and 2% compatibilizer, frequency scans from a dynamic mechanical analyzer were performed in the range of 0.1-10 Hz at five different...
A System for Discovering Bioengineered Threats by Knowledge Base Driven Mining of Toxin Data
2004-08-01
RMSD cut-off and select a residue substitution matrix. The user is also allowed...in the sense that after superpositioning, the RMSD between the substructures is no more than the cut-off RMSD. * Residue substitutions are allowed...during superpositioning. Default RMSD cut-off and residue substitution matrix are provided. Users can specify their own RMSD cut-offs as well as
Quantum biology at the cellular level--elements of the research program.
Bordonaro, Michael; Ogryzko, Vasily
2013-04-01
Quantum biology is emerging as a new field at the intersection between fundamental physics and biology, promising novel insights into the nature and origin of biological order. We discuss several elements of QBCL (quantum biology at cellular level) - a research program designed to extend the reach of quantum concepts to higher than molecular levels of biological organization. We propose a new general way to address the issue of environmentally induced decoherence and macroscopic superpositions in biological systems, emphasizing the 'basis-dependent' nature of these concepts. We introduce the notion of 'formal superposition' and distinguish it from that of Schroedinger's cat (i.e., a superposition of macroscopically distinct states). Whereas the latter notion presents a genuine foundational problem, the former one contradicts neither common sense nor observation, and may be used to describe cellular 'decision-making' and adaptation. We stress that the interpretation of the notion of 'formal superposition' should involve non-classical correlations between molecular events in a cell. Further, we describe how better understanding of the physics of Life can shed new light on the mechanism driving evolutionary adaptation (viz., 'Basis-Dependent Selection', BDS). Experimental tests of BDS and the potential role of synthetic biology in closing the 'evolvability mechanism' loophole are also discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Zhang, Guangyu; Jiang, Xin; Wang, Enge
2003-04-18
We report the synthesis of tubular graphite cones using a chemical vapor deposition method. The cones have nanometer-sized tips, micrometer-sized roots, and hollow interiors with a diameter ranging from about 2 to several tens of nanometers. The cones are composed of cylindrical graphite sheets; a continuous shortening of the graphite layers from the interior to the exterior makes them cone-shaped. All of the tubular graphite cones have a faceted morphology. The constituent graphite sheets have identical chiralities of a zigzag type across the entire diameter, imparting structural control to tubular-based carbon structures. The tubular graphite cones have potential for use as tips for scanning probe microscopy, but with greater rigidity and easier mounting than currently used carbon nanotubes.
Human Blue Cone Opsin Regeneration Involves Secondary Retinal Binding with Analog Specificity.
Srinivasan, Sundaramoorthy; Fernández-Sampedro, Miguel A; Morillo, Margarita; Ramon, Eva; Jiménez-Rosés, Mireia; Cordomí, Arnau; Garriga, Pere
2018-03-27
Human color vision is mediated by the red, green, and blue cone visual pigments. Cone opsins are G-protein-coupled receptors consisting of an opsin apoprotein covalently linked to the 11-cis-retinal chromophore. All visual pigments share a common evolutionary origin, and red and green cone opsins exhibit a higher homology, whereas blue cone opsin shows more resemblance to the dim light receptor rhodopsin. Here we show that chromophore regeneration in photoactivated blue cone opsin exhibits intermediate transient conformations and a secondary retinoid binding event with slower binding kinetics. We also detected a fine-tuning of the conformational change in the photoactivated blue cone opsin binding site that alters the retinal isomer binding specificity. Furthermore, the molecular models of active and inactive blue cone opsins show specific molecular interactions in the retinal binding site that are not present in other opsins. These findings highlight the differential conformational versatility of human cone opsin pigments in the chromophore regeneration process, particularly compared to rhodopsin, and point to relevant functional, unexpected roles other than spectral tuning for the cone visual pigments. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
A transcriptomics investigation into pine reproductive organ development.
Niu, Shihui; Yuan, Huwei; Sun, Xinrui; Porth, Ilga; Li, Yue; El-Kassaby, Yousry A; Li, Wei
2016-02-01
The development of reproductive structures in gymnosperms is still poorly studied because of a lack of genomic information and useful genetic tools. The hermaphroditic reproductive structure derived from unisexual gymnosperms is an even less studied aspect of seed plant evolution. To extend our understanding of the molecular mechanism of hermaphroditism and the determination of sexual identity of conifer reproductive structures in general, unisexual and bisexual cones from Pinus tabuliformis were profiled for gene expression using 60K microarrays. Expression patterns of genes during progression of sexual cone development were analysed using RNA-seq. The results showed that, overall, the transcriptomes of male structures in bisexual cones were more similar to those of female cones. However, the expression of several MADS-box genes in the bisexual cones was similar to that of male cones at the more juvenile developmental stage, while despite these expression shifts, male structures of bisexual cones and normal male cones were histologically indistinguishable and cone development was continuous. This study represents a starting point for in-depth analysis of the molecular regulation of cone development and also the origin of hermaphroditism in pine. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
NASA Astrophysics Data System (ADS)
Jang, Soojeong; Moon, Y.-J.; Lee, Jae-Ok; Na, Hyeonock
2014-09-01
We have made a comparison between coronal mass ejection (CME)-associated shock propagations based on the Wang-Sheeley-Arge (WSA)-ENLIL model using three cone types and in situ observations. For this we use 28 full-halo CMEs from 2001 to 2002 whose cone parameters are determined and whose corresponding interplanetary shocks were observed at the Earth. We consider three different cone types (an asymmetric cone model, an ice cream cone model, and an elliptical cone model) to determine the 3-D CME cone parameters (radial velocity, angular width, and source location), which are the input values of the WSA-ENLIL model. The mean absolute error of the CME-associated shock travel times for the WSA-ENLIL model using the ice cream cone model is 9.9 h, about 1 h smaller than those of the other models. We compare the peak values and profiles of solar wind parameters (speed and density) with in situ observations. We find that the root-mean-square errors of solar wind peak speed and density for the ice cream and asymmetric cone models are about 190 km/s and 24/cm3, respectively. We estimate the cross correlations between the models and observations within a time lag of ± 2 days from the shock travel time. The correlation coefficients between the solar wind speeds from the WSA-ENLIL model using the three cone types and in situ observations are approximately 0.7, larger than those for solar wind density (cc ≈ 0.6). Our preliminary investigations show that the ice cream cone model seems to be better than the other cone models in terms of the input parameters of the WSA-ENLIL model.
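The lagged cross correlation used above can be computed by shifting one series against the other within the ± 2 day window and keeping the best Pearson coefficient. A minimal sketch assuming equal-length, uniformly sampled series:

```python
import numpy as np

def best_lag_correlation(model, obs, max_lag):
    """Best Pearson correlation between two series over integer shifts of up
    to +/- max_lag samples (e.g. +/- 48 samples of hourly data for 2 days)."""
    best_lag, best_r = 0, -1.0
    for lag in range(-max_lag, max_lag + 1):
        m = model[max(0, lag):len(model) + min(0, lag)]
        o = obs[max(0, -lag):len(obs) + min(0, -lag)]
        r = np.corrcoef(m, o)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r
```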
Gaudric, Alain; Woog, Kelly
2018-01-01
The aim of this article is to analyse cone density, spacing and arrangement using an adaptive optics flood illumination retina camera (rtx1™) on a healthy population. Cone density, cone spacing and packing arrangements were measured on the right retinas of 109 subjects at 2°, 3°, 4°, 5° and 6° of eccentricity along 4 meridians. The effects of eccentricity, meridian, axial length, spherical equivalent, gender and age were evaluated. Cone density decreased on average from 28 884 ± 3 692 cones/mm2 at 2° of eccentricity to 15 843 ± 1 598 cones/mm2 at 6°. A strong inter-individual variation, especially at 2°, was observed. No important difference in cone density was observed between the nasal and temporal meridians or between the superior and inferior meridians. However, the horizontal and vertical meridians differed by around 14% (t-test, p<0.0001). Cone density, expressed in units of area, decreased as a function of axial length (r2 = 0.60), but remained constant (r2 = 0.05) when expressed in terms of visual angle, supporting the hypothesis that the retina is stretched during the elongation of the eyeball. Gender did not modify the cone distribution. Cone density was slightly modified by age, but only at 2°, where the older group showed a smaller density (7%). Cone spacing increased from 6.49 ± 0.42 μm at 2° to 8.72 ± 0.45 μm at 6° of eccentricity. The mosaic of the retina is mainly triangularly arranged (i.e. cells with 5 to 7 neighbors) from 2° to 6°; around half of the cells had 6 neighbors. PMID:29338027
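As a rough consistency check, the reported densities and spacings match the geometry of an ideal triangular mosaic, where density D and center-to-center spacing s satisfy D = 2 / (√3 · s²). A short worked computation using the values from the abstract:

```python
import math

def triangular_spacing_um(density_per_mm2):
    """Spacing (micrometers) of an ideal triangular mosaic of the given
    density (cones/mm^2): s = sqrt(2 / (sqrt(3) * D))."""
    return 1000.0 * math.sqrt(2.0 / (math.sqrt(3.0) * density_per_mm2))

print(triangular_spacing_um(28884))  # ~6.3 um vs 6.49 um reported at 2 deg
print(triangular_spacing_um(15843))  # ~8.5 um vs 8.72 um reported at 6 deg
```

The small surplus in the measured spacings is consistent with a mosaic that is predominantly, but not perfectly, triangular.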
Rucker, F. J.; Osorio, D.
2009-01-01
Longitudinal chromatic aberration is a well-known imperfection of visual optics, but the consequences in natural conditions, and for the evolution of receptor spectral sensitivities are less well understood. This paper examines how chromatic aberration affects image quality in the middle-wavelength sensitive (M-) cones, viewing broad-band spectra, over a range of spatial frequencies and focal planes. We also model the effects on M-cone contrast of moving the M-cone fundamental relative to the long- and middle-wavelength (L- and M-cone) fundamentals, while the eye is accommodated at different focal planes or at a focal plane that maximizes luminance contrast. When the focal plane shifts towards longer (650 nm) or shorter wavelengths (420 nm) the effects on M-cone contrast are large: longitudinal chromatic aberration causes total loss of M-cone contrast above 10 to 20 c/d. In comparison, the shift of the M-cone fundamental causes smaller effects on M-cone contrast. At 10 c/d a shift in the peak of the M-cone spectrum from 560 nm to 460 nm decreases M-cone contrast by 30%, while a 10 nm blue-shift causes only a minor loss of contrast. However, a noticeable loss of contrast may be seen if the eye is focused at focal planes other than that which maximizes luminance contrast. The presence of separate long- and middle-wavelength sensitive cones therefore has a small, but not insignificant cost to the retinal image via longitudinal chromatic aberration. This aberration may therefore be a factor limiting evolution of visual pigments and trichromatic color vision. PMID:18639571
Tomizuka, Junko; Tachibanaki, Shuji; Kawamura, Satoru
2015-01-01
Visual pigment in photoreceptors is activated by light. Activated visual pigment (R*) is believed to be inactivated by phosphorylation of R* with subsequent binding of arrestin. There are two types of photoreceptors, rods and cones, in the vertebrate retina, and they express different subtypes of arrestin, rod and cone type. To understand the difference in the function between rod- and cone-type arrestin, we first identified the subtype of arrestins expressed in rods and cones in carp retina. We found that two rod-type arrestins, rArr1 and rArr2, are co-expressed in a rod and that a cone-type arrestin, cArr1, is expressed in blue- and UV-sensitive cones; the other cone-type arrestin, cArr2, is expressed in red- and green-sensitive cones. We quantified each arrestin subtype and estimated its concentration in the outer segment of a rod or a cone in the dark; they were ∼0.25 mM (rArr1 plus rArr2) in a rod and 0.6–0.8 mM (cArr1 or cArr2) in a cone. The effect of each arrestin was examined. In contrast to previous studies, both rod and cone arrestins suppressed the activation of transducin in the absence of visual pigment phosphorylation, and all of the arrestins examined (rArr1, rArr2, and cArr2) bound transiently to most probably nonphosphorylated R*. One rod arrestin, rArr2, bound firmly to phosphorylated pigment, and the other two, rArr1 and cArr2, once bound to phosphorylated R* but dissociated from it during incubation. Our results suggested a novel mechanism of arrestin effect on the suppression of the R* activity in both rods and cones. PMID:25713141
NASA Astrophysics Data System (ADS)
Riggs, N. R.; Duffield, W. A.
2008-12-01
Scoria cone eruptions are generally modeled as a simple succession from explosive eruption to form the cone to passive effusion of lava, generally from the base of the cone. Sector collapse of scoria cones, wherein parts of the cone are rafted on a lava flow, is increasingly recognized as common, but the reasons that a cone may not be rebuilt are poorly understood. Red Mountain volcano is a Pleistocene scoria cone in the San Francisco Volcanic Field of northern Arizona, USA. The cone lies along the trace of a major steeply dipping normal fault that originated during Proterozoic tectonism and was reactivated in Tertiary time. The earliest phase of eruption at Red Mountain was typical "Strombolian", forming a cone that was followed by or possibly synchronous with lava effusion, toward the west from the base of the cone. Rafting then ensued as the west side of the cone collapsed; approximately 15% of the cone is preserved in mounds as much as 30 m high. Rafting was extensive enough to remove most of the cone over the vent area, which effectively reduced the pressure cap on the magma conduit. Resultant low fountaining fed clastogenic lava flows and minor scoria fallback. Clastogenic flows traveled as far as 4 km and now form a cliff 30-40 m high at the edge of the lava platform. Although several possibilities explain the change in vent dynamics and eruptive style, we favor the interpretation that an increase in magma-rise rate caused collapse of the cone. The abrupt removal of 300 m of material over the vent removed a conduit "cork" and low fountaining began. Magma that had erupted effusively suddenly became explosive. This aspect of scoria cone rafting at Red Mountain is broadly similar to sector collapse followed by explosive eruption in larger systems. A steep-walled, 150-m-high amphitheatre on the northeast side of Red Mountain exposes weakly to strongly altered scoria cemented by calcite, iron, and zeolites. We suggest that vapor-phase alteration was responsible for sealing fine-grained ash beds in the cone, and a pressurized system developed. Residual heat from a dike that was emplaced as part of the magmatic activity provided heat that drove groundwater along the regional fault up into the cone. Eventually the overpressurized system exploded in a phreatic eruption that created the amphitheatre, which has subsequently been enlarged by water and wind erosion. The combined sequence of events at Red Mountain illustrates some of the complexities in monogenetic scoria cone eruptions that have received little attention to date.
NASA Astrophysics Data System (ADS)
Guo, Minghuan; Sun, Feihu; Wang, Zhifeng
2017-06-01
The solar tower concentrator is mainly composed of the central receiver on the tower top and the heliostat field around the tower. The optical efficiencies of a solar tower concentrator are important to the overall thermal performance of the solar tower collector, and the aperture plane of a cavity receiver or the (inner or external) absorbing surface of any central receiver is a key interface of energy flux. It is therefore necessary to simulate and analyze the concentrated, time-varying solar flux density distributions on the flat or curved receiving surface of the collector, with the main optical errors taken into account. The transient concentrated solar flux on the receiving surface is the superimposition of the flux density distributions of all the normally operating heliostats in the field. In this paper, we introduce a new backward ray tracing (BRT) method combined with a lumped effective solar cone to simulate the flux density map on the receiving surface. For BRT, bundles of rays are launched at the receiving-surface points of interest, strike the valid cell centers among the uniformly sampled mirror cell centers on the heliostat mirror surfaces, and are then directed into the effective solar cone around the incident sun-beam direction after reflection. All optical errors are convolved into the effective solar cone, whose brightness distribution is assumed here to be circular Gaussian. The mirror curvature can be adequately represented by a number of local normal vectors at the mirror cell centers of a heliostat. The shading and blocking of a heliostat's mirror region by neighboring heliostats, as well as the tower's shadow on the heliostat mirror, are all computed on a flat-ground-plane platform, i.e., by projecting the mirror contours and the envelope cylinder of the tower onto the horizontal ground plane along the sun-beam incident direction or along the reflection directions. If the shading projection of a sampled mirror point of the current heliostat falls inside the shadow cast by a neighboring heliostat or by the tower, this mirror point is shaded from the incident sun beam. A code based on this new ray tracing method was developed in MATLAB for the 1 MW Badaling solar tower power plant in Beijing. There are 100 azimuth-elevation tracking heliostats in the solar field, and the tower is 118 m high. The mirror surface of each heliostat is 10 m wide and 10 m long, composed of 8 rows × 8 columns of square mirror facets, each 1.25 m × 1.25 m. The code was verified by two sets of sun-beam concentrating experiments of the heliostat field on June 14, 2015. One set of optical experiments was conducted on some typical heliostats to verify the shading and blocking computation of the code, since this is the most complicated, time-consuming, and important optical computing section of the code. The other set of solar concentrating tests was carried out on the field-center heliostat (No. 78) to verify the simulated solar flux images on the white target region of the northern wall of the tower. The target center is 74.5 m above the ground plane.
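A heavily simplified sketch of the backward step described above: from one receiving-surface point, trace to each mirror cell, reflect about the cell normal, and weight the ray by a circular-Gaussian effective solar cone. Shading and blocking tests, facet geometry, and proper radiometric normalization are all omitted; every name and constant here is illustrative:

```python
import numpy as np

def brt_flux(point, cell_centers, cell_normals, sun_dir, cell_area,
             dni=1000.0, sigma=4.65e-3, reflectivity=0.9):
    """Schematic BRT flux estimate (W/m^2) at one receiver point. sun_dir is
    the unit propagation direction of sunlight; sigma (rad) is the standard
    deviation of the effective solar cone with all optical errors folded in."""
    to_cells = cell_centers - point               # rays: point -> mirror cells
    dist = np.linalg.norm(to_cells, axis=1)
    d = to_cells / dist[:, None]
    # reflect each backward ray about its cell normal
    refl = d - 2.0 * np.sum(d * cell_normals, axis=1)[:, None] * cell_normals
    # angular deviation of the reflected ray from the reversed sun direction
    theta = np.arccos(np.clip(refl @ (-sun_dir), -1.0, 1.0))
    brightness = np.exp(-0.5 * (theta / sigma) ** 2) / (2 * np.pi * sigma ** 2)
    cos_cell = np.abs(np.sum(d * cell_normals, axis=1))
    omega = cell_area * cos_cell / dist ** 2      # solid angle of each cell
    return dni * reflectivity * np.sum(brightness * omega)
```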
Diagnosis of Normal and Abnormal Color Vision with Cone-Specific VEPs.
Rabin, Jeff C; Kryder, Andrew C; Lam, Dan
2016-05-01
Normal color vision depends on normal long wavelength (L), middle wavelength (M), and short wavelength sensitive (S) cones. Hereditary "red-green" color vision deficiency (CVD) is due to a shift in peak sensitivity or lack of L or M cones. Hereditary S cone CVD is rare but can be acquired as an early sign of disease. Current tests detect CVD but few diagnose type or severity, critical for linking performance to real-world demands. The anomaloscope and newer subjective tests quantify CVD but are not applicable to infants or cognitively impaired patients. Our purpose was to develop an objective test of CVD with sensitivity and specificity comparable to current tests. A calibrated visual-evoked potential (VEP) display and Food and Drug Administration-approved system was used to record L, M, and S cone-specific pattern-onset VEPs from 18 color vision normals (CVNs) and 13 hereditary CVDs. VEP amplitudes and latencies were compared between groups to establish VEP sensitivity and specificity. Cone VEPs show 100% sensitivity for diagnosis of CVD and 94% specificity for confirming CVN. L cone (protan) CVDs showed a significant increase in L cone latency (53.1 msec, P < 0.003) and decreased amplitude (10.8 uV, P < 0.0000005) but normal M and S cone VEPs ( P > 0.31). M cone (deutan) CVDs showed a significant increase in M cone latency (31.0 msec, P < 0.000004) and decreased amplitude (8.4 uV, P < 0.006) but normal L and S cone VEPs ( P > 0.29). Cone-specific VEPs offer a rapid, objective test to diagnose hereditary CVD and show potential for detecting acquired CVD in various diseases. This paper describes the efficacy of cone-specific color VEPs for quantification of normal and abnormal color vision. The rapid, objective nature of this approach makes it suitable for detecting color sensitivity loss in infants and the cognitively impaired.
NASA Astrophysics Data System (ADS)
Sparice, Domenico; Scarpati, Claudio; Perrotta, Annamaria; Mazzeo, Fabio Carmine; Calvert, Andrew T.; Lanphere, Marvin A.
2017-11-01
Pre-caldera (> 22 ka) lateral activity at Somma-Vesuvius is related to scoria- and spatter-cone forming events of monogenetic or polygenetic nature. A new stratigraphic, sedimentological, textural and lithofacies investigation was performed on five parasitic cones (the Pollena cones, the Traianello cone, the S. Maria a Castello cone and the recently found Terzigno cone) occurring below the Pomici di Base (22 ka) Plinian products emplaced during the first caldera collapse at Somma-Vesuvius. A new Ar/Ar age of 23.6 ± 0.3 ka obtained for the Traianello cone, as well as the absence of a paleosol or reworked material between the S. Maria a Castello cone and the Pomici di Base deposits, suggests that these cone-forming eruptions occurred near the upper limit of the pre-caldera period (22-39 ka). The stratigraphy of three of these eccentric cones (the Pollena cones and the Traianello cone) exhibits erosion surfaces, exotic tephras, volcaniclastic layers, paleosols, and unconformities and paraconformities between superimposed eruptive units, revealing their multi-phase, polygenetic evolution related to the activation of separate vents and periods of quiescence. These eccentric cones had been described by previous authors as composed of scoria deposits and purely effusive lavas. The lavas are here re-interpreted as welded (lava-like) horizons composed of coalesced spatter fragments whose pyroclastic nature is locally revealed by relicts of original fragments and remnants of clast outlines. These welded horizons locally show rheomorphic structures, allowing them to be interpreted as emplaced clastogenic lava flows. The lava-like facies grades, upward and downward, into less welded facies composed of agglutinated to unwelded spatter horizons in which clast outlines are increasingly discernible. Such textural characteristics and facies variations are consistent with continuous fall deposition during Hawaiian fire-fountain episodes alternating with Strombolian phases that emplaced loose scoria deposits. High enrichment factor values measured in the scoria deposits imply the ejection of a large proportion of ash even during Strombolian events.
Maintenance costs of serotiny in a variably serotinous pine: The role of water supply.
Martín-Sanz, Ruth C; Callejas-Díaz, Marta; Tonnabel, Jeanne; Climent, José M
2017-01-01
Serotiny is an important adaptation for plants in fire-prone environments. However, different mechanisms also induce the opening of serotinous cones in the absence of fire in variably serotinous species. Xeriscence (cone opening driven by dry and hot conditions) is considered to be mediated only by the external environment, but endogenous factors could also play a significant role. Using the variably serotinous Pinus halepensis as our model species, we determined the effects of cone age and scale density on cone opening, and using in-situ and ex-situ manipulative experiments we investigated the role of water availability in the opening of serotinous cones. We hypothesized that loss of connection between the cones and the branch through the peduncles, or the absence of water supply, could induce faster cone opening. Results showed that older cones lost more water and opened at lower temperatures, with no influence of scale density. Both field and chamber manipulative experiments (using paired cones of the same whorl) confirmed that water intake through the peduncles significantly affected the pace of cone opening, such that lack of water supply sped up cone dehiscence. However, this was true for weakly serotinous provenances, which are more common in this species, while highly serotinous provenances were indifferent to this effect in the field test. All our results support that cone serotiny in P. halepensis involves the allocation of water to the cones, which is highly consistent with the previously observed environmental effects. Importantly, the existence of maintenance costs of serotinous cones has strong implications for the effects of climate change on the resilience of natural populations, via modifications of the canopy seed banks and recruitment after stand-replacing fires. Moreover, evolutionary models for serotiny in P. halepensis must take into account the significant contribution of maintenance costs to the complex interaction between genotype and environment.
Cone Photoreceptor Packing Density and the Outer Nuclear Layer Thickness in Healthy Subjects
Chui, Toco Y. P.; Song, Hongxin; Clark, Christopher A.; Papay, Joel A.; Burns, Stephen A.; Elsner, Ann E.
2012-01-01
Purpose. We evaluated the relationship between cone photoreceptor packing density and outer nuclear layer (ONL) thickness within the central 15 degrees. Methods. Individual differences in cone packing density and ONL thickness were examined in 8 younger and 8 older healthy subjects, mean age 27.2 versus 56.2 years. Cone packing density was obtained using an adaptive optics scanning laser ophthalmoscope (AOSLO). The ONL thickness measurements included the ONL and the Henle fiber layer (ONL + HFL), and were obtained using spectral domain optical coherence tomography (SDOCT) and custom segmentation software. Results. There were sizeable individual differences in cone packing density and ONL + HFL thickness. Older subjects had on average lower cone packing densities but thicker ONL + HFL measurements. Cone packing density and ONL + HFL thickness decreased with increasing retinal eccentricity. The ratio of cone packing density to ONL² was larger for the younger subject group and decreased with retinal eccentricity. Conclusions. The individual differences in cone packing density and ONL + HFL thickness are consistent with aging changes, indicating that normative aging data are necessary for fine comparisons in the early stages of disease or of response to treatment. Our finding of ONL + HFL thickness increasing with aging is inconsistent with the hypothesis that ONL measurements with SDOCT depend only on the number of functioning cones, since in our older group cones were fewer but thickness was greater. PMID:22570340
Comparison of Asymmetric and Ice-cream Cone Models for Halo Coronal Mass Ejections
NASA Astrophysics Data System (ADS)
Na, H.; Moon, Y.
2011-12-01
Halo coronal mass ejections (HCMEs) are a major cause of geomagnetic storms. To minimize the projection effect in coronagraph observations, several cone models have been suggested: an ice-cream cone model, an asymmetric cone model, etc. These models allow us to determine the three-dimensional parameters of HCMEs, such as radial speed, angular width, and the angle between the sky plane and the central axis of the cone. In this study, we compare these parameters obtained from different models using 48 well-observed HCMEs from 2001 to 2002, and we obtain the root mean square error (RMS error) between measured projection speeds and calculated projection speeds for both cone models. As a result, we find that the radial speeds obtained from the models are well correlated with each other (R = 0.86), and the correlation coefficient of angular width is 0.6. The correlation coefficient of the angle between the sky plane and the central axis of the cone is 0.31, much smaller than expected. The reason may be that the source locations of the asymmetric cone model are distributed near the center, while those of the ice-cream cone model are located in a wide range. The average RMS error of the asymmetric cone model (85.6 km/s) is slightly smaller than that of the ice-cream cone model (87.8 km/s).
Volcanic cones in Hydraotes chaos : implications for the chaotic terrains formation
NASA Astrophysics Data System (ADS)
Meresse, S.; Costard, F.; Mangold, N.; Masson, P.; Neukum, G.
2006-12-01
Numerous geologic scenarios have been proposed for the formation of the chaotic terrains. They include (1) sub-ice volcanism and other magma-ice interactions and (2) catastrophic release of groundwater from confined aquifers. The lack of volcanic morphology in the chaos was a handicap for the hypothesis of magma-ice interactions, but HRSC (High Resolution Stereo Camera) images have recently revealed possible volcanic cones inside Hydraotes chaos. About thirty cones lie on the lowest parts of the chaos at elevations between -4300 and -5100 meters. They have basal diameters of 500-1900 m and heights exceeding 100 m. They are observed on young surfaces: the smooth southern floor and inside the narrow valleys separating the mesas. The cones are relatively fresh. Similar morphologies of small cone-shaped structures have previously been identified in the northern lowlands of Mars (Chryse, Acidalia, Amazonis, Isidis and Elysium Planitia), but their origin remains uncertain. A number of volcanic or cold-climate landforms have been proposed as potential terrestrial analogues: Icelandic pseudocraters (or rootless cones), cinder cones, tuff cones, pingos and spatter cones. The morphologic measurements made on the Hydraotes cones argue for a volcanic origin by comparison with terrestrial analogues. These first volcanic cones observed in Hydraotes chaos suggest that volcanic or subvolcanic activity might have played an important part in the formation of the chaotic terrains and the genesis of the outflow channels.
Murakami, Y; Ikeda, Y; Nakatake, S; Tachibana, T; Fujiwara, K; Yoshida, N; Notomi, S; Nakao, S; Hisatomi, T; Miller, J W; Vavvas, DG; Sonoda, KH; Ishibashi, T
2015-01-01
Retinitis pigmentosa (RP) refers to a group of inherited retinal degenerations resulting from rod and cone photoreceptor cell death. Rod cell death due to deleterious genetic mutations has been shown to occur mainly through apoptosis, whereas the mechanisms and features of the secondary cone cell death have not been fully elucidated. Our previous study showed that cone cell death in rd10 mice, an animal model of RP, involves necrotic features and is partly mediated by the receptor interacting protein kinase. However, the relevance of necrotic cone cell death to human RP patients remains unknown. In the present study, we showed that dying cone cells in rd10 mice exhibited cellular enlargement, along with necrotic changes such as cellular swelling and mitochondrial rupture. In human eyes, live imaging of cone cells by adaptive optics scanning laser ophthalmoscopy revealed significantly increased percentages of enlarged cone cells in the RP patients compared with the control subjects. The vitreous of the RP patients contained significantly higher levels of high-mobility group box-1, which is released extracellularly in association with necrotic cell death. These findings suggest that necrotic enlargement of cone cells is involved in the process of cone degeneration, and that necrosis may be a novel target to prevent or delay the loss of cone-mediated central vision in RP. PMID:27551484
Ahn, Y C; Ju, S G; Kim, D Y; Choi, D R; Huh, S J; Park, Y H; Lim, D H; Kim, M K
1999-05-01
In stereotactic radiotherapy using the X-Knife system, the commercially supplied collimator cone system had a few mechanical limitations. The authors developed new collimator cones, named "SMC type" collimator cones, to overcome these limitations. We made use of cadmium-free cerrobend alloy within a stainless steel cylindrical housing. We made nine cones of relatively larger sizes (3.0 cm to 7.0 cm in diameter) and shorter length, with bigger clearance from the isocenter than the commercial cones. The cone housing and the collimator cones were designed to insert into the wedge mount of the gantry head to enable double-exposure linac-gram taking. The mechanical accuracy of pointing to the isocenter was tested by ball and cone rotation tests, and dosimetric measurements were performed, all with satisfactory results. A new quality assurance procedure using linac-grams of patients at the actual treatment setup was attempted: 10 sets of AP and lateral linac-grams were taken, and the overall mechanical isocenter accuracy was excellent (average error = 0.4 +/- 0.2 mm). We developed the SMC type collimator cone system mainly for fractionated stereotactic radiation therapy. The new cones' mechanical accuracy and physical properties were satisfactory for clinical use, and verification of the isocenter accuracy at the actual treatment setup has become possible.
Cone Storage and Seed Quality in Longleaf Pine
F.T. Bonner
1987-01-01
Immature cones of longleaf pine (Pinus palustris Mill.) can be stored for at least 5 weeks without adversely affecting extraction or seed quality. Cone moisture should be below 50 percent before using heat to open cones.
Carroll, Joseph; Rossi, Ethan A; Porter, Jason; Neitz, Jay; Roorda, Austin; Williams, David R; Neitz, Maureen
2010-09-15
Blue cone monochromacy (BCM) is an X-linked condition in which long- (L) and middle- (M) wavelength-sensitive cone function is absent. Due to the X-linked nature of the condition, female carriers are spared from a full manifestation of the associated defects but can show visual symptoms, including abnormal cone electroretinograms. Here we imaged the cone mosaic in four females carrying an L/M array with deletion of the locus control region, resulting in an absence of L/M opsin gene expression (effectively acting as a cone opsin knockout). On average, they had cone mosaics with reduced density and disrupted organization compared to normal trichromats. This suggests that the absence of opsin in a subset of cones results in their early degeneration, with X-inactivation the likely mechanism underlying phenotypic variability in BCM carriers. Copyright 2010 Elsevier Ltd. All rights reserved.
VLSI single-chip (255,223) Reed-Solomon encoder with interleaver
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor)
1990-01-01
The invention relates to a concatenated Reed-Solomon/convolutional encoding system consisting of a Reed-Solomon outer code and a convolutional inner code for downlink telemetry in space missions, and more particularly to a Reed-Solomon encoder with programmable interleaving of the information symbols and code correction symbols to combat error bursts in the Viterbi decoder.
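The interleaving idea can be pictured with a plain block (row/column) interleaver: write I Reed-Solomon codewords row-wise and read column-wise, so a channel error burst is spread across codewords and each sees only a few symbol errors. A schematic sketch, not the patented VLSI design; the depth and layout are illustrative:

```python
import numpy as np

def interleave(symbols, depth):
    """Write `depth` consecutive codewords as rows, read out by columns."""
    return np.reshape(symbols, (depth, -1)).T.ravel()

def deinterleave(symbols, depth):
    """Inverse: regroup column-read symbols back into row-wise codewords."""
    return np.reshape(symbols, (-1, depth)).T.ravel()
```

With the (255,223) code correcting 16 symbol errors per codeword, an interleaving depth of I multiplies the correctable burst length by roughly I.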
USDA-ARS?s Scientific Manuscript database
It is challenging to achieve rapid and accurate processing of large amounts of hyperspectral image data. This research was aimed to develop a novel classification method by employing deep feature representation with the stacked sparse auto-encoder (SSAE) and the SSAE combined with convolutional neur...
A Real-Time Convolution Algorithm and Architecture with Applications in SAR Processing
1993-10-01
[OCR fragments of the report's reference list:] ... multidimensional formulation of the DFT and convolution. IEEE-ASSP, ASSP-25(3):239-242, June 1977. [6] P. Hoogenboom et al. Definition study PHARUS: final... algorithms and the role of the tensor product. IEEE-ASSP, ASSP-40(12):2921-2930, December 1992. [8] P. Hoogenboom, P. Snoeij, P.J. Koomen, and H
Two-level convolution formula for nuclear structure function
NASA Astrophysics Data System (ADS)
Ma, Boqiang
1990-05-01
A two-level convolution formula for the nuclear structure function is derived by treating the nucleus as a composite system of baryons and mesons, which are themselves composite systems of quarks and gluons. The results show that the European Muon Collaboration (EMC) effect cannot be explained by nuclear effects such as nucleon Fermi motion and nuclear binding contributions.
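For orientation, a generic two-level convolution has the shape below (our notation, offered as a hedged sketch rather than the paper's exact formula): the nuclear structure function is a convolution over the constituent baryons and mesons, whose structure functions are in turn built from their quark distributions.

```latex
F_2^A(x) \;=\; \sum_{c\,\in\,\{\mathrm{baryons},\,\mathrm{mesons}\}}
\int_x^{A} dy\; f_{c/A}(y)\, F_2^{c}\!\left(\frac{x}{y}\right),
\qquad
F_2^{c}(z) \;=\; \sum_q e_q^2\, z\, q_c(z)
```

Here f_{c/A}(y) is the light-cone momentum-fraction distribution of constituent c in the nucleus and q_c(z) the quark distribution inside c; Fermi motion and binding enter through the shape of f_{c/A}.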
DSN telemetry system performance with convolutionally coded data
NASA Technical Reports Server (NTRS)
Mulhall, B. D. L.; Benjauthrit, B.; Greenhall, C. A.; Kuma, D. M.; Lam, J. K.; Wong, J. S.; Urech, J.; Vit, L. D.
1975-01-01
The results obtained to date and the plans for future experiments for the DSN telemetry system were presented. The performance of the DSN telemetry system in decoding convolutionally coded data by both sequential and maximum likelihood techniques is being determined by testing at various deep space stations. The evaluation of performance models is also an objective of this activity.
Long-term changes in flowering and cone production by longleaf pine
William D. Boyer
1998-01-01
Cone production by longleaf pine has been followed for up to 30 years in regeneration areas at five to nine coastal plain sites from North Carolina to Louisiana. A rapid increase in the size and frequency of cone crops has occurred since 1986, following 20 years of relative stability. Cone production for the last 10 years averaged 36 cones per...
Two-dimensional convolute integers for analytical instrumentation
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1982-01-01
As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need arises for data processing logic that can appropriately address the data. Two-dimensional measurements offer enhanced unknown-mixture analysis capability as a result of their greater spectral information content over two one-dimensional methods taken separately. Two-dimensional convolute integers are an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical with their one-dimensional counterpart, that is, as a weighted nearest-neighbor moving average with zero phase shift, using convolute integer (universal number) weighting coefficients.
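As a concrete, minimal illustration of a zero-phase two-dimensional convolute filter, the sketch below forms a 5 x 5 low-pass kernel as the separable product of the classic 1-D quadratic Savitzky-Golay weights (-3, 12, 17, 12, -3)/35; the window size and the separable construction are our assumptions, not the paper's tabulated coefficients.

```python
# A minimal sketch of a two-dimensional "convolute integer" smoothing
# filter, assuming a 5x5 window built from 1-D Savitzky-Golay weights.
import numpy as np
from scipy.signal import convolve2d

sg1d = np.array([-3.0, 12.0, 17.0, 12.0, -3.0]) / 35.0
kernel2d = np.outer(sg1d, sg1d)          # 5x5 low-pass convolute weights

data = np.random.rand(64, 64)            # stand-in for a 2-D spectrum/image
smoothed = convolve2d(data, kernel2d, mode="same", boundary="symm")
```

Because the kernel is symmetric and applied as a centered moving window, the filter introduces no phase shift, matching the one-dimensional behavior the abstract describes.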
A convolutional neural network neutrino event classifier
Aurisano, A.; Radovic, A.; Rocco, D.; ...
2016-09-01
Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.
Airplane detection in remote sensing images using convolutional neural networks
NASA Astrophysics Data System (ADS)
Ouyang, Chao; Chen, Zhong; Zhang, Feng; Zhang, Yifei
2018-03-01
Airplane detection in remote sensing images remains a challenging problem that has attracted considerable interest from researchers. In this paper we propose an effective method for detecting airplanes in remote sensing images using convolutional neural networks. Deep learning methods show greater advantages in target detection than traditional methods with the rise of deep neural networks, and we explain why this happens. To improve detection performance, we combine a region proposal algorithm with convolutional neural networks. In the training phase, we divide the background into multiple classes rather than one, which reduces false alarms. Our experimental results show that the proposed method is effective and robust in detecting airplanes.
Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber
Acciarri, R.; Adams, C.; An, R.; ...
2017-03-14
Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.
Video-based convolutional neural networks for activity recognition from robot-centric videos
NASA Astrophysics Data System (ADS)
Ryoo, M. S.; Matthies, Larry
2016-05-01
In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks to learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs using first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.
Gas Classification Using Deep Convolutional Neural Networks.
Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin
2018-01-08
In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. The proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers; a pooling layer; and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method provides higher classification accuracy than comparable Support Vector Machine (SVM) and Multiple Layer Perceptron (MLP) methods.
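A minimal sketch of the layout described above (six convolutional blocks, a pooling layer, a fully-connected classifier) is given below in PyTorch. The channel counts, kernel sizes, 1-D input format, and block internals are illustrative assumptions, not the authors' exact GasNet configuration.

```python
# A hedged sketch of a GasNet-like model; all sizes are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # one "block": two conv layers with batch norm and ReLU (six sub-layers)
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm1d(out_ch), nn.ReLU(),
        nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm1d(out_ch), nn.ReLU(),
    )

class GasNetSketch(nn.Module):
    def __init__(self, n_classes=10, in_ch=8):
        super().__init__()
        chans = [in_ch, 16, 32, 64, 64, 128, 128]
        self.blocks = nn.Sequential(
            *[conv_block(chans[i], chans[i + 1]) for i in range(6)]
        )
        self.pool = nn.AdaptiveAvgPool1d(1)        # the pooling layer
        self.fc = nn.Linear(chans[-1], n_classes)  # the fully-connected layer

    def forward(self, x):                          # x: (batch, sensors, time)
        z = self.pool(self.blocks(x)).squeeze(-1)
        return self.fc(z)

logits = GasNetSketch()(torch.randn(4, 8, 256))    # 4 e-nose readings
```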
Applications of deep convolutional neural networks to digitized natural history collections.
Schuettpelz, Eric; Frandsen, Paul B; Dikow, Rebecca B; Brown, Abel; Orli, Sylvia; Peters, Melinda; Metallo, Adam; Funk, Vicki A; Dorr, Laurence J
2017-01-01
Natural history collections contain data that are critical for many scientific endeavors. Recent efforts in mass digitization are generating large datasets from these collections that can provide unprecedented insight. Here, we present examples of how deep convolutional neural networks can be applied in analyses of imaged herbarium specimens. We first demonstrate that a convolutional neural network can detect mercury-stained specimens across a collection with 90% accuracy. We then show that such a network can correctly distinguish two morphologically similar plant families 96% of the time. Discarding the most challenging specimen images increases accuracy to 94% and 99%, respectively. These results highlight the importance of mass digitization and deep learning approaches and reveal how they can together deliver powerful new investigative tools.
Clinicopathologic correlations in Alibert-type mycosis fungoides.
Eng, A M; Blekys, I; Worobec, S M
1981-06-01
Five cases of mycosis fungoides of the Alibert type were studied by taking multiple biopsy specimens at different stages of the disease. Large, hyperchromatic, slightly irregular mononuclear cells were the most frequent cells. Ultrastructurally, these cells were only slightly convoluted, had prominent heterochromatin banding at the nuclear membrane, and unremarkable cytoplasmic organelles. Highly convoluted cerebriform nucleated cells were few. Large regular vesicular histiocytes were prominent in the early stages; ultrastructurally, they showed evenly distributed euchromatin. Epidermotropism was equally as important as Pautrier's abscess as a hallmark of the disease. Stereologic techniques comparing the infiltrate with regard to size and convolution of cells in all stages of mycosis fungoides with infiltrates seen in a variety of benign dermatoses showed no statistically significant differences.
Deep Learning with Hierarchical Convolutional Factor Analysis
Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence
2013-01-01
Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342
Comparison of the WSA-ENLIL model with three CME cone types
NASA Astrophysics Data System (ADS)
Jang, Soojeong; Moon, Y.; Na, H.
2013-07-01
We have made a comparison of CME-associated shock propagation based on the WSA-ENLIL model with three cone types, using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters as well as associated interplanetary (IP) shocks. We consider three cone types (an asymmetric cone model, an ice-cream cone model, and an elliptical cone model) to determine the 3-D CME parameters (radial velocity, angular width, and source location) that serve as inputs to the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the asymmetric cone model is 10.6 hours, about 1 hour smaller than those of the other models. Their ensemble average MAE is 9.5 hours; however, this value is still larger than that (8.7 hours) of the empirical model of Kim et al. (2007). We will compare the IP shock velocities and densities with ACE in-situ measurements and discuss them in terms of the prediction of geomagnetic storms.
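For clarity, the arrival-time statistic quoted above is a mean absolute error over the event set; a short illustration with hypothetical numbers (not the study's data):

```python
# Hypothetical predicted/observed shock arrival times in hours; the
# statistic is the same MAE as the quoted 10.6 h and 9.5 h values.
import numpy as np

t_pred = np.array([52.0, 40.5, 61.0])
t_obs = np.array([48.0, 47.0, 55.5])
mae = np.mean(np.abs(t_pred - t_obs))   # -> (4 + 6.5 + 5.5)/3 ~ 5.33 h here
```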
Clérin, Emmanuelle; Wicker, Nicolas; Mohand-Saïd, Saddek; Poch, Olivier; Sahel, José-Alain; Léveillard, Thierry
2011-12-20
Retinitis pigmentosa is characterized by the sequential loss of rod and cone photoreceptors. The preservation of cones would prevent blindness due to their essential role in human vision. Rod-derived Cone Viability Factor is a thioredoxin-like protein that is secreted by rods and is involved in cone survival. To validate the activity of Rod-derived Cone Viability Factors (RdCVFs) as therapeutic agents for treating retinitis pigmentosa, we have developed e-conome, an automated cell-counting platform for retinal flat mounts of rodent models of cone degeneration. This automated quantification method allows for faster data analysis, thereby accelerating translational research. An inverted fluorescence microscope, motorized and coupled to a CCD camera, records images of cones labeled with fluorescent peanut agglutinin lectin on flat-mounted retinas. In an average of 300 fields per retina, nine Z-planes at 40x magnification are acquired after a two-stage autofocus performed individually for each field. The projection of the stack of nine images is thresholded and filtered to exclude aberrant images based on preset variables. Cones are identified by processing the resulting image using 13 empirically determined variables, and cone density is calculated over the 300 fields. The method was validated by comparison to conventional stereological counting: the decrease in cone density in the rd1 mouse was equivalent to the decrease determined stereologically. We also studied the spatiotemporal pattern of cone degeneration in the rd1 mouse and show that while the reduction in cone density starts in the central part of the retina, cone degeneration progresses at the same speed over the whole retinal surface. We finally show that for mice with an inactivation of the Nucleoredoxin-like genes Nxnl1 or Nxnl2 encoding RdCVFs, the loss of cones is more pronounced in the ventral retina. The automated platform e-conome used here for retinal disease is a tool that can broadly accelerate translational research for neurodegenerative diseases.
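A minimal sketch of this kind of project-threshold-filter-count pipeline is shown below; the Otsu threshold, blob-size limits, and field size are illustrative stand-ins for the platform's 13 empirically determined variables.

```python
# A hedged sketch of an automated cone-counting step for one field;
# thresholds and size limits are assumptions, not the platform's values.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def count_cones(zstack, min_area=5, max_area=200):
    """zstack: (9, H, W) fluorescence planes for one field."""
    proj = zstack.max(axis=0)                  # project the 9-plane stack
    mask = proj > threshold_otsu(proj)         # global threshold
    labeled = label(mask)
    # keep blobs whose size is plausible for a PNA-labeled cone
    cones = [r for r in regionprops(labeled)
             if min_area <= r.area <= max_area]
    return len(cones)

field = np.random.rand(9, 512, 512)            # stand-in for one field
density = count_cones(field) / 0.01            # cones/mm^2, assuming a
                                               # 0.1 mm x 0.1 mm field
```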
Variation of cone photoreceptor packing density with retinal eccentricity and age.
Song, Hongxin; Chui, Toco Yuen Ping; Zhong, Zhangyi; Elsner, Ann E; Burns, Stephen A
2011-09-01
To study the variation of cone photoreceptor packing density across the retina in healthy subjects of different ages. High-resolution adaptive optics scanning laser ophthalmoscope (AOSLO) systems were used to systematically image the retinas of two groups of subjects of different ages. Ten younger subjects (age range, 22-35 years) and 10 older subjects (age range, 50-65 years) were tested. Strips of cone photoreceptors, approximately 12° × 1.8° long were imaged for each of the four primary retinal meridians: superior, inferior, nasal, and temporal. Cone photoreceptors within the strips were counted, and cone photoreceptor packing density was calculated. Statistical analysis (three-way ANOVA) was used to calculate the interaction for cone photoreceptor packing density between age, meridian, and eccentricity. As expected, cone photoreceptor packing density was higher close to the fovea and decreased with increasing retinal eccentricity from 0.18 to 3.5 mm (∼0.6-12°). Older subjects had approximately 75% of the cone density at 0.18 mm (∼0.6°), and this difference decreased rapidly with eccentricity, with the two groups having similar cone photoreceptor packing densities beyond 0.5 mm retinal eccentricity on average. Cone packing density in the living human retina decreases as a function of age within the foveal center with the largest difference being found at our most central measurement site. At all ages, the retina showed meridional difference in cone densities, with cone photoreceptor packing density decreasing faster with increasing eccentricity in the vertical dimensions than in the horizontal dimensions.
Branching habit and the allocation of reproductive resources in conifers.
Leslie, Andrew B
2012-09-01
Correlated relationships between branch thickness, branch density, and twig and leaf size have been used extensively to study the evolution of plant canopy architecture, but fewer studies have explored the impact of these relationships on the allocation of reproductive resources. This study quantifies pollen cone production in conifers, which have similar basic reproductive biology but vary dramatically in branching habit, in order to test how differences in branch diameter influence pollen cone size and the density with which they are deployed in the canopy. Measurements of canopy branch density, the number of cones per branch and cone size were used to estimate the amount of pollen cone tissues produced by 16 species in three major conifer clades. The number of pollen grains produced was also estimated using direct counts from individual pollen cones. The total amount of pollen cone tissues in the conifer canopy varied little among species and clades, although vegetative traits such as branch thickness, branch density and pollen cone size varied over several orders of magnitude. However, branching habit controls the way these tissues are deployed: taxa with small branches produce small pollen cones at a high density, while taxa with large branches produce large cones relatively sparsely. Conifers appear to invest similar amounts of energy in pollen production independent of branching habit. However, similar associations between branch thickness, branch density and pollen cone size are seen across conifers, including members of living and extinct groups not directly studied here. This suggests that reproductive features relating to pollen cone size are in large part a function of the evolution of vegetative morphology and branching habit.
Experimental and raytrace results for throat-to-throat compound parabolic concentrators
NASA Technical Reports Server (NTRS)
Leviton, D. B.; Leitch, J. W.
1986-01-01
Compound parabolic concentrators are nonimaging cone-shaped optics with useful angular transmission characteristics. Two cones used throat-to-throat accept radiant flux within one well-defined acceptance angle and redistribute it into another. If the entrance cone is fed with Lambertian flux, the exit cone produces a beam whose half-angle is the exit cone's acceptance angle and whose cross section shows uniform irradiance from near the exit mouth to infinity (the pair is a beam-angle transformer). The design of one pair of cones is discussed, along with an experiment to map the irradiance of the emergent beam and a raytracing program that models the cones fed by Lambertian flux. Experimental results compare favorably with raytrace results.
Replaceable filters and cones for flared-tubing connectors
NASA Technical Reports Server (NTRS)
Grant, L. E.; Howland, B. T.
1970-01-01
The connector is modified by machining the cone from one end before the fitting is bored to accommodate a metallic-filament slip-in filter. Thus, when the surface of the cone is damaged, only the cone needs replacement.
Ueno, Akiko; Omori, Yoshihiro; Sugita, Yuko; Watanabe, Satoshi; Chaya, Taro; Kozuka, Takashi; Kon, Tetsuo; Yoshida, Satoyo; Matsushita, Kenji; Kuwahara, Ryusuke; Kajimura, Naoko; Okada, Yasushi; Furukawa, Takahisa
2018-03-27
In the vertebrate retina, cone photoreceptors play crucial roles in photopic vision by transmitting light-evoked signals to ON- and/or OFF-bipolar cells. However, the mechanisms underlying selective synapse formation in the cone photoreceptor pathway remain poorly understood. Here, we found that Lrit1, a leucine-rich transmembrane protein, localizes to the photoreceptor synaptic terminal and regulates the synaptic connection between cone photoreceptors and cone ON-bipolar cells. Lrit1-deficient retinas exhibit an aberrant morphology of cone photoreceptor pedicles, as well as an impairment of signal transmission from cone photoreceptors to cone ON-bipolar cells. Furthermore, we demonstrated that Lrit1 interacts with Frmpd2, a photoreceptor scaffold protein, and with mGluR6, an ON-bipolar cell-specific glutamate receptor. Additionally, Lrit1-null mice showed visual acuity impairments in their optokinetic responses. These results suggest that the Frmpd2-Lrit1-mGluR6 axis regulates selective synapse formation in cone photoreceptors and is essential for normal visual function. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Willey, Melvin G.
1981-01-01
An infinite blender that achieves a homogeneous mixture of fuel microspheres is provided. Blending is accomplished by directing respective groups of desired particles onto the apex of a stationary coaxial cone. The particles progress downward over the cone surface and deposit in a space at the base of the cone that is described by a flexible band provided with a wide portion traversing and in continuous contact with the circumference of the cone base and extending upwardly therefrom. The band, being attached to the cone at a narrow inner end thereof, causes the cone to rotate on its arbor when the band is subsequently pulled onto a take-up spool. As a point at the end of the wide portion of the band passes the point where it is tangent to the cone, the blended particles are released into a delivery tube leading directly into a mold, and a plate mounted on the lower portion of the cone and positioned between the end of the wide portion of the band and the cone assures release of the particles only at the tangent point.
Conditional generation of an arbitrary superposition of coherent states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeoka, Masahiro; Sasaki, Masahide
2007-06-15
We present a scheme to conditionally generate an arbitrary superposition of a pair of coherent states from a squeezed vacuum by means of a modified photon subtraction in which a coherent-state ancilla and two on/off-type detectors are used. We show that, even including realistic imperfections of the detectors, our scheme can generate a target state with high fidelity. The amplitude of the generated states can be amplified by conditional homodyne detections.
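The target family of states has the form below (our notation, a sketch rather than the paper's exact parametrization); the normalization accounts for the nonzero overlap of the two coherent components, with real beta assumed so that the overlap is e^{-2|beta|^2}:

```latex
|\psi\rangle \;=\;
\frac{\cos\theta\,|\beta\rangle + e^{i\phi}\sin\theta\,|{-\beta}\rangle}
{\sqrt{1 + \sin 2\theta\,\cos\phi\; e^{-2|\beta|^{2}}}}
```

Varying theta and phi sweeps out an arbitrary superposition of the pair, from even ("cat") to odd superpositions and everything between.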
Fast, large-scale hologram calculation in wavelet domain
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi
2018-04-01
We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.
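The sketch below illustrates the wavelet-shrinkage idea in a drastically reduced setting: each object point's zone-plate pattern is sparsified in the wavelet domain before superposition, and the hologram is reconstructed once at the end. The wavelet choice, keep ratio, hologram size, and use of real zone plates are all our assumptions; this is not the authors' WASABI implementation.

```python
# An illustrative (assumed) wavelet-shrinkage superposition sketch.
import numpy as np
import pywt

N, wl, pitch, keep = 256, 532e-9, 1e-6, 0.02
yy, xx = np.mgrid[:N, :N] * pitch

def zone_plate(x0, y0, z):
    r2 = (xx - x0) ** 2 + (yy - y0) ** 2
    return np.cos(np.pi * r2 / (wl * z))        # real Fresnel zone plate

acc, slices = None, None
for (x0, y0, z) in [(60e-6, 60e-6, 5e-3), (180e-6, 120e-6, 7e-3)]:
    coeffs = pywt.wavedec2(zone_plate(x0, y0, z), "db4", level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0             # shrinkage: keep ~2% of coeffs
    acc = arr if acc is None else acc + arr     # superpose in wavelet domain
hologram = pywt.waverec2(
    pywt.array_to_coeffs(acc, slices, output_format="wavedec2"), "db4")
```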
QUANTUM COMPUTING: Quantum Entangled Bits Step Closer to IT.
Zeilinger, A
2000-07-21
In contrast to today's computers, quantum computers and information technologies may in future be able to store and transmit information not only in the state "0" or "1," but also in superpositions of the two; information will then be stored and transmitted in entangled quantum states. Zeilinger discusses recent advances toward using this principle for quantum cryptography and highlights studies into the entanglement (or controlled superposition) of several photons, atoms, or ions.
Formation of Large-Amplitude Wave Groups in an Experimental Model Basin
2008-08-01
varying parameters, including amplitude, frequency, and signal duration. Superposition of thes finite regular waves produced repeatable wave groups at a...19 Regular Waves 20 Irregular Waves 21 Senix Wave Gages 21 GLRP 23 Instrumentation Calibration and Uncertainty 26 Senix Ultrasonic Wave Gages... signal output from sine wave superposition, two sine waves combined: x] + x2 (top) and x3 + x4 (middle), all four waves (x, + x2 + x, + xA
Modeling decoherence with qubits
NASA Astrophysics Data System (ADS)
Heusler, Stefan; Dür, Wolfgang
2018-03-01
Quantum effects like the superposition principle contradict our experience of daily life. Decoherence can be viewed as a possible explanation why we do not observe quantum superposition states in the macroscopic world. In this article, we use the qubit ansatz to discuss decoherence in the simplest possible model system and propose a visualization for the microscopic origin of decoherence, and the emergence of a so-called pointer basis. Finally, we discuss the possibility of ‘macroscopic’ quantum effects.
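A minimal numerical sketch of this dephasing picture: for a single qubit prepared in a superposition, decoherence can be modeled by exponentially damping the off-diagonal density-matrix elements in the pointer basis, leaving a classical mixture. The rate gamma and time values are illustrative.

```python
# A minimal sketch of qubit dephasing in the pointer basis {|0>, |1>}.
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2.0)   # |+> = (|0> + |1>)/sqrt(2)
rho = np.outer(plus, plus)                   # density matrix of the superposition

def dephase(rho, gamma, t):
    """Damp coherences by exp(-gamma*t); populations stay fixed."""
    out = rho.astype(complex)
    decay = np.exp(-gamma * t)
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

for t in (0.0, 1.0, 10.0):
    print(t, np.round(dephase(rho, gamma=0.5, t=t), 4))
# as t grows, rho tends to diag(0.5, 0.5): a classical mixture with no
# observable superposition in the pointer basis
```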
Robot Behavior Acquisition: Superposition and Composition of Behaviors Learned through Teleoperation
NASA Technical Reports Server (NTRS)
Peters, Richard Alan, II
2004-01-01
Superposition of a small set of behaviors, learned via teleoperation, can lead to robust completion of a simple articulated reach-and-grasp task. Results support the hypothesis that a set of learned behaviors can be combined to generate new behaviors of a similar type. This supports the hypothesis that a robot can learn to interact purposefully with its environment through a developmental acquisition of sensory-motor coordination. Teleoperation bootstraps the process by enabling the robot to observe its own sensory responses to actions that lead to specific outcomes. A reach-and-grasp task, learned by an articulated robot through a small number of teleoperated trials, can be performed autonomously with success in the face of significant variations in the environment and perturbations of the goal. Superpositioning was performed using the Verbs and Adverbs algorithm that was developed originally for the graphical animation of articulated characters. Work was performed on Robonaut at NASA-JSC.
Quantum computer games: quantum minesweeper
NASA Astrophysics Data System (ADS)
Gordon, Michal; Gordon, Goren
2010-07-01
The computer game of quantum minesweeper is introduced as a quantum extension of the well-known classical minesweeper. Its main objective is to teach the unique concepts of quantum mechanics in a fun way. Quantum minesweeper demonstrates the effects of superposition, entanglement and their non-local characteristics. While in classical minesweeper the goal of the game is to discover all the mines laid out on a board without triggering them, in the quantum version there are several classical boards in superposition. The goal is to know the exact quantum state, i.e. the precise layout of all the mines in all the superposed classical boards. The player can perform three types of measurement: a classical measurement that probabilistically collapses the superposition; a quantum interaction-free measurement that can detect a mine without triggering it; and an entanglement measurement that provides non-local information. The application of the concepts taught by quantum minesweeper to one-way quantum computing is also presented.
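A toy sketch of the superposed-boards state and the classical measurement is given below; the board encoding and amplitudes are illustrative, and the interaction-free and entanglement measurements are omitted.

```python
# A toy model of several classical boards in superposition and a
# classical measurement that collapses it; all values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# two superposed 3x3 boards (1 = mine), with equal amplitudes
boards = [np.array([[0, 1, 0], [0, 0, 0], [1, 0, 0]]),
          np.array([[0, 0, 1], [0, 0, 0], [0, 1, 0]])]
amps = np.array([1.0, 1.0]) / np.sqrt(2.0)

def classical_measurement(boards, amps):
    """Collapse onto one classical board with probability |amplitude|^2."""
    probs = np.abs(amps) ** 2
    k = rng.choice(len(boards), p=probs / probs.sum())
    return boards[k]

collapsed = classical_measurement(boards, amps)
```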
On the Mixing of Single and Opposed Rows of Jets With a Confined Crossflow
NASA Technical Reports Server (NTRS)
Holdeman, James D.; Clisset, James R.; Moder, Jeffrey P.; Lear, William E.
2006-01-01
The primary objectives of this study were 1) to demonstrate that contour plots could be made using the data interface in the NASA GRC jet-in-crossflow (JIC) spreadsheet, and 2) to investigate the suitability of using superposition for the case of opposed rows of jets with their centerlines in-line. The current report is similar to NASA/TM-2005-213137 but the "basic" effects of a confined JIC that are shown in profile plots there are shown as contour plots in this report, and profile plots for opposed rows of aligned jets are presented here using both symmetry and superposition models. Although superposition was found to be suitable for most cases of opposed rows of jets with jet centerlines in-line, the calculation procedure in the JIC spreadsheet was not changed and it still uses the symmetry method for this case, as did all previous publications of the NASA empirical model.
Coherent Control to Prepare an InAs Quantum Dot for Spin-Photon Entanglement
NASA Astrophysics Data System (ADS)
Webster, L. A.; Truex, K.; Duan, L.-M.; Steel, D. G.; Bracker, A. S.; Gammon, D.; Sham, L. J.
2014-03-01
We optically generated an electronic state in a single InAs /GaAs self-assembled quantum dot that is a precursor to the deterministic entanglement of the spin of the electron with an emitted photon in the proposal of W. Yao, R.-B. Liu, and L. J. Sham [Phys. Rev. Lett. 95, 030504 (2005).]. A superposition state is prepared by optical pumping to a pure state followed by an initial pulse. By modulating the subsequent pulse arrival times and precisely controlling them using interferometric measurement of path length differences, we are able to implement a coherent control technique to selectively drive exactly one of the two components of the superposition to the ground state. This optical transition contingent on spin was driven with the same broadband pulses that created the superposition through the use of a two pulse coherent control sequence. A final pulse affords measurement of the coherence of this "preentangled" state.
Oscillatory Dynamics of One-Dimensional Homogeneous Granular Chains
NASA Astrophysics Data System (ADS)
Starosvetsky, Yuli; Jayaprakash, K. R.; Hasan, Md. Arif; Vakakis, Alexander F.
The acoustics of homogeneous granular chains has been studied extensively, both numerically and experimentally, in the references cited in the previous chapters. This chapter focuses on the oscillatory behavior of finite-dimensional homogeneous granular chains. It is well known that normal vibration modes are the building blocks of the vibrations of linear systems, due to the applicability of the principle of superposition. On the other hand, nonlinear theory lacks such a general superposition principle (although special cases of nonlinear superposition do exist), but nonlinear normal modes (NNMs) still play an important role in the forced and resonance dynamics of these systems. In their basic definition [1], NNMs were defined as time-periodic nonlinear oscillations of discrete or continuous dynamical systems where all coordinates (degrees of freedom) oscillate in unison with the same frequency; further extensions of this definition have been considered to account for NNMs of systems with internal resonances [2]...
A Simple Encryption Algorithm for Quantum Color Image
NASA Astrophysics Data System (ADS)
Li, Panchi; Zhao, Ya
2017-06-01
In this paper, a simple encryption scheme for quantum color images is proposed. First, a color image is transformed into a quantum superposition state by employing NEQR (novel enhanced quantum representation), where the R, G, B values of every pixel in a 24-bit RGB true-color image are represented by 24 single-qubit basis states, with 8 qubits per value. Then, each of these 24 qubits is transformed from a basis state into a balanced superposition state by applying controlled rotation gates. At this point, the gray-scale values of R, G, B of every pixel are in a balanced superposition of 2^24 multi-qubit basis states. After measurement, the whole image is uniform white noise, which does not provide any information. Decryption is the reverse process of encryption. Experimental results on a classical computer show that the proposed encryption scheme has better security.
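The white-noise property of a measurement on the balanced superposition can be illustrated classically for a single pixel: once every encoding qubit is rotated into an equal superposition, the measured bit is 0 or 1 with probability 1/2 regardless of the original value. The sketch below is an assumed, simplified classical simulation, not the quantum circuit itself.

```python
# A classical single-pixel illustration (assumed, simplified) of why
# measuring the balanced superposition yields uniform white noise.
import numpy as np

rng = np.random.default_rng(1)

def encrypt_measure(rgb24):
    out = 0
    for i in range(24):
        b = (rgb24 >> i) & 1                  # NEQR-style basis-state bit
        # a balanced rotation maps |b> to (|0> +/- |1>)/sqrt(2); the sign
        # is invisible to a computational-basis measurement, so each
        # measured bit is 0 or 1 with probability 1/2, independent of b
        measured = int(rng.random() < 0.5)
        out |= measured << i
    return out

print(hex(encrypt_measure(0x1234AB)))         # uniformly random 24-bit value
```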
NASA Astrophysics Data System (ADS)
Hahn, S.; Machefaux, E.; Hristov, Y. V.; Albano, M.; Threadgill, R.
2016-09-01
In the present study, the combination of the standalone dynamic wake meandering (DWM) model with Reynolds-averaged Navier-Stokes (RANS) CFD solutions for ambient ABL flows is introduced, and its predictive performance for annual energy production (AEP) is evaluated against Vestas' SCADA data for six operating wind farms over semi-complex terrains under neutral conditions. The performances of conventional linear and quadratic wake-superposition techniques are also compared, together with an in-house implementation of successive hierarchical merging approaches. Compared to our standard procedure based on the Jensen model in WindPRO, the overall results are promising, with a significant improvement in AEP accuracy for four of the six sites. While conventional linear superposition performs best for the four improved sites, hierarchical square superposition shows the least deterioration for the other two sites.
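The two conventional superposition rules compared above combine the fractional velocity deficits from upstream wakes either by direct summation (linear) or in root-sum-square fashion (quadratic). A minimal sketch with illustrative numbers:

```python
# A minimal sketch of linear vs. quadratic wake-deficit superposition
# at a single point; the deficit values are illustrative.
import numpy as np

deficits = np.array([0.20, 0.12, 0.05])    # deficit from each upstream wake

linear = deficits.sum()                    # linear superposition
quadratic = np.sqrt((deficits ** 2).sum()) # quadratic (root-sum-square)

u_inf = 8.0                                # ambient wind speed (m/s)
u_linear = u_inf * (1.0 - linear)          # more conservative (slower)
u_quadratic = u_inf * (1.0 - quadratic)    # milder combined deficit
```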
Recurrent abnormalities in conifer cones and the evolutionary origins of flower-like structures.
Rudall, Paula J; Hilton, Jason; Vergara-Silva, Francisco; Bateman, Richard M
2011-03-01
Conifer cones are reproductive structures that are typically of restricted growth and either exclusively pollen-bearing (male) or exclusively ovule-bearing (female). Here, we review two common spontaneous developmental abnormalities of conifer cones: proliferated cones, in which the apex grows vegetatively, and bisexual cones, which possess both male and female structures. Emerging developmental genetic data, combined with evidence from comparative morphology, ontogeny and palaeobotany, provide new insights into the evolution of both cones and flowers, and prompt novel strategies for understanding seed-plant evolution. Copyright © 2010 Elsevier Ltd. All rights reserved.
Surface micro-structuring of silicon by excimer-laser irradiation in reactive atmospheres
NASA Astrophysics Data System (ADS)
Pedraza, A. J.; Fowlkes, J. D.; Jesse, S.; Mao, C.; Lowndes, D. H.
2000-12-01
The formation mechanisms of cones and columns by pulsed-laser irradiation in reactive atmospheres were studied using scanning electron microscopy and profilometry. Deep etching takes place in SF6- and O2-rich atmospheres, and consequently silicon-containing molecules and clusters are released. Transport of silicon from the etched/ablated regions to the tips of columns and cones and to the sides of the cones is required, because both structures protrude above the initial surface. The laser-induced microstructure is influenced not only by the nature but also by the partial pressure of the reactive gas in the atmosphere. Irradiation in Ar following cone formation in SF6 produced no additional growth, but rather melting and resolidification. Subsequent irradiation, again in an SF6 atmosphere, led to cone restructuring and resumed growth. Thus the effects of etching plus re-deposition that produce column/cone formation and growth are clearly separated from the effects of melting alone. On the other hand, irradiation continued in air after cones were first formed in SF6 resulted in: (a) intense etching of the cones and a tendency to transform them into columns; (b) growth of new columns on top of the existing cones; and (c) filamentary nano-structures coating the sides of the columns and cones.
Short-wavelength cone-opponent retinal ganglion cells in mammals.
Marshak, David W; Mills, Stephen L
2014-03-01
In all of the mammalian species studied to date, the short-wavelength-sensitive (S) cones and the S-cone bipolar cells that receive their input are very similar, but the retinal ganglion cells that receive synapses from the S-cone bipolar cells appear to be quite different. Here, we review the literature on mammalian retinal ganglion cells that respond selectively to stimulation of S-cones and respond with opposite polarity to longer wavelength stimuli. There are at least three basic mechanisms to generate these color-opponent responses, including: (1) opponency is generated in the outer plexiform layer by horizontal cells and is conveyed to the ganglion cells via S-cone bipolar cells, (2) inputs from bipolar cells with different cone inputs and opposite response polarity converge directly on the ganglion cells, and (3) inputs from S-cone bipolar cells are inverted by S-cone amacrine cells. These are not mutually exclusive; some mammalian ganglion cells that respond selectively to S-cone stimulation seem to utilize at least two of them. Based on these findings, we suggest that the small bistratified ganglion cells described in primates are not the ancestral type, as proposed previously. Instead, the known types of ganglion cells in this pathway evolved from monostratified ancestral types and became bistratified in some mammalian lineages.
Evaporation From Soil Containers With Irregular Shapes
NASA Astrophysics Data System (ADS)
Assouline, Shmuel; Narkis, Kfir
2017-11-01
Evaporation from bare soils under laboratory conditions is generally studied using containers of regular shapes where the vertical edges are parallel to the flow lines in the drying domain. The main objective of this study was to investigate the impact of irregular container shapes, for which the flow lines either converge or diverge toward the surface. Evaporation from initially saturated sand and sandy loam soils packed in cones and inverted cones was compared to evaporation from corresponding cylindrical columns. The initial evaporation rate was higher in the cones, and close to potential evaporation. At the end of the experiment, the cumulative evaporation depth in the sand cone was equal to that in the column but higher than in the inverted cone, while in the sandy loam, the order was cone > column > inverted cone. By comparison to the column, stage 1 evaporation was longer in the cones, and practically similar in the inverted cones. Stage 2 evaporation rate decreased with the increase of the evaporating surface area. These results were more pronounced in the sandy loam. For the sand column, the transition between stage 1 and stage 2 evaporation occurred when the depth of the saturation front was approximately equal to the characteristic length of the soil. However, for the cone and the inverted cone, it occurred for a shallower depth of the saturation front. It seems therefore that the concept of the characteristic length derived from the soil hydraulic properties is related to drying systems of regular shapes.