NASA Astrophysics Data System (ADS)
Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud
2015-10-01
Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for the electron, positron and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. The EDKw are linearly scaled by radiological distance, taking tissue density heterogeneities into account. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on 18F, 99mTc, 131I and 177Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, based on the comparison of 80 absorbed dose (AD) points per phantom between Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always above 99%. SCC performed slightly worse but still well: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for 99mTc at the lung/bone interface. The CC superposition method is a good alternative to Monte Carlo simulations for radiopharmaceutical dosimetry, at reduced computational complexity.
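As an illustration of the density-scaling step described in this abstract, the following is a minimal 1D sketch of kernel superposition with radiological-distance scaling; the slab geometry, attenuation curve and exponential kernel are hypothetical stand-ins, not the paper's EDKw data.

```python
import numpy as np

def radiological_depth(density, voxel_size):
    """Cumulative density-scaled (radiological) distance along a ray."""
    return np.cumsum(density) * voxel_size

def collapsed_cone_ray_dose(terma, density, kernel, voxel_size):
    """Deposit TERMA along one collapsed-cone ray direction.

    The water kernel is evaluated at the radiological distance between
    interaction voxel i and deposition voxel j, which linearly rescales
    the kernel in heterogeneous media (density-scaling approximation)."""
    n = len(terma)
    dose = np.zeros(n)
    rad = radiological_depth(density, voxel_size)
    for i in range(n):        # interaction (energy release) voxel
        for j in range(n):    # deposition voxel
            r_eff = abs(rad[j] - rad[i])
            dose[j] += terma[i] * kernel(r_eff) * voxel_size
    return dose

# toy slab geometry: water / lung / water; exponential stand-in kernel
density = np.array([1.0] * 10 + [0.26] * 10 + [1.0] * 10)
terma = np.exp(-0.05 * np.arange(30))          # arbitrary attenuation curve
kernel = lambda r: 0.5 * np.exp(-2.0 * r)      # hypothetical EDKw profile
print(collapsed_cone_ray_dose(terma, density, kernel, voxel_size=0.5))
```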
NASA Astrophysics Data System (ADS)
Devpura, S.; Siddiqui, M. S.; Chen, D.; Liu, D.; Li, H.; Kumar, S.; Gordon, J.; Ajlouni, M.; Movsas, B.; Chetty, I. J.
2014-03-01
The purpose of this study was to systematically evaluate dose distributions computed with 5 different dose algorithms for patients with lung cancers treated using stereotactic ablative body radiotherapy (SABR). Treatment plans for 133 lung cancer patients, initially computed with a 1D-pencil beam (equivalent-path-length, EPL-1D) algorithm, were recalculated with 4 other algorithms commissioned for treatment planning, including 3-D pencil-beam (EPL-3D), anisotropic analytical algorithm (AAA), collapsed cone convolution superposition (CCC), and Monte Carlo (MC). The plan prescription dose was 48 Gy in 4 fractions normalized to the 95% isodose line. Tumors were classified according to location: peripheral tumors surrounded by lung (lung-island, N=39), peripheral tumors attached to the rib-cage or chest wall (lung-wall, N=44), and centrally-located tumors (lung-central, N=50). Relative to the EPL-1D algorithm, PTV D95 and mean dose values computed with the other 4 algorithms were lowest for "lung-island" tumors with smallest field sizes (3-5 cm). On the other hand, the smallest differences were noted for lung-central tumors treated with largest field widths (7-10 cm). Amongst all locations, dose distribution differences were most strongly correlated with tumor size for lung-island tumors. For most cases, convolution/superposition and MC algorithms were in good agreement. Mean lung dose (MLD) values computed with the EPL-1D algorithm were highly correlated with that of the other algorithms (correlation coefficient =0.99). The MLD values were found to be ~10% lower for small lung-island tumors with the model-based (conv/superposition and MC) vs. the correction-based (pencil-beam) algorithms with the model-based algorithms predicting greater low dose spread within the lungs. This study suggests that pencil beam algorithms should be avoided for lung SABR planning. For the most challenging cases, small tumors surrounded entirely by lung tissue (lung-island type), a Monte-Carlo-based algorithm may be warranted.
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
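A minimal sketch of the scatter-kernel-superposition idea, for the conventional stationary-kernel case only (the adaptive, thickness-dependent kernels of ASKS/fASKS are omitted); the Gaussian kernel and scatter weight are assumed values for illustration.

```python
import numpy as np

def sks_correct(measured, kernel, n_iter=5):
    """Iteratively estimate scatter as (projection ⊛ kernel) and subtract it.

    The convolution is evaluated in Fourier space, the same linearity
    shortcut that fASKS exploits for speed."""
    K = np.fft.rfft2(np.fft.ifftshift(kernel))
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = np.fft.irfft2(np.fft.rfft2(primary) * K, s=measured.shape)
        primary = measured - scatter          # refine the primary estimate
    return primary

# toy example: 64x64 projection, broad Gaussian scatter kernel
x = np.arange(64) - 32
g = np.exp(-(x[:, None]**2 + x[None, :]**2) / (2 * 10.0**2))
kernel = 0.4 * g / g.sum()                    # assumed scatter-to-primary weight
projection = np.ones((64, 64))
print(sks_correct(projection, kernel).mean())
```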
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devpura, S; Li, H; Liu, C
Purpose: To correlate dose distributions computed using six algorithms for recurrent early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT), with outcome (local failure). Methods: Of 270 NSCLC patients treated with 12 Gy × 4, 20 were found to have local recurrence prior to the 2-year time point. These patients were originally planned with a 1-D pencil beam (1-D PB) algorithm. 4D imaging was performed to manage tumor motion. Regions of local failure were determined from follow-up PET-CT scans. Follow-up CT images were rigidly fused to the planning CT (pCT), and recurrent tumor volumes (Vrecur) were mapped to the pCT. Dose was recomputed, retrospectively, using five algorithms: 3-D PB, collapsed cone convolution (CCC), anisotropic analytical algorithm (AAA), AcurosXB, and Monte Carlo (MC). Tumor control probability (TCP) was computed using the Marsden model (1,2). Patterns of failure were classified as central, in-field, marginal, and distant for Vrecur ≥95% of prescribed dose, 95-80%, 80-20%, and ≤20%, respectively (3). Results: Average PTV D95 (dose covering 95% of the PTV) for 3-D PB, CCC, AAA, AcurosXB, and MC relative to 1-D PB were 95.3±2.1%, 84.1±7.5%, 84.9±5.7%, 86.3±6.0%, and 85.1±7.0%, respectively. TCP values for 1-D PB, 3-D PB, CCC, AAA, AcurosXB, and MC were 98.5±1.2%, 95.7±3.0%, 79.6±16.1%, 79.7±16.5%, 81.1±17.5%, and 78.1±20%, respectively. Patterns of local failure were similar for the 1-D and 3-D PB plans, which predicted that the majority of failures occur in central/distal regions, with only ~15% occurring distantly. However, with convolution/superposition and MC type algorithms, the majority of failures (65%) were predicted to be distant, consistent with the literature. Conclusion: Based on MC and convolution/superposition type algorithms, average PTV D95 and TCP were ~15% lower than the planned 1-D PB dose calculation. Patterns-of-failure results suggest that MC and convolution/superposition type algorithms predict different outcomes for patterns of failure relative to PB algorithms. Work supported in part by Varian Medical Systems, Palo Alto, CA.
Cone-shaped source characteristics and inductance effect of transient electromagnetic method
NASA Astrophysics Data System (ADS)
Yang, Hai-Yan; Li, Feng-Ping; Yue, Jian-Hua; Guo, Fu-Sheng; Liu, Xu-Hua; Zhang, Hua
2017-03-01
Small multi-turn coil devices are used with the transient electromagnetic method (TEM) in areas with limited space, particularly in underground environments such as coal mine roadways and engineering tunnels, and for detecting shallow geological targets in environmental and engineering fields. However, the equipment involved has strong mutual inductance coupling, which causes a lengthy turn-off time and a deep "blind zone". This study proposes a new transmitter device with a cone-shaped source and derives the radius formula of each coil and the mutual inductance coefficient of the cone. Theoretical analysis of the primary field produced by the conical source in a uniform medium, together with a comparison of the inductance of the new device with that of the multi-turn coil, shows that the inductance of the multi-turn coil is nine times greater than that of the conical source at the same equivalent magnetic moment of 926.1 A·m². This indicates that the new source leads to a much shallower "blind zone". Furthermore, increasing the bottom radius and the number of turns of the cone creates a larger mutual inductance, but increasing the cone height results in a lower mutual inductance. Using the superposition principle, the primary and secondary magnetic fields for a conical source in a homogeneous medium are calculated; the results indicate that the magnetic behavior of the cone is the same as that of the multi-turn coils, but the transient responses of the secondary field and the total field are stronger than those of the multi-turn coils. To study the transient response characteristics of a cone-shaped source in a layered earth, a numerical filtering algorithm is then developed using the fast Hankel transform and the improved cosine transform, again using the superposition principle. During development, an average apparent resistivity inverted from the induced electromotive force of each coil is defined to represent the comprehensive resistivity of the conical source. To verify the forward calculation method, the transient responses of H-type and KH-type models are calculated, and the data are inverted using a "smoke ring" inversion. The inversion results agree well with the original models and show that the forward calculation method is effective. The results of this study provide an option for solving the problem of a deep "blind zone" and also provide a theoretical reference for further research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tajaldeen, A; Ramachandran, P; Geso, M
2015-06-15
Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent Stereotactic Ablative Body Radiotherapy (SABR). Two different methods were used to study the performance of the seven dose calculation algorithms in the XiO and Eclipse treatment planning systems: (i) the same dose coverage to the target volume (the "same dose" method) and (ii) the same monitor units in all algorithms (the "same monitor units" method). The seven dose calculation algorithms were superposition, fast superposition, fast Fourier transform (FFT) convolution, Clarkson, anisotropic analytical algorithm (AAA), Acuros XB and pencil beam (PB). Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity and dose fall-off index. In addition, the doses to critical structures such as the lungs, heart, oesophagus and spinal cord were also studied. Statistical analysis was performed using Prism software. Results: The mean ± SD conformity index for the superposition, fast superposition, Clarkson and FFT convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7 and 2.17±0.59, respectively, whereas for AAA, pencil beam and Acuros XB it was 1.4±0.27, 1.66±0.27 and 1.35±0.24, respectively. Conclusion: Our study showed significant variations among the seven algorithms. The superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters. The Clarkson, FFT convolution and pencil beam algorithms showed large differences compared to the superposition algorithm. Based on our study, we recommend the superposition and Acuros XB algorithms as the first choice of algorithms in lung cancer radiotherapy involving small fields. However, further investigation by Monte Carlo simulation is required to confirm our results.
Xiao, Kai; Chen, Danny Z; Hu, X Sharon; Zhou, Bo
2012-12-01
The three-dimensional digital differential analyzer (3D-DDA) algorithm is a widely used ray traversal method, which is also at the core of many convolution∕superposition (C∕S) dose calculation approaches. However, porting existing C∕S dose calculation methods onto graphics processing unit (GPU) has brought challenges to retaining the efficiency of this algorithm. In particular, straightforward implementation of the original 3D-DDA algorithm inflicts a lot of branch divergence which conflicts with the GPU programming model and leads to suboptimal performance. In this paper, an efficient GPU implementation of the 3D-DDA algorithm is proposed, which effectively reduces such branch divergence and improves performance of the C∕S dose calculation programs running on GPU. The main idea of the proposed method is to convert a number of conditional statements in the original 3D-DDA algorithm into a set of simple operations (e.g., arithmetic, comparison, and logic) which are better supported by the GPU architecture. To verify and demonstrate the performance improvement, this ray traversal method was integrated into a GPU-based collapsed cone convolution∕superposition (CCCS) dose calculation program. The proposed method has been tested using a water phantom and various clinical cases on an NVIDIA GTX570 GPU. The CCCS dose calculation program based on the efficient 3D-DDA ray traversal implementation runs 1.42 ∼ 2.67× faster than the one based on the original 3D-DDA implementation, without losing any accuracy. The results show that the proposed method can effectively reduce branch divergence in the original 3D-DDA ray traversal algorithm and improve the performance of the CCCS program running on GPU. Considering the wide utilization of the 3D-DDA algorithm, various applications can benefit from this implementation method.
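The core trick, replacing the axis-selection branch with comparison masks, can be sketched as follows; this is a scalar NumPy illustration of the idea, not the paper's CUDA code, and the traversal state values in the usage example are made up.

```python
import numpy as np

def dda_step_branchless(t_max, t_delta, idx, step):
    """One 3D-DDA voxel step with comparison masks instead of an
    if/elif chain over the three axes, so all GPU threads execute the
    same instruction stream regardless of which axis advances."""
    kx = (t_max[0] <= t_max[1]) & (t_max[0] <= t_max[2])
    ky = (~kx) & (t_max[1] <= t_max[2])
    kz = ~(kx | ky)
    k = np.array([kx, ky, kz])
    t_exit = t_max[k.argmax()]          # ray parameter where the voxel is left
    idx = idx + step * k                # advance exactly one voxel index
    t_max = t_max + t_delta * k         # next axis-crossing parameter
    return t_max, idx, t_exit

# walk a ray through a voxel grid (entry state computed elsewhere)
t_max = np.array([0.3, 0.5, 0.4]); t_delta = np.array([0.6, 1.0, 0.8])
idx = np.array([0, 0, 0]); step = np.array([1, 1, 1])
for _ in range(5):
    t_max, idx, t_exit = dda_step_branchless(t_max, t_delta, idx, step)
    print(idx, t_exit)
```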
Ligated Metal Clusters - Structures, Energy and Reactivity
2016-04-01
…projection superposition approximation (PSA) algorithm through a more careful consideration of how to calculate cross sections for elongated molecules… superposition approximation (PSA) is now complete. We have made it available free of charge to the scientific community on a dedicated website at UCSB. We… by AFOSR. We continued to improve the projection superposition approximation (PSA) algorithm.
Generalization of some hidden subgroup algorithms for input sets of arbitrary size
NASA Astrophysics Data System (ADS)
Poslu, Damla; Say, A. C. Cem
2006-05-01
We consider the problem of generalizing some quantum algorithms so that they will work on input domains whose cardinalities are not necessarily powers of two. When analyzing the algorithms we assume that generating superpositions of arbitrary subsets of basis states whose cardinalities are not necessarily powers of two perfectly is possible. We have taken Ballhysa's model as a template and have extended it to Chi, Kim and Lee's generalizations of the Deutsch-Jozsa algorithm and to Simon's algorithm. With perfectly equal superpositions of input sets of arbitrary size, Chi, Kim and Lee's generalized Deutsch-Jozsa algorithms, both for evenly-distributed and evenly-balanced functions, worked with one-sided error property. For Simon's algorithm the success probability of the generalized algorithm is the same as that of the original for input sets of arbitrary cardinalities with equiprobable superpositions, since the property that the measured strings are all those which have dot product zero with the string we search, for the case where the function is 2-to-1, is not lost.
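The assumed primitive, preparing a perfectly equal superposition over a basis-state subset whose cardinality need not be a power of two, amounts to the following state vector; a small NumPy sketch for concreteness.

```python
import numpy as np

def equal_superposition(indices, n_qubits):
    """State vector with equal amplitudes 1/sqrt(|S|) over an arbitrary
    subset S of basis states (assumed perfectly preparable, as in the
    analysis above)."""
    psi = np.zeros(2**n_qubits, dtype=complex)
    psi[list(indices)] = 1.0 / np.sqrt(len(indices))
    return psi

# equal superposition over 3 of the 8 basis states of 3 qubits
print(equal_superposition([0, 3, 5], 3))
```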
CT cardiac imaging: evolution from 2D to 3D backprojection
NASA Astrophysics Data System (ADS)
Tang, Xiangyang; Pan, Tinsu; Sasaki, Kosuke
2004-04-01
The state-of-the-art multiple detector-row CT, which usually employs fan beam reconstruction algorithms by approximating a cone beam geometry into a fan beam geometry, has been well recognized as an important modality for cardiac imaging. At present, multiple detector-row CT is evolving into volumetric CT, in which cone beam reconstruction algorithms are needed to combat cone beam artifacts caused by the large cone angle. An ECG-gated cardiac cone beam reconstruction algorithm based upon the so-called semi-CB geometry is implemented in this study. To obtain the highest temporal resolution, only the projection data corresponding to 180° plus the cone angle are row-wise rebinned into the semi-CB geometry for three-dimensional reconstruction. Data extrapolation is utilized to extend the z-coverage of the ECG-gated cardiac cone beam reconstruction algorithm toward the edge of the CT detector. A helical body phantom is used to evaluate the ECG-gated cone beam reconstruction algorithm's z-coverage and capability of suppressing cone beam artifacts. Furthermore, two sets of cardiac data scanned by a multiple detector-row CT scanner at 16 × 1.25 mm and normalized pitch 0.275 and 0.3, respectively, are used to evaluate the ECG-gated CB reconstruction algorithm's imaging performance. As a reference, the images reconstructed by a fan beam reconstruction algorithm for multiple detector-row CT are also presented. The qualitative evaluation shows that the ECG-gated cone beam reconstruction algorithm outperforms its fan beam counterpart in terms of cone beam artifact suppression and z-coverage while temporal resolution is well maintained. Consequently, the scan speed can be increased to reduce the contrast agent amount and injection time, and to improve patient comfort and x-ray dose efficiency. Based upon this comparison, it is believed that, with the transition of multiple detector-row CT into volumetric CT, ECG-gated cone beam reconstruction algorithms will provide better image quality for CT cardiac applications.
Real-time dose computation: GPU-accelerated source modeling and superposition/convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques, Robert; Wong, John; Taylor, Russell
Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ~24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³ times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine in order to provide a substantial performance gain over CPU based implementations. Real-time dose computation is feasible with the accuracy levels of the superposition/convolution algorithm.
UAS Collision Avoidance Algorithm that Minimizes the Impact on Route Surveillance
2009-03-01
[Table-of-contents excerpts: Appendix A: Collision Avoidance Algorithm/Virtual Cockpit Interface; Appendix B: Collision Cone Boundary Rates; figures of the collision cone approach in the vertical plane (single cone, split cone, multiple intruders) [27].]
NASA Astrophysics Data System (ADS)
Xie, Shi-Peng; Luo, Li-Min
2012-06-01
The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies. However, this method differs in that a scatter detecting blocker (SDB) was used between the X-ray source and the tested object to model the self-adaptive scatter kernel. This study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on the SKS. The quality of image can be improved by removing the scatter distribution. The results show that the method can effectively reduce the scatter artifacts, and increase the image quality. Our approach increases the image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique can be significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single scan acquisition.
Nielsen, Tine B; Wieslander, Elinore; Fogliata, Antonella; Nielsen, Morten; Hansen, Olfred; Brink, Carsten
2011-05-01
To investigate differences in calculated doses and normal tissue complication probability (NTCP) values between different dose algorithms. Six dose algorithms from four different treatment planning systems were investigated: Eclipse AAA; Oncentra MasterPlan Collapsed Cone and Pencil Beam; Pinnacle Collapsed Cone; and XiO Multigrid Superposition and Fast Fourier Transform Convolution. Twenty NSCLC patients treated in the period 2001-2006 on the same accelerator were included, and that accelerator was modeled in the different systems. The treatment plans were recalculated with the same number of monitor units and beam arrangements across the dose algorithms. Dose volume histograms of the GTV, PTV, combined lungs (excluding the GTV), and heart were exported and evaluated. NTCP values for heart and lungs were calculated using the relative seriality model and the LKB model, respectively. Furthermore, NTCP for the lungs was calculated from two different model parameter sets. Calculations and evaluations were performed both including and excluding density corrections. Statistically significant differences were found between the calculated doses to heart, lungs, and targets across the algorithms. Mean lung dose and V20 are not very sensitive to a change between the investigated dose calculation algorithms. However, the dose levels for the PTV averaged over the patient population vary by up to 11%. The predicted NTCP values for pneumonitis vary between 0.20 and 0.24 or 0.35 and 0.48 across the investigated dose algorithms, depending on the chosen model parameter set. The influence of the use of density correction in the dose calculation on the predicted NTCP values depends on the specific dose calculation algorithm and the model parameter set. For fixed values of these, the changes in NTCP can be up to 45%. Calculated NTCP values for pneumonitis are more sensitive to the choice of algorithm than mean lung dose and V20, which are also commonly used for plan evaluation. The NTCP values for heart complication are, in this study, not very sensitive to the choice of algorithm. Dose calculations based on density corrections result in quite different NTCP values than calculations without density corrections. It is therefore important when working with NTCP planning to use NTCP parameter values based on calculations and treatments similar to those for which the NTCP is of interest.
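For reference, the LKB lung model mentioned above maps a dose-volume histogram to NTCP via the generalized equivalent uniform dose; a minimal sketch follows, with a made-up DVH and illustrative parameter values (not the two parameter sets used in this study).

```python
import numpy as np
from scipy.stats import norm

def gEUD(doses, volumes, n):
    """Generalized equivalent uniform dose from a differential DVH."""
    a = 1.0 / n
    return (np.sum(volumes * doses**a) / np.sum(volumes))**(1.0 / a)

def ntcp_lkb(doses, volumes, TD50, m, n):
    """Lyman-Kutcher-Burman NTCP: Phi((gEUD - TD50) / (m * TD50))."""
    t = (gEUD(doses, volumes, n) - TD50) / (m * TD50)
    return norm.cdf(t)

# hypothetical lung DVH bins and illustrative parameter values
doses = np.array([5.0, 10.0, 20.0, 30.0])   # Gy
vols = np.array([0.4, 0.3, 0.2, 0.1])       # fractional volume
print(ntcp_lkb(doses, vols, TD50=30.0, m=0.37, n=1.0))
```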
The NLO jet vertex in the small-cone approximation for kt and cone algorithms
NASA Astrophysics Data System (ADS)
Colferai, D.; Niccoli, A.
2015-04-01
We determine the jet vertex for Mueller-Navelet jets and forward jets in the small-cone approximation for two particular choices of jet algorithms: the kt algorithm and the cone algorithm. These choices are motivated by the extensive use of such algorithms in the phenomenology of jets. The differences with the original calculations of the small-cone jet vertex by Ivanov and Papa, which is found to be equivalent to an algorithm formerly proposed by Furman, are shown at both the analytic and numerical level, and turn out to be sizeable. A detailed numerical study of the error introduced by the small-cone approximation is also presented, for various observables of phenomenological interest. For values of the jet "radius" R = 0.5, the use of the small-cone approximation amounts to an error of about 5% at the level of cross section, while it reduces to less than 2% for ratios of distributions such as those involved in the measure of the azimuthal decorrelation of dijets.
Pattern Classifications Using Grover's and Ventura's Algorithms in a Two-qubits System
NASA Astrophysics Data System (ADS)
Singh, Manu Pratap; Radhey, Kishori; Rajput, B. S.
2018-03-01
Carrying out the classification of patterns in a two-qubit system by separately using Grover's and Ventura's algorithms on different possible superpositions, it has been shown that the exclusion superposition and the phase-invariance superposition are the most suitable search states obtained from two-pattern start states and one-pattern start states, respectively, for the simultaneous classification of patterns. The higher effectiveness of Grover's algorithm for large search states has been verified, but the higher effectiveness of Ventura's algorithm for a smaller database has been contradicted in two-qubit systems, and it has been demonstrated that unknown patterns (not present in the concerned database) are classified more efficiently than known ones (present in the database) in both algorithms. It has also been demonstrated that the different states of the Singh-Rajput MES obtained from the corresponding self-single-pattern start states are the most suitable search states for the classification of the patterns |00>, |01>, |10> and |11>, respectively, on the second iteration of Grover's method or the first operation of Ventura's algorithm.
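A minimal sketch of one Grover iteration on a two-qubit register, where a single iteration already amplifies the marked basis state to probability one for N = 4; the start state here is the uniform superposition, whereas the paper's start states are pattern-dependent.

```python
import numpy as np

def grover_iterate(psi, marked):
    """One Grover iteration: oracle phase flip + inversion about the mean."""
    psi = psi.copy()
    psi[marked] *= -1.0                 # oracle marks the target state
    psi = 2.0 * psi.mean() - psi        # diffusion (inversion about average)
    return psi

# 2-qubit register, uniform start state, search for |10> (index 2)
psi = np.full(4, 0.5)
psi = grover_iterate(psi, marked=2)
print(np.abs(psi)**2)   # probability 1.0 on the marked state after 1 iteration
```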
Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera
Xue, Bai; Choi, Stacey S.; Doble, Nathan; Werner, John S.
2008-01-01
A fast and efficient method for quantifying photoreceptor density in images obtained with an en-face flood-illuminated adaptive optics (AO) imaging system is described. To improve accuracy of cone counting, en-face images are analyzed over extended areas. This is achieved with two separate semiautomated algorithms: (1) a montaging algorithm that joins retinal images with overlapping common features without edge effects and (2) a cone density measurement algorithm that counts the individual cones in the montaged image. The accuracy of the cone density measurement algorithm is high, with >97% agreement for a simulated retinal image (of known density, with low contrast) and for AO images from normal eyes when compared with previously reported histological data. Our algorithms do not require spatial regularity in cone packing and are, therefore, useful for counting cones in diseased retinas, as demonstrated for eyes with Stargardt’s macular dystrophy and retinitis pigmentosa. PMID:17429482
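A simplified stand-in for the counting step, not the authors' algorithm: cones are taken as regional intensity maxima separated by at least the expected cone spacing; the smoothing sigma, window size and threshold are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def count_cones(image, min_separation=3, threshold=0.1):
    """Estimate cone photoreceptor coordinates as regional intensity maxima.

    Smooth the image, find local maxima within a neighborhood set by the
    expected cone spacing, and keep those above an intensity threshold.
    Note this makes no assumption of spatial regularity in cone packing."""
    smoothed = gaussian_filter(image.astype(float), sigma=1.0)
    peaks = (smoothed == maximum_filter(smoothed, size=min_separation))
    peaks &= smoothed > threshold * smoothed.max()
    ys, xs = np.nonzero(peaks)
    return list(zip(xs, ys))

# toy usage on random data (a real input would be a montaged AO image)
rng = np.random.default_rng(0)
print(len(count_cones(rng.random((64, 64)))))
```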
A reconstruction algorithm for helical CT imaging on PI-planes.
Liang, Hongzhu; Zhang, Cishen; Yan, Ming
2006-01-01
In this paper, a Feldkamp-type approximate reconstruction algorithm is presented for helical cone-beam computed tomography. To effectively suppress artifacts due to large cone-angle scanning, it is proposed to reconstruct the object point-wise on unique customized tilted PI-planes which are close to the data-collecting helices of the corresponding points. Such a reconstruction scheme can considerably suppress the artifacts of large cone-angle scanning. Computer simulations show that the proposed algorithm provides improved imaging performance compared with existing approximate cone-beam reconstruction algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parenica, H; Ford, J; Mavroidis, P
Purpose: To quantify and compare the effect of metallic dental implants (MDI) on dose distributions calculated using the Collapsed Cone Convolution Superposition (CCCS) algorithm or a Monte Carlo algorithm (with and without correcting for the density of the MDI). Methods: Seven patients previously treated in the head and neck region were included in this study. The MDI and the streaking artifacts on the CT images were carefully contoured. For each patient, a plan was optimized and calculated using the Pinnacle³ treatment planning system (TPS). For each patient two dose calculations were performed: (a) with the densities of the MDI and CT artifacts overridden (12 g/cc and 1 g/cc, respectively) and (b) without density overrides. The plans were then exported to the Monaco TPS and recalculated using a Monte Carlo dose calculation algorithm. The changes in dose to the PTVs and surrounding regions of interest (ROIs) were examined between all plans. Results: The Monte Carlo dose calculation indicated that PTVs received 6% lower dose than the CCCS algorithm predicted. In some cases, the Monte Carlo algorithm indicated that surrounding ROIs received higher dose (up to a factor of 2). Conclusion: Not properly accounting for dental implants can impact both the high-dose regions (PTV) and the low-dose regions (OAR). This study implies that if the MDI and the artifacts are not appropriately contoured and given the correct density, there is a potentially significant impact on PTV coverage and OAR maximum doses.
Cooper, Robert F.; Lombardo, Marco; Carroll, Joseph; Sloan, Kenneth R.; Lombardo, Giuseppe
2016-01-01
The ability to non-invasively image the cone photoreceptor mosaic holds significant potential as a diagnostic for retinal disease. Central to the realization of this potential is the development of sensitive metrics for characterizing the organization of the mosaic. Here we evaluated previously-described (Pum et al., 1990) and newly-developed (Fourier- and Radon-based) methods of measuring cone orientation in both simulated and real images of the parafoveal cone mosaic. The proposed algorithms correlated well across both simulated and real mosaics, suggesting that each algorithm would provide an accurate description of individual photoreceptor orientation. Despite the high agreement between algorithms, each performed differently in response to image intensity variation and cone coordinate jitter. The integration property of the Fourier transform allowed the Fourier-based method to be resistant to cone coordinate jitter and perform the most robustly of all three algorithms. Conversely, when there is good image quality but unreliable cone identification, the Radon algorithm performed best. Finally, in cases where both the image and cone coordinate reliability was excellent, the method of Pum et al. (1990) performed best. These descriptors are complementary to conventional descriptive metrics of the cone mosaic, such as cell density and spacing, and have the potential to aid in the detection of photoreceptor pathology. PMID:27484961
PI-line-based image reconstruction in helical cone-beam computed tomography with a variable pitch.
Zou, Yu; Pan, Xiaochuan; Xia, Dan; Wang, Ge
2005-08-01
Current applications of helical cone-beam computed tomography (CT) involve primarily a constant pitch, where the translating speed of the table and the rotation speed of the source-detector remain constant. However, situations do exist where it may be more desirable to use a helical scan with a variable translating speed of the table, leading to a variable pitch. One such application could arise in helical cone-beam CT fluoroscopy for the determination of vascular structures through real-time imaging of contrast bolus arrival. Most of the existing reconstruction algorithms have been developed only for helical cone-beam CT with constant pitch, including the backprojection-filtration (BPF) and filtered-backprojection (FBP) algorithms that we proposed previously. It is possible to generalize some of these algorithms to reconstruct images exactly for helical cone-beam CT with a variable pitch. In this work, we generalize our BPF and FBP algorithms to reconstruct images directly from data acquired in helical cone-beam CT with a variable pitch. We have also performed a preliminary numerical study to demonstrate and verify the generalization of the two algorithms. The results of the study confirm that our generalized BPF and FBP algorithms can yield exact reconstruction in helical cone-beam CT with a variable pitch. It should be pointed out that our generalized BPF algorithm is the only algorithm capable of exactly reconstructing region-of-interest images from data containing transverse truncations.
Experimental superposition of orders of quantum gates
Procopio, Lorenzo M.; Moqanaki, Amir; Araújo, Mateus; Costa, Fabio; Alonso Calafell, Irati; Dowd, Emma G.; Hamel, Deny R.; Rozema, Lee A.; Brukner, Časlav; Walther, Philip
2015-01-01
Quantum computers achieve a speed-up by placing quantum bits (qubits) in superpositions of different states. However, it has recently been appreciated that quantum mechanics also allows one to 'superimpose different operations'. Furthermore, it has been shown that using a qubit to coherently control the gate order allows one to accomplish a task—determining if two gates commute or anti-commute—with fewer gate uses than any known quantum algorithm. Here we experimentally demonstrate this advantage, in a photonic context, using a second qubit to control the order in which two gates are applied to a first qubit. We create the required superposition of gate orders by using additional degrees of freedom of the photons encoding our qubits. The new resource we exploit can be interpreted as a superposition of causal orders, and could allow quantum algorithms to be implemented with an efficiency unlikely to be achieved on a fixed-gate-order quantum computer. PMID:26250107
The γ Algorithm and Some Applications
ERIC Educational Resources Information Center
Castillo, Enrique; Jubete, Francisco
2004-01-01
In this paper the power of the γ algorithm for obtaining the dual of a given cone and some of its multiple applications is discussed. The meaning of each sequential tableau appearing during the process is interpreted. It is shown that each tableau contains the generators of the dual cone of a given cone and that the algorithm updates the…
NASA Astrophysics Data System (ADS)
Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.
2016-01-01
The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for computing ion irradiation outcomes, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line-specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable to the evaluation of dose, RBE and LET distributions. The weight function for the beamlet superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has been successfully tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmarking plans in simple geometries and clinical plans are shown to demonstrate the model's capabilities.
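The superposition itself reduces to a weighted sum of precomputed beamlet distributions; a minimal sketch with random stand-in beamlets follows. The same contraction would apply to RBE- or LET-weighted beamlet quantities in place of dose.

```python
import numpy as np

def beamlet_superposition(beamlet_doses, phase_space_weights):
    """Total distribution as a weighted superposition of precomputed
    universal beamlet distributions; the weights come from the beam
    phase-space density at the patient entrance.

    beamlet_doses: (n_beamlets, nx, ny, nz); weights: (n_beamlets,)"""
    return np.tensordot(phase_space_weights, beamlet_doses, axes=1)

# toy numbers: 3 beamlets on a small grid, made-up weights
beamlets = np.random.rand(3, 8, 8, 8)
weights = np.array([0.2, 0.5, 0.3])
dose = beamlet_superposition(beamlets, weights)
print(dose.shape)
```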
Information Theoretic Characterization of Physical Theories with Projective State Space
NASA Astrophysics Data System (ADS)
Zaopo, Marco
2015-08-01
Probabilistic theories are a natural framework to investigate the foundations of quantum theory and possible alternative or deeper theories. In a generic probabilistic theory, states of a physical system are represented as vectors of outcome probabilities and state spaces are convex cones. In this picture the physics of a given theory is related to the geometric shape of the cone of states. In quantum theory, for instance, the shape of the cone of states corresponds to a projective space over the complex numbers. In this paper we investigate geometric constraints on the state space of a generic theory imposed by the following information-theoretic requirements: every non-completely-mixed state of a system is perfectly distinguishable from some other state in a single-shot measurement; the information capacity of physical systems is conserved under making mixtures of states. These assumptions guarantee that a generic physical system satisfies a natural principle asserting that the more a state of the system is mixed, the less information can be stored in the system using that state as a logical value. We show that in all theories satisfying the above assumptions the shape of the cone of states is that of a projective space over a generic field of numbers. Remarkably, these theories constitute generalizations of quantum theory where the superposition principle holds with coefficients pertaining to a generic field of numbers in place of the complex numbers. If the field of numbers is trivial and contains only one element we obtain classical theory. This result shows that the superposition principle is quite common among probabilistic theories, while its absence indicates either classical theory or an implausible theory.
Dual energy approach for cone beam artifacts correction
NASA Astrophysics Data System (ADS)
Han, Chulhee; Choi, Shinkook; Lee, Changwoo; Baek, Jongduk
2017-03-01
Cone beam computed tomography systems generate 3D volumetric images, which provide further morphological information compared to radiography and tomosynthesis systems. However, images reconstructed by the FDK algorithm contain cone beam artifacts when the cone angle is large. To reduce the cone beam artifacts, a two-pass algorithm has been proposed. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by high-density materials, and provides an effective method to estimate the error images (i.e., cone beam artifact images) produced by those materials. While this approach is simple and effective for a small cone angle (i.e., 5-7 degrees), the correction performance degrades as the cone angle increases. In this work, we propose a new method to reduce the cone beam artifacts using a dual energy technique. The basic idea of the proposed method is to estimate the error images generated by the high-density materials more reliably. To do this, projection data of the high-density materials are extracted from dual energy CT projection data using a material decomposition technique, and then reconstructed by iterative reconstruction with total-variation regularization. The reconstructed high-density materials are used to estimate the error images from the original FDK images. The performance of the proposed method is compared with the two-pass algorithm using root mean square errors. The results show that the proposed method reduces the cone beam artifacts more effectively, especially at large cone angles.
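A schematic of the proposed two-step correction, with `decompose`, `recon_tv`, `forward_project` and `fdk` as assumed callables standing in for a full dual-energy CT pipeline; only the data flow of the abstract is shown.

```python
def correct_cone_beam_artifacts(fdk_image, proj_low, proj_high,
                                decompose, recon_tv, forward_project, fdk):
    """Dual-energy cone-beam artifact correction (data flow only).

    decompose:       dual-energy material decomposition -> dense-material projections
    recon_tv:        TV-regularized iterative reconstruction
    forward_project: cone-beam reprojection of a volume
    fdk:             FDK reconstruction
    """
    proj_dense = decompose(proj_low, proj_high)    # high-density projections
    dense = recon_tv(proj_dense)                   # artifact-free dense volume
    error = fdk(forward_project(dense)) - dense    # estimated cone-beam artifacts
    return fdk_image - error                       # corrected FDK volume
```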
Fu, Jian; Hu, Xinhua; Velroyen, Astrid; Bech, Martin; Jiang, Ming; Pfeiffer, Franz
2015-01-01
Due to the potential of compact imaging systems with magnified spatial resolution and contrast, cone-beam x-ray differential phase-contrast computed tomography (DPC-CT) has attracted significant interest. The currently used FDK reconstruction algorithm with the Hilbert imaginary filter induces severe cone-beam artifacts when the cone-beam angle becomes large. In this paper, we propose an algebraic iterative reconstruction (AIR) method for cone-beam DPC-CT and report its experimental results. This approach considers the reconstruction process as the optimization of a discrete representation of the object function to satisfy a system of equations that describes the cone-beam DPC-CT imaging modality. Unlike conventional iterative algorithms for absorption-based CT, it applies the derivative operation to the forward projections of the reconstructed intermediate image to take into account the differential nature of the DPC projections. The method is based on the algebraic reconstruction technique, reconstructs the image ray by ray, and is expected to provide better derivative estimates in the iterations. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a mini-focus x-ray tube source. It is shown that the proposed method can reduce the cone-beam artifacts and performs better than FDK under large cone-beam angles. This algorithm is of interest for future cone-beam DPC-CT applications.
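One way to picture the derivative-aware iteration is as a gradient step on ||D(Ax) - d||², with D a detector-domain finite difference whose adjoint is approximated by the negative difference; this is a dense-matrix sketch, not the authors' ray-by-ray ART scheme.

```python
import numpy as np

def air_dpc_step(x, A, d_meas, step=0.1):
    """Gradient step for differential phase-contrast data.

    A is a dense system matrix purely for illustration (real code traces
    rays on the fly); np.gradient plays the role of the detector-domain
    derivative D, and -np.gradient approximates its adjoint D^T."""
    residual = np.gradient(A @ x) - d_meas            # D(Ax) - d
    x = x - step * (A.T @ (-np.gradient(residual)))   # x -= step * A^T D^T r
    return x
```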
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, P.; Tambasco, M.; LaFontaine, R.
2014-08-15
Our goal is to compare the dosimetric accuracy of the Pinnacle³ 9.2 Collapsed Cone Convolution Superposition (CCCS) and the iPlan 4.1 Monte Carlo (MC) and Pencil Beam (PB) algorithms in an anthropomorphic lung phantom, using measurement as the gold standard. Ion chamber measurements were taken for 6, 10, and 18 MV beams in a CIRS E2E SBRT anthropomorphic lung phantom, which mimics lung, spine, ribs, and tissue. The plan implemented six beams with a 5×5 cm² field size, delivering a total dose of 48 Gy. Data from the planning systems were computed at the treatment isocenter in the left lung and at two off-axis points, the spinal cord and the right lung. The measurements were taken using a pinpoint chamber. The best agreement between the algorithms and our measurements occurs at the treatment isocenter. For the 6, 10, and 18 MV beams, iPlan 4.1 MC performs best, with 0.3%, 0.2%, and 4.2% absolute percent difference from measurement, respectively. Differences between our measurements and the algorithms are much greater at the off-axis points. The best agreement seen for the right lung and spinal cord is 11.4% absolute percent difference, with 6 MV iPlan 4.1 PB and 18 MV iPlan 4.1 MC, respectively. As energy increases, the absolute percent difference from measured data increases, up to 54.8% for the 18 MV CCCS algorithm. This study suggests that iPlan 4.1 MC computes peripheral dose and target dose in the lung more accurately than the iPlan 4.1 PB and Pinnacle CCCS algorithms.
The Impact of Monte Carlo Dose Calculations on Intensity-Modulated Radiation Therapy
NASA Astrophysics Data System (ADS)
Siebers, J. V.; Keall, P. J.; Mohan, R.
The effect of dose calculation accuracy for IMRT was studied by comparing different dose calculation algorithms. A head and neck IMRT plan was optimized using a superposition dose calculation algorithm. Dose was re-computed for the optimized plan using both Monte Carlo and pencil beam dose calculation algorithms to generate patient and phantom dose distributions. Tumor control probabilities (TCP) and normal tissue complication probabilities (NTCP) were computed to estimate the plan outcome. For the treatment plan studied, Monte Carlo best reproduces phantom dose measurements, the TCP was slightly lower than the superposition and pencil beam results, and the NTCP values differed little.
Phase retrieval in generalized optical interferometry systems.
Farriss, Wesley E; Fienup, James R; Malhotra, Tanya; Vamivakas, A Nick
2018-02-05
Modal analysis of an optical field via generalized interferometry (GI) is a novel technique that treats the field as a linear superposition of transverse modes and recovers the amplitudes of the modal weighting coefficients. We use phase retrieval by nonlinear optimization to recover the phases of these modal weighting coefficients. Information diversity increases the robustness of the algorithm by better constraining the solution. Additionally, multiple sets of random starting phase values help the algorithm overcome local minima. The algorithm was able to recover nearly all coefficient phases for simulated fields consisting of up to 21 superposed Hermite-Gaussian modes from simulated data and proved to be resilient to shot noise.
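A sketch of the optimization described above: the known coefficient magnitudes are held fixed while the phases are fit to a measurement, restarting from several random phase sets to escape local minima. The intensity-matching loss and the L-BFGS-B choice are assumptions for illustration, not necessarily the authors' error metric.

```python
import numpy as np
from scipy.optimize import minimize

def recover_phases(modes, amps, measured_intensity, n_starts=10, rng=np.random):
    """Recover modal-coefficient phases by nonlinear optimization.

    modes: (n_modes, npix) complex transverse mode fields (e.g. Hermite-Gauss)
    amps:  (n_modes,) known coefficient magnitudes from the GI measurement."""
    def loss(phases):
        field = (amps * np.exp(1j * phases)) @ modes   # superpose the modes
        return np.sum((np.abs(field)**2 - measured_intensity)**2)

    best = None
    for _ in range(n_starts):           # multiple random starting phase sets
        res = minimize(loss, rng.uniform(0, 2 * np.pi, len(amps)),
                       method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return best.x
```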
A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT
Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan, Xiaochuan
2010-01-01
Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered-backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories. PMID:20175463
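The tandem structure reduces to four operators; a data-flow sketch with `bpf_chords`, `reproject` and `fbp_gap` as assumed callables for the geometry-specific steps.

```python
def bpf_fbp_tandem(data, bpf_chords, reproject, fbp_gap):
    """Two-step reverse-helix reconstruction (data flow only).

    Step 1: chord-based BPF on the original cone-beam data, which leaves
    the chordless central-gap region unreconstructed.
    Step 2: Pack-Noo-formula-based FBP of the gap region from the residual
    data after subtracting the reprojection of the step-1 image."""
    img_chords = bpf_chords(data)
    residual = data - reproject(img_chords)   # modified cone-beam data
    img_gap = fbp_gap(residual)
    return img_chords + img_gap
```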
NASA Astrophysics Data System (ADS)
Singh, Manu Pratap; Radhey, Kishori; Kumar, Sandeep
2017-08-01
In the present paper, simultaneous classification of Orange and Apple patterns has been carried out using both Grover's iterative algorithm (Grover 1996) and Ventura's model (Ventura and Martinez, Inf. Sci. 124, 273-296, 2000), taking different superpositions as search states: a two-pattern start state containing both Orange and Apple, a one-pattern start state containing Apple, and another one-pattern start state containing Orange. It has been shown that the exclusion superposition is the most suitable two-pattern search state for the simultaneous classification of the patterns associated with Apples and Oranges, and that the phase-invariance superpositions are the best choices for the respective search states based on one-pattern start states, in both Grover's and Ventura's methods of classification.
A reconstruction method for cone-beam differential x-ray phase-contrast computed tomography.
Fu, Jian; Velroyen, Astrid; Tan, Renbo; Zhang, Junwei; Chen, Liyuan; Tapfer, Arne; Bech, Martin; Pfeiffer, Franz
2012-09-10
Most existing differential phase-contrast computed tomography (DPC-CT) approaches are based on three kinds of scanning geometries, described by parallel-beam, fan-beam and cone-beam. Due to the potential of compact imaging systems with magnified spatial resolution, cone-beam DPC-CT has attracted significant interest. In this paper, we report a reconstruction method based on a back-projection filtration (BPF) algorithm for cone-beam DPC-CT. Due to the differential nature of the phase-contrast projections, the algorithm refrains from differentiating the projection data prior to back-projection, unlike BPF algorithms commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a micro-focus x-ray tube source. Moreover, the numerical simulation and experimental results demonstrate that the proposed method can handle several classes of truncated cone-beam datasets. We believe that this feature is of particular interest for future medical cone-beam phase-contrast CT imaging applications.
[Accurate 3D free-form registration between fan-beam CT and cone-beam CT].
Liang, Yueqiang; Xu, Hongbing; Li, Baosheng; Li, Hongsheng; Yang, Fujun
2012-06-01
Because of X-ray scatter, the CT numbers in cone-beam CT do not correspond exactly to electron densities. This results in registration error when an intensity-based registration algorithm is used to register planning fan-beam CT and cone-beam CT. In order to reduce the registration error, we have developed an accurate gradient-based registration algorithm. The gradient-based deformable registration problem is described as the minimization of an energy functional. Through the calculus of variations and the Gauss-Seidel finite difference method, we derived the iterative formula of the deformable registration. The algorithm was implemented on GPU through the OpenCL framework, which greatly reduced the registration time. Our experimental results showed that the proposed gradient-based registration algorithm registers clinical cone-beam CT and fan-beam CT images more accurately than the intensity-based algorithm. The GPU-accelerated algorithm meets the real-time requirement of online adaptive radiotherapy.
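A rough stand-in for the idea of matching gradient information instead of raw CT numbers: a 2D demons-style loop driven by gradient-magnitude images with Gaussian regularization. This is not the paper's variational Gauss-Seidel solver, and all step and smoothing parameters are assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def register_on_gradients(fixed, moving, n_iter=50, sigma=2.0, eps=1e-6):
    """Deformable registration driven by image-gradient magnitudes, so
    mismatched CT numbers between fan-beam and cone-beam CT do not bias
    the match. Returns the displacement field (uy, ux)."""
    f = np.hypot(*np.gradient(gaussian_filter(fixed.astype(float), 1.0)))
    m0 = np.hypot(*np.gradient(gaussian_filter(moving.astype(float), 1.0)))
    uy = np.zeros_like(f)
    ux = np.zeros_like(f)
    grid = np.mgrid[0:f.shape[0], 0:f.shape[1]].astype(float)
    for _ in range(n_iter):
        m = map_coordinates(m0, [grid[0] + uy, grid[1] + ux], order=1)
        gy, gx = np.gradient(m)
        diff = f - m
        denom = gy**2 + gx**2 + diff**2 + eps
        uy += diff * gy / denom          # demons update force
        ux += diff * gx / denom
        uy = gaussian_filter(uy, sigma)  # Gaussian regularization
        ux = gaussian_filter(ux, sigma)
    return uy, ux
```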
Zhu, Jinhan; Chen, Lixin; Chen, Along; Luo, Guangwen; Deng, Xiaowu; Liu, Xiaowei
2015-04-11
To use a graphics processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.
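The collapsed-cone convolution/superposition family of dose engines shares one core step: primary energy released per unit mass (TERMA) is spread by an energy-deposition kernel. A toy sketch of that step follows, with a made-up attenuation coefficient and an isotropic stand-in kernel rather than a true collapsed-cone kernel:

```python
import numpy as np
from scipy.ndimage import convolve

mu = 0.005                                   # attenuation per voxel (assumed)
depth = np.arange(64)
terma = np.exp(-mu * depth)                  # primary energy released vs depth
terma = np.tile(terma[:, None, None], (1, 16, 16))

# Crude isotropic point kernel standing in for the collapsed-cone kernel.
z, y, x = np.mgrid[-3:4, -3:4, -3:4]
kernel = np.exp(-1.5 * np.sqrt(x**2 + y**2 + z**2))
kernel /= kernel.sum()

dose = convolve(terma, kernel, mode="nearest")   # dose = TERMA (*) kernel
print(dose[:8, 8, 8].round(4))                   # central-axis dose, top voxels
```

A real CCCS engine collapses the kernel onto a discrete set of cone axes and ray-traces along them, which is what makes the GPU mapping described above effective.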
Ning, Ruola; Tang, Xiangyang; Conover, David; Yu, Rongfeng
2003-07-01
Cone beam computed tomography (CBCT) has been investigated in the past two decades due to its potential advantages over fan beam CT. These advantages include (a) great improvement in data acquisition efficiency, spatial resolution, and spatial resolution uniformity, (b) substantially better utilization of x-ray photons generated by the x-ray tube compared to a fan beam CT, and (c) significant advancement in clinical three-dimensional (3D) CT applications. However, most studies of CBCT in the past have focused on cone beam data acquisition theories and reconstruction algorithms. The recent development of x-ray flat panel detectors (FPD) has made CBCT imaging feasible and practical. This paper reports a newly built flat panel detector-based CBCT prototype scanner and presents the results of the preliminary evaluation of the prototype through a phantom study. The prototype consisted of an x-ray tube, a flat panel detector, a GE 8800 CT gantry, a patient table and a computer system. The prototype was constructed by modifying a GE 8800 CT gantry such that both a single-circle cone beam acquisition orbit and a circle-plus-two-arcs orbit can be achieved. With a circle-plus-two-arcs orbit, a complete set of cone beam projection data can be obtained, consisting of a set of circle projections and a set of arc projections. Using the prototype scanner, the set of circle projections was acquired by rotating the x-ray tube and the FPD together on the gantry, and the set of arc projections was obtained by tilting the gantry while the x-ray tube and detector were at the 12 and 6 o'clock positions, respectively. A filtered backprojection exact cone beam reconstruction algorithm based on a circle-plus-two-arcs orbit was used for cone beam reconstruction from both the circle and arc projections. The system was first characterized in terms of the linearity and dynamic range of the detector. Then the uniformity, spatial resolution and low contrast resolution were assessed using different phantoms mainly in the central plane of the cone beam reconstruction. Finally, the reconstruction accuracy of using the circle-plus-two-arcs orbit and its related filtered backprojection cone beam volume CT reconstruction algorithm was evaluated with a specially designed disk phantom. The results obtained using the new cone beam acquisition orbit and the related reconstruction algorithm were compared to those obtained using a single-circle cone beam geometry and Feldkamp's algorithm in terms of reconstruction accuracy. The results of the study demonstrate that the circle-plus-two-arcs cone beam orbit is achievable in practice. Also, the reconstruction accuracy of cone beam reconstruction is significantly improved with the circle-plus-two-arcs orbit and its related exact CB-FBP algorithm, as compared to using a single circle cone beam orbit and Feldkamp's algorithm.
Compressional and Shear Wakes in a 2D Dusty Plasma Crystal
NASA Astrophysics Data System (ADS)
Nosenko, V.; Goree, J.; Ma, Z. W.; Dubin, D. H. E.
2001-10-01
A 2D crystalline lattice can vibrate with two kinds of sound waves, compressional and shear (transverse), where the latter has a much slower sound speed. When these waves are excited by a moving supersonic disturbance, the superposition of the waves creates a Mach cone, i.e., a V-shaped wake. In our experiments, the supersonic disturbance was a moving spot of argon laser light, and this laser light applied a force, due to radiation pressure, on the particles. The beam was swept across the lattice in a controlled and repeatable manner. The particles were levitated in an argon rf discharge. By moving the laser spot faster than the shear sound speed c_t, but slower than the compressional sound speed c_l, we excited a shear wave Mach cone. Alternatively, by moving the laser spot faster than c_l, we excited both cones. In addition to Mach cones, we also observed a wake structure that arises from the compressional wave’s dispersion. We compare our results to Dubin’s theory (Phys. Plasmas 2000) and to molecular dynamics (MD) simulations.
A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.
2010-01-15
Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered-backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.
Region-of-interest image reconstruction in circular cone-beam microCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Seungryong; Bian, Junguo; Pelizzari, Charles A.
2007-12-15
Cone-beam microcomputed tomography (microCT) is one of the most popular choices for small animal imaging, which is becoming an important tool for studying animal models with transplanted diseases. Region-of-interest (ROI) imaging techniques in CT, which can reconstruct an ROI image from the projection data set of the ROI, can be used not only for reducing imaging-radiation exposure to the subject and scatter to the detector but also for potentially increasing spatial resolution of the reconstructed images. Increasing spatial resolution in microCT images can facilitate improved accuracy in many assessment tasks. A method proposed previously for increasing CT image spatial resolution entails the exploitation of the geometric magnification in cone-beam CT. Due to finite detector size, however, this method can lead to data truncation for a large geometric magnification. The Feldkamp-Davis-Kress (FDK) algorithm yields images with artifacts when truncated data are used, whereas the recently developed backprojection filtration (BPF) algorithm is capable of reconstructing ROI images without truncation artifacts from truncated cone-beam data. We apply the BPF algorithm to reconstructing ROI images from truncated data of three different objects acquired by our circular cone-beam microCT system. Reconstructed images by use of the FDK and BPF algorithms from both truncated and nontruncated cone-beam data are compared. The results of the experimental studies demonstrate that, from certain truncated data, the BPF algorithm can reconstruct ROI images with quality comparable to that reconstructed from nontruncated data. In contrast, the FDK algorithm yields ROI images with truncation artifacts. Therefore, an implication of the studies is that, when truncated data are acquired with a configuration of a large geometric magnification, the BPF algorithm can be used for effective enhancement of the spatial resolution of a ROI image.
Automated identification of cone photoreceptors in adaptive optics retinal images.
Li, Kaccie Y; Roorda, Austin
2007-05-01
In making noninvasive measurements of the human cone mosaic, the task of labeling each individual cone is unavoidable. Manual labeling is a time-consuming process, setting the motivation for the development of an automated method. An automated algorithm for labeling cones in adaptive optics (AO) retinal images is implemented and tested on real data. The optical fiber properties of cones aided the design of the algorithm. Out of 2153 manually labeled cones from six different images, the automated method correctly identified 94.1% of them. The agreement between the automated and the manual labeling methods varied from 92.7% to 96.2% across the six images. Results between the two methods disagreed for 1.2% to 9.1% of the cones. Voronoi analysis of large montages of AO retinal images confirmed the general hexagonal-packing structure of retinal cones as well as the general cone density variability across portions of the retina. The consistency of our measurements demonstrates the reliability and practicality of having an automated solution to this problem.
NOTE: A BPF-type algorithm for CT with a curved PI detector
NASA Astrophysics Data System (ADS)
Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping
2006-08-01
Helical cone-beam CT is used widely nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both longitudinal and transversal directions. In practical CT systems, detectors are expensive and account for a large share of the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.
A BPF-type algorithm for CT with a curved PI detector.
Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping
2006-08-21
Helical cone-beam CT is used widely nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both longitudinal and transversal directions. In practical CT systems, detectors are expensive and account for a large share of the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr; Clackdoyle, Rolf; Keuschnigg, Peter
Purpose: A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. Methods: The source and the detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center-of-rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp-Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. Results: The quality of the cone-beam CT images reconstructed using the new algorithm was similar to those using the Feldkamp algorithm which is used in conventional cone-beam CT. The real image of the head phantom exhibited comparable image quality to that of existing systems. Conclusions: The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner's unique capabilities in IGRT protocols.
Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong
2011-02-21
X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
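The weighting step described above lends itself to a short sketch: given basis images (in the paper, FBP reconstructions of the raw projections raised to different powers) and a low-scatter prior, the weights follow from a least-squares fit. The basis images below are synthetic stand-ins, not FBP reconstructions.

```python
import numpy as np

rng = np.random.default_rng(0)
prior = rng.random((32, 32))                 # low-scatter prior image
powers = [0.6, 0.8, 1.0, 1.2]                # illustrative exponents
basis = [prior ** p + 0.01 * rng.standard_normal(prior.shape) for p in powers]

A = np.stack([b.ravel() for b in basis], axis=1)   # one column per basis image
w, *_ = np.linalg.lstsq(A, prior.ravel(), rcond=None)

corrected = sum(wk * bk for wk, bk in zip(w, basis))
print("fitted weights:", w.round(3))
```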
Optimal Superpositioning of Flexible Molecule Ensembles
Gapsys, Vytautas; de Groot, Bert L.
2013-01-01
Analysis of the internal dynamics of a biological molecule requires the successful removal of overall translation and rotation. Particularly for flexible or intrinsically disordered peptides, this is a challenging task due to the absence of a well-defined reference structure that could be used for superpositioning. In this work, we started the analysis with a widely known formulation of an objective for the problem of superimposing a set of multiple molecules as variance minimization over an ensemble. A negative effect of this superpositioning method is the introduction of ambiguous rotations, where different rotation matrices may be applied to structurally similar molecules. We developed two algorithms to resolve the suboptimal rotations. The first approach minimizes the variance together with the distance of a structure to a preceding molecule in the ensemble. The second algorithm seeks for minimal variance together with the distance to the nearest neighbors of each structure. The newly developed methods were applied to molecular-dynamics trajectories and normal-mode ensembles of the Aβ peptide, RS peptide, and lysozyme. These new (to our knowledge) superpositioning methods combine the benefits of variance and distance between nearest-neighbor(s) minimization, providing a solution for the analysis of intrinsic motions of flexible molecules and resolving ambiguous rotations. PMID:23332072
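The variance-minimization objective that the paper starts from has a standard iterative form: alternately estimate the mean structure and refit each ensemble member to it. A minimal sketch follows; the paper's additional distance-to-neighbor terms, which resolve the ambiguous rotations, are omitted here.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation aligning centered P onto centered Q."""
    U, _, Vt = np.linalg.svd(P.T @ Q)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

def superpose_ensemble(coords, n_iter=20):
    """Iteratively minimize ensemble variance over rotations/translations."""
    coords = [c - c.mean(axis=0) for c in coords]   # remove translations
    mean = coords[0].copy()
    for _ in range(n_iter):
        coords = [c @ kabsch(c, mean).T for c in coords]
        mean = np.mean(coords, axis=0)
    return coords, mean

# Example: three noisy copies of one 50-atom structure
rng = np.random.default_rng(1)
base = rng.standard_normal((50, 3))
ensemble, mean = superpose_ensemble(
    [base + 0.05 * rng.standard_normal((50, 3)) for _ in range(3)])
```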
Extended volume coverage in helical cone-beam CT by using PI-line based BPF algorithm
NASA Astrophysics Data System (ADS)
Cho, Seungryong; Pan, Xiaochuan
2007-03-01
We compared the data requirements of filtered-backprojection (FBP) and backprojection-filtration (BPF) algorithms based on PI-lines in helical cone-beam CT. Since the filtration process in the FBP algorithm needs all the projection data of PI-lines for each view, the required detector size should be bigger than the size that can cover the Tam-Danielsson (T-D) window to avoid data truncation. The BPF algorithm, however, requires the projection data only within the T-D window, which means that a smaller detector size can be used than in FBP to reconstruct the same image. In other words, a longer helical pitch can be obtained by using the BPF algorithm without any truncation artifacts when a fixed detector size is given. The purpose of this work is to demonstrate numerically that extended volume coverage in helical cone-beam CT can be achieved by using the PI-line-based BPF algorithm.
Optimal simultaneous superpositioning of multiple structures with missing data.
Theobald, Douglas L; Steindel, Phillip A
2012-08-01
Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. dtheobald@brandeis.edu Supplementary data are available at Bioinformatics online.
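The EM idea is compact enough to sketch: the E-step imputes missing coordinates from the current mean structure, the M-step re-superposes and updates the mean. This is only the least-squares skeleton under assumed inputs; THESEUS itself adds maximum-likelihood covariance weighting.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation aligning centered P onto centered Q."""
    U, _, Vt = np.linalg.svd(P.T @ Q)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

def em_superpose(coords, mask, n_iter=30):
    """coords: list of (N, 3) arrays (missing rows hold placeholders);
    mask: list of (N,) bools, True where the point is observed."""
    est = [c - c[m].mean(axis=0) for c, m in zip(coords, mask)]
    mean = np.mean(est, axis=0)       # placeholders pollute this only briefly
    for _ in range(n_iter):
        for k, m in enumerate(mask):
            est[k][~m] = mean[~m]                     # E-step: impute missing
            est[k] -= est[k].mean(axis=0)
            est[k] = est[k] @ kabsch(est[k], mean).T  # refit rotation
        mean = np.mean(est, axis=0)                   # M-step: update mean
    return est, mean
```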
A national dosimetry audit for stereotactic ablative radiotherapy in lung.
Distefano, Gail; Lee, Jonny; Jafari, Shakardokht; Gouldstone, Clare; Baker, Colin; Mayles, Helen; Clark, Catharine H
2017-03-01
A UK national dosimetry audit was carried out to assess the accuracy of Stereotactic Ablative Body Radiotherapy (SABR) lung treatment delivery. This mail-based audit used an anthropomorphic thorax phantom containing nine alanine pellets positioned in the lung region for dosimetry, as well as EBT3 film in the axial plane for isodose comparison. Centres used their local planning protocol/technique, creating 27 SABR plans. A range of delivery techniques including conformal, volumetric modulated arc therapy (VMAT) and Cyberknife (CK) were used with six different calculation algorithms (collapsed cone, superposition, pencil-beam (PB), AAA, Acuros and Monte Carlo). The mean difference between measured and calculated dose (excluding PB results) was 0.4±1.4% for alanine and 1.4±3.4% for film. PB differences were -6.1% and -12.9%, respectively. The median of the absolute maximum isodose-to-isodose distances was 3 mm (-6 mm to +7 mm) and 5 mm (-10 mm to +19 mm) for the 100% and 50% isodose lines, respectively. Alanine and film is an effective combination for verifying dosimetric and geometric accuracy. There were some differences across dose algorithms, and geometric accuracy was better for VMAT and CK compared with conformal techniques. The alanine dosimetry results showed that planned and delivered doses were within ±3.0% for 25/27 SABR plans.
Image reconstruction in cone-beam CT with a spherical detector using the BPF algorithm
NASA Astrophysics Data System (ADS)
Zuo, Nianming; Zou, Yu; Jiang, Tianzi; Pan, Xiaochuan
2006-03-01
Both flat-panel detectors and cylindrical detectors have been used in CT systems for data acquisition. A cylindrical detector generally samples a transverse image plane more uniformly than a flat-panel detector. However, in the longitudinal dimension, cylindrical and flat-panel detectors offer similar sampling of the image space. In this work, we investigate a detector of spherical shape, which can yield uniform sampling of the 3D image space because the solid angle subtended by each individual detector bin remains unchanged. We have extended the backprojection-filtration (BPF) algorithm, which we have developed previously for cone-beam CT, to reconstruct images in cone-beam CT with a spherical detector. We also conduct computer-simulation studies to validate the extended BPF algorithm. Quantitative results in these numerical studies indicate that accurate images can be obtained from data acquired with a spherical detector by use of our extended BPF cone-beam algorithms.
Evaluation of six TPS algorithms in computing entrance and exit doses.
Tan, Yun I; Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun; Elliott, Alex
2014-05-08
Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%-3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison.
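Since several of these records quote gamma pass rates, a minimal 1D gamma-index sketch (global 3%/3 mm criterion) may be useful; clinical implementations use optimized 2D/3D searches and interpolation.

```python
import numpy as np

def gamma_1d(ref, ref_x, ev, ev_x, dd=0.03, dta=3.0):
    """Global gamma: dd is the dose criterion, dta the distance criterion (mm)."""
    ref, ev = np.asarray(ref, float), np.asarray(ev, float)
    norm = ref.max()                                  # global normalization
    gammas = []
    for d_r, x_r in zip(ref, ref_x):
        dist2 = ((ev_x - x_r) / dta) ** 2
        dose2 = ((ev - d_r) / (dd * norm)) ** 2
        gammas.append(np.sqrt(dist2 + dose2).min())   # best match over eval pts
    return np.array(gammas)

x = np.linspace(0, 100, 201)                          # positions in mm
ref = np.exp(-x / 120)                                # toy depth-dose curve
ev = np.exp(-(x - 0.5) / 120)                         # slightly shifted copy
print(f"pass rate: {(gamma_1d(ref, x, ev, x) <= 1).mean():.1%}")
```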
MultiSETTER: web server for multiple RNA structure comparison.
Čech, Petr; Hoksza, David; Svozil, Daniel
2015-08-12
Understanding the architecture and function of RNA molecules requires methods for comparing and analyzing their tertiary and quaternary structures. While structural superposition of short RNAs is achievable in a reasonable time, large structures represent a much bigger challenge. Therefore, we have developed a fast and accurate algorithm for RNA pairwise structure superposition called SETTER and implemented it in the SETTER web server. However, though biological relationships can be inferred by a pairwise structure alignment, key features preserved by evolution can be identified only from a multiple structure alignment. Thus, we extended the SETTER algorithm to the alignment of multiple RNA structures and developed the MultiSETTER algorithm. In this paper, we present the updated version of the SETTER web server that implements a user-friendly interface to the MultiSETTER algorithm. The server accepts RNA structures either as a list of PDB IDs or as user-defined PDB files. After the superposition is computed, structures are visualized in 3D and several reports and statistics are generated. To the best of our knowledge, the MultiSETTER web server is the first publicly available tool for multiple RNA structure alignment. The MultiSETTER server offers the visual inspection of an alignment in 3D space, which may reveal structural and functional relationships not captured by other multiple alignment methods based either on a sequence or on secondary structure motifs.
Geometric convex cone volume analysis
NASA Astrophysics Data System (ADS)
Li, Hsiao-Chi; Chang, Chein-I.
2016-05-01
Convexity is a major concept used to design and develop endmember finding algorithms (EFAs). For abundance unconstrained techniques, Pixel Purity Index (PPI) and Automatic Target Generation Process (ATGP), which use Orthogonal Projection (OP) as a criterion, are commonly used methods. For abundance partially constrained techniques, Convex Cone Analysis, which makes use of convex cones to impose the Abundance Non-negativity Constraint (ANC), is generally preferred. For abundance fully constrained techniques, N-FINDR and the Simplex Growing Algorithm (SGA) are the most popular methods, using simplex volume as a criterion to impose the ANC and the Abundance Sum-to-one Constraint (ASC). This paper analyzes an issue encountered in volume calculation, with a hyperplane introduced to illustrate the idea of a bounded convex cone. Geometric Convex Cone Volume Analysis (GCCVA) projects the boundary vectors of a convex cone orthogonally onto a hyperplane to reduce the effect of background signatures, and a geometric volume approach is applied to address the issue arising from volume calculation and further improve the performance of convex cone-based EFAs.
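The simplex-volume criterion mentioned above reduces to a determinant. A small sketch for p+1 endmembers in p-dimensional (already dimension-reduced) space:

```python
import numpy as np
from math import factorial

def simplex_volume(endmembers):
    """V = |det([e1-e0, ..., ep-e0])| / p! for a (p+1, p) endmember array."""
    e = np.asarray(endmembers, float)
    edges = e[1:] - e[0]
    return abs(np.linalg.det(edges)) / factorial(len(edges))

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # unit right triangle
print(simplex_volume(tri))                             # 0.5
```

N-FINDR-type searches repeatedly swap candidate pixels into the endmember set and keep a swap whenever this volume grows.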
Three-Dimensional Weighting in Cone Beam FBP Reconstruction and Its Transformation Over Geometries.
Tang, Shaojie; Huang, Kuidong; Cheng, Yunyong; Niu, Tianye; Tang, Xiangyang
2018-06-01
With a substantially increased number of detector rows in multidetector CT (MDCT), axial scanning with projection data acquired along a circular source trajectory has become the method of choice in an increasing number of clinical applications. Recognizing the practical relevance of image reconstruction directly from the projection data acquired in the native cone beam (CB) geometry, especially in scenarios wherein the highest achievable in-plane resolution is desirable, we present a three-dimensional (3-D) weighted CB-FBP algorithm in such geometry in this paper. We start the algorithm's derivation in the cone-parallel geometry. Via a change of variables, taking the Jacobian into account and making heuristic and empirical assumptions, we arrive at the formulas for 3-D weighted image reconstruction in the native CB geometry. Using projection data simulated by computer and acquired by an MDCT scanner, we evaluate and verify the performance of the proposed algorithm for image reconstruction directly from projection data acquired in the native CB geometry. The preliminary data show that the proposed algorithm performs as well as the 3-D weighted CB-FBP algorithm in the cone-parallel geometry. The proposed algorithm is anticipated to find its utility in extensive clinical and preclinical applications wherein the reconstruction of images in the native CB geometry, i.e., the geometry for data acquisition, is of relevance.
Optimal simultaneous superpositioning of multiple structures with missing data
Theobald, Douglas L.; Steindel, Phillip A.
2012-01-01
Motivation: Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually ‘missing’ from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Results: Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation–maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. Availability and implementation: The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22543369
Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina
2016-05-01
Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
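For reference, the Dice's coefficient quoted above is straightforward to compute; the sketch below uses the pixel-mask variant with synthetic masks (cone-detection studies typically compute it over matched cone pairs instead).

```python
import numpy as np

def dice(a, b):
    """Dice's coefficient between two boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto = np.zeros((8, 8), bool); auto[2:6, 2:6] = True      # algorithm detections
manual = np.zeros((8, 8), bool); manual[3:7, 2:6] = True  # expert grader
print(f"Dice: {dice(auto, manual):.3f}")                  # 0.75 for these masks
```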
Modeling IrisCode and its variants as convex polyhedral cones and its security implications.
Kong, Adams Wai-Kin
2013-03-01
IrisCode, developed by Daugman in 1993, is the most influential iris recognition algorithm. A thorough understanding of IrisCode is essential, because over 100 million persons have been enrolled by this algorithm and many biometric personal identification and template protection methods have been developed based on IrisCode. This paper indicates that a template produced by IrisCode or its variants is a convex polyhedral cone in a hyperspace. Its central ray, being a rough representation of the original biometric signal, can be computed by a simple algorithm, which can often be implemented in one Matlab command line. The central ray is an expected ray and also an optimal ray of an objective function on a group of distributions. This algorithm is derived from geometric properties of a convex polyhedral cone but does not rely on any prior knowledge (e.g., iris images). The experimental results show that biometric templates, including iris and palmprint templates, produced by different recognition methods can be matched through the central rays in their convex polyhedral cones and that templates protected by a method extended from IrisCode can be broken into. These experimental results indicate that, without a thorough security analysis, convex polyhedral cone templates cannot be assumed secure. Additionally, the simplicity of the algorithm implies that even junior hackers without knowledge of advanced image processing and biometric databases can still break into protected templates and reveal relationships among templates produced by different recognition methods.
NASA Technical Reports Server (NTRS)
Abdallah, Ayman A.; Barnett, Alan R.; Ibrahim, Omar M.; Manella, Richard T.
1993-01-01
Within the MSC/NASTRAN DMAP (Direct Matrix Abstraction Program) module TRD1, solving physical (coupled) or modal (uncoupled) transient equations of motion is performed using the Newmark-Beta or mode superposition algorithms, respectively. For equations of motion with initial conditions, only the Newmark-Beta integration routine has been available in MSC/NASTRAN solution sequences for solving physical systems and in custom DMAP sequences or alters for solving modal systems. In some cases, one difficulty with using the Newmark-Beta method is that the process of selecting suitable integration time steps for obtaining acceptable results is lengthy. In addition, when very small step sizes are required, a large amount of time can be spent integrating the equations of motion. For certain aerospace applications, a significant time savings can be realized when the equations of motion are solved using an exact integration routine instead of the Newmark-Beta numerical algorithm. In order to solve modal equations of motion with initial conditions and take advantage of efficiencies gained when using uncoupled solution algorithms (like that within TRD1), an exact mode superposition method using MSC/NASTRAN DMAP has been developed and successfully implemented as an enhancement to an existing coupled loads methodology at the NASA Lewis Research Center.
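For a single uncoupled mode, the exact update that such a routine relies on has a closed form. The sketch below handles an underdamped mode with the force held constant over each step; the coefficients follow from the standard damped-oscillator solution, and the DMAP implementation itself is not reproduced here.

```python
import numpy as np

def exact_modal_step(q, v, f, w, zeta, h):
    """Exact update of q'' + 2*zeta*w*q' + w**2*q = f over a step h,
    assuming f is constant during the step and zeta < 1."""
    wd = w * np.sqrt(1.0 - zeta**2)          # damped natural frequency
    qs = f / w**2                            # static (particular) solution
    c1 = q - qs
    c2 = (v + zeta * w * c1) / wd
    e = np.exp(-zeta * w * h)
    cos_, sin_ = np.cos(wd * h), np.sin(wd * h)
    q_new = e * (c1 * cos_ + c2 * sin_) + qs
    v_new = e * ((c2 * wd - zeta * w * c1) * cos_
                 - (c1 * wd + zeta * w * c2) * sin_)
    return q_new, v_new

q, v = 1.0, 0.0                              # initial conditions
for _ in range(100):
    q, v = exact_modal_step(q, v, f=0.0, w=2 * np.pi, zeta=0.02, h=0.01)
```

Because the update is exact for the assumed force variation, the step size is set by how well the force history is resolved, not by the stability or accuracy of a numerical integrator.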
Fast, large-scale hologram calculation in wavelet domain
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi
2018-04-01
We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.
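The shrinkage idea can be sketched mechanically: a hologram point-source response is compressed in the wavelet domain by keeping only the largest coefficients, so superposing many points touches few coefficients each. This sketch (using PyWavelets, with made-up optical parameters) demonstrates only the compression step, not the full WASABI pipeline:

```python
import numpy as np
import pywt  # PyWavelets

N, wl, z, p = 256, 532e-9, 0.1, 8e-6            # grid, wavelength, depth, pitch
y, x = (np.mgrid[:N, :N] - N / 2) * p
psf = np.cos(np.pi * (x**2 + y**2) / (wl * z))  # real-valued zone-plate pattern

coeffs = pywt.wavedec2(psf, "db4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)
keep = np.abs(arr) > np.quantile(np.abs(arr), 0.99)   # keep top 1% (shrinkage)
arr = np.where(keep, arr, 0.0)
approx = pywt.waverec2(
    pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "db4")
print("kept coefficients:", int(keep.sum()), "of", arr.size)
```

How well a given kernel compresses depends on its fringe density, so the achievable speed-up varies with depth and pixel pitch.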
Image reconstruction from cone-beam projections with attenuation correction
NASA Astrophysics Data System (ADS)
Weng, Yi
1997-07-01
In single photon emission computed tomography (SPECT) imaging, photon attenuation within the body is a major factor contributing to the quantitative inaccuracy in measuring the distribution of radioactivity. Cone-beam SPECT provides improved sensitivity for imaging small organs. This thesis extends the results for 2D parallel-beam and fan-beam geometries to 3D parallel-beam and cone-beam geometries in order to derive filtered backprojection reconstruction algorithms for the 3D exponential parallel-beam transform and for the exponential cone-beam transform with sampling on a sphere. An exact inversion formula for the 3D exponential parallel-beam transform is obtained and is extended to the 3D exponential cone-beam transform. Sampling on a sphere is not useful clinically, and current cone-beam tomography, with the focal point traversing a planar orbit, does not acquire sufficient data to give an accurate reconstruction. Thus a data acquisition method was developed that obtains complete data for cone-beam SPECT by simultaneously rotating the gamma camera and translating the patient bed, so that cone-beam projections can be obtained with the focal point traversing a helix that surrounds the patient. First, an implementation of Grangeat's algorithm for helical cone-beam projections was developed without attenuation correction. A fast new rebinning scheme was developed that uses all of the detected data to reconstruct the image and properly normalizes any multiply scanned data. In the case of attenuation, no theorem analogous to Tuy's has been proven. We hypothesized that an artifact-free reconstruction could be obtained even if the cone-beam data are attenuated, provided the imaging orbit satisfies Tuy's condition and the exact attenuation map is known. Cone-beam emission data were acquired by using a circle-and-line and a helix orbit on a clinical SPECT system. An iterative conjugate gradient reconstruction algorithm was used to reconstruct projection data with a known attenuation map. The quantitative accuracy of the attenuation-corrected emission reconstruction was significantly improved.
Exact BPF and FBP algorithms for nonstandard saddle curves.
Yu, Hengyong; Zhao, Shiying; Ye, Yangbo; Wang, Ge
2005-11-01
A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. Particularly, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both the algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both the algorithms exhibit similar noise characteristics.
The Quantum Binding Problem in the Context of Associative Memory
Wichert, Andreas
2016-01-01
We present a method to solve the binding problem by using a quantum algorithm for the retrieval of associations from associative memory during visual scene analysis. The problem is solved by mapping the information representing different objects into superposition by using entanglement and Grover’s amplification algorithm. PMID:27603782
Superposition of polarized waves at layered media: theoretical modeling and measurement
NASA Astrophysics Data System (ADS)
Finkele, Rolf; Wanielik, Gerd
1997-12-01
The detection of ice layers on road surfaces is a crucial requirement for a system that is designed to warn vehicle drivers of hazardous road conditions. In the millimeter wave regime at 76 GHz, the dielectric constants of ice and conventional road surface materials (i.e. asphalt, concrete) are found to be nearly equal. Thus, if the layer of ice is very thin and therefore has the same roughness profile as the underlying road surface, it cannot be securely detected using conventional algorithmic approaches. The method introduced in this paper extends and applies the theoretical work of Pancharatnam on the superposition of polarized waves. The projection of the Stokes vectors onto the Poincare sphere traces a circle as the thickness of the ice layer varies. The paper presents a method that utilizes the concept of wave superposition to detect this trace even if it is corrupted by stochastic variation due to rough surface scattering. Measurement results taken under real traffic conditions prove the validity of the proposed algorithms. Classification results are presented and discussed.
Evaluation of six TPS algorithms in computing entrance and exit doses
Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun P.; Elliott, Alex
2014-01-01
Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%-3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison. PACS numbers: 87.55.-x, 87.55.D-, 87.55.N-, 87.53.Bn PMID:24892349
Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm*
Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan
2010-01-01
The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in the non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has an improved noise property, while retaining the advantages of the original BPF algorithm such as minimum data requirement. PMID:20617122
Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm.
Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan
2010-02-01
The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in the non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has an improved noise property, while retaining the advantages of the original BPF algorithm such as minimum data requirement.
Computational algebraic geometry for statistical modeling FY09Q2 progress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, David C.; Rojas, Joseph Maurice; Pebay, Philippe Pierre
2009-03-01
This is a progress report on polynomial system solving for statistical modeling. This quarter we have developed our first model of shock response data and an algorithm for identifying the chamber cone containing a polynomial system in n variables with n+k terms within polynomial time - a significant improvement over previous algorithms, all having exponential worst-case complexity. We have implemented and verified the chamber cone algorithm for n+3 and are working to extend the implementation to handle arbitrary k. Later sections of this report explain chamber cones in more detail; the next section provides an overview of the project and how the current progress fits into it.
Scintillator performance considerations for dedicated breast computed tomography
NASA Astrophysics Data System (ADS)
Vedantham, Srinivasan; Shi, Linxi; Karellas, Andrew
2017-09-01
Dedicated breast computed tomography (BCT) is an emerging clinical modality that can eliminate tissue superposition and has the potential for improved sensitivity and specificity for breast cancer detection and diagnosis. It is performed without physical compression of the breast. Most of the dedicated BCT systems use large-area detectors operating in cone-beam geometry and are referred to as cone-beam breast CT (CBBCT) systems. The large-area detectors in CBBCT systems are energy-integrating, indirect-type detectors employing a scintillator that converts x-ray photons to light, followed by detection of optical photons. A key consideration that determines the image quality achieved by such CBBCT systems is the choice of scintillator and its performance characteristics. In this work, a framework for analyzing the impact of the scintillator on CBBCT performance and its use for task-specific optimization of CBBCT imaging performance is described.
Correction of scatter in megavoltage cone-beam CT
NASA Astrophysics Data System (ADS)
Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.
2001-03-01
The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.
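The kernel-superposition step can be sketched as a fixed-point iteration: scatter is modeled as the estimated primary blurred by a broad kernel and subtracted. A Gaussian stands in here for the Monte Carlo kernels adapted to the EPID response and phantom size, and the scatter fraction is invented:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_scatter(measured, spf=0.3, sigma=25.0, n_iter=5):
    """Iteratively remove scatter modeled as spf * blur(primary)."""
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = spf * gaussian_filter(primary, sigma)
        primary = measured - scatter
    return primary

img = np.ones((128, 128))
img[32:96, 32:96] = 0.4                # toy phantom transmission image
print(correct_scatter(img).mean().round(3))
```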
Exact BPF and FBP algorithms for nonstandard saddle curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu Hengyong; Zhao Shiying; Ye Yangbo
2005-11-15
A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. Particularly, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both the algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both the algorithms exhibit similar noise characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr
Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker would yield cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of the work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with the scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from the partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry considering data redundancy such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan without using any blocker motion. Additionally, a scatter correction method and a noise reduction scheme have been developed. The authors have performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs the images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and an enhancement of the image contrast have been demonstrated. The image contrast has increased by a factor of about 2, and the image accuracy in terms of root-mean-square-error with respect to the fan-beam CT image has increased by more than 30%. Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option to various applications, particularly including head-and-neck scans.
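The scatter-estimation part of such blocker schemes is simple to sketch: detector pixels behind the opaque strips record (nearly) scatter only, and scatter elsewhere is interpolated from them and subtracted. Strip positions and signal levels below are illustrative:

```python
import numpy as np

def estimate_scatter_1d(row, blocked):
    """Interpolate scatter across a detector row from shadowed pixels."""
    cols = np.arange(row.size)
    return np.interp(cols, cols[blocked], row[blocked])

row = np.full(256, 1.2)                # primary (1.0) + scatter (0.2), toy
blocked = np.zeros(256, bool)
blocked[::32] = True                   # beam-blocker strip positions (assumed)
row[blocked] = 0.2                     # behind strips: scatter only
scatter = estimate_scatter_1d(row, blocked)
primary = row - scatter                # scatter-corrected projection row
```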
Graded-Index Optics are Matched to Optical Geometry in the Superposition Eyes of Scarab Beetles
NASA Astrophysics Data System (ADS)
McIntyre, P.; Caveney, S.
1985-11-01
Detailed measurements were made of the gradients of refractive index (g.r.i.) and relevant optical properties of the lens components in the ventral superposition eyes of three crepuscular species of the dung-beetle genus Onitis (Scarabaeinae). Each ommatidial lens has two components, a corneal facet and a crystalline cone; in both of these, the gradients provide a significant proportion of the refractive power. The spatial relationship between the lenses and the retina (optical geometry) was also determined. A computer ray-trace model based on these data was used to analyse the optical properties of the lenses and of the eye as a whole. Ray traces were done in two and three dimensions. The ommatidial lenses in all three species are afocal g.r.i. telescopes of low angular magnification. Parallel incident rays emerge approximately parallel for all angles of incidence up to the maximum. The superposition image of a distant point source is a small patch of light about the size of a rhabdom. There are obvious differences in the lens properties of the three species, most significantly in the shape of the refractive-index gradients in the crystalline cone, in the extent of the g.r.i. region in the two lens components and in the front-surface curvature of the corneal facet lens. These give rise to different angular magnifications M of the ommatidial lenses, the values for the three species being 1.7, 1.3, 1.0. This variation in M is matched by a variation in optical geometry, most evident in the different clear-zone widths. As a result, the level of the best superposition image lies close to the retina in the model eyes of all three species. The angular magnification also sets the maximum aperture or pupil of the eye and hence the brightness of the image on the retina. The smaller M, the larger the aperture and the brighter the image. By adopting a suitable value for M and the appropriate eye geometry, an eye can set image brightness and hence sensitivity within a certain range. Differences in the eye design are related to when the beetles fly at dusk. Flight experiments comparing two of the species show that the species with the higher value for M and corresponding lower sensitivity, initiates and terminates its flight earlier in the dusk than the other species with 2.8 times the sensitivity.
Optical threshold secret sharing scheme based on basic vector operations and coherence superposition
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen
2015-04-01
We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with the (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is first divided into n shadow images by use of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherence superposition of any two shadow images. Compared with published encryption techniques, which focus narrowly on information encryption, the proposed method can realize information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents power from being kept centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.
NASA Astrophysics Data System (ADS)
Yoon, Kyung-Beom; Park, Won-Hee
2015-04-01
The convective heat transfer coefficient and surface emissivity before and after flame occurrence on a wood specimen surface and the flame heat flux were estimated using the repulsive particle swarm optimization algorithm and cone heater test results. The cone heater specified in the ISO 5660 standards was used, and six cone heater heat fluxes were tested. Preservative-treated Douglas fir 21 mm in thickness was used as the wood specimen in the tests. This study confirmed that the surface temperature of the specimen, which was calculated using the convective heat transfer coefficient, surface emissivity and flame heat flux on the wood specimen by a repulsive particle swarm optimization algorithm, was consistent with the measured temperature. Considering the measurement errors in the surface temperature of the specimen, the applicability of the optimization method considered in this study was evaluated.
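A plain particle-swarm sketch of the parameter fit described above follows; the repulsive variant adds a repulsion term between particles, omitted here, and the objective is a stand-in for the heat-conduction model misfit, with all bounds invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def misfit(params):
    """Stand-in objective; the real one compares modeled vs measured
    surface temperature for (h_conv, emissivity, flame_flux)."""
    target = np.array([15.0, 0.9, 20.0])       # hypothetical true values
    return float(np.sum((params - target) ** 2))

n, dim = 30, 3
lo = np.array([1.0, 0.1, 0.0])
hi = np.array([50.0, 1.0, 100.0])
x = rng.uniform(lo, hi, (n, dim))
v = np.zeros((n, dim))
pbest, pval = x.copy(), np.array([misfit(p) for p in x])

for _ in range(200):
    g = pbest[pval.argmin()]                   # global best particle
    r1, r2 = rng.random((2, n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([misfit(p) for p in x])
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]

print("best parameters:", pbest[pval.argmin()].round(2))
```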
Intelligent Network-Centric Sensors Development Program
2012-07-31
Image sensor configurations: cone 360-degree LWIR PFx sensor; cone 360-degree MWIR sensor; cone 360-degree SWIR video sensor. 2. Reasoning Process to Match Sensor Systems to Algorithms. The ontological ... effects of coherent imaging because of aberrations. Another reason is the specular nature of active imaging. Both contribute to the nonuniformity
A Complete Description of Cones and Polytopes Including Hypervolumes of All Facets of a Polytope
ERIC Educational Resources Information Center
Jubete, F.; Castillo, E.
2007-01-01
In this paper methods and algorithms for identifying the main elements (edges and facets of any dimension) of a cone and a polytope, and calculating the corresponding hypervolumes are presented. The cones and polytopes are supposed to be given as the non-negative linear combination and the convex hull generated by a, not necessarily minimal, set…
Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua
2011-01-01
A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out based on the single-speed structure, in which all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Unlike existing algorithms, the updating rates of the coning and sculling compensations are unrelated to the number of gyro incremental-angle samples and the number of accelerometer incremental-velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows the updating rate of the coning and sculling compensation to be increased, with more gyro incremental-angle and accelerometer incremental-velocity samples, in order to improve the accuracy of the system. Then, in order to implement the new strapdown algorithm in a single FPGA chip, the parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform using fighter aircraft data. It is shown that this parallel strapdown algorithm on the FPGA platform greatly decreases the execution time of the algorithm, meeting the real-time and high-precision requirements of the system in highly dynamic environments, relative to the existing DSP implementation. PMID:22164058
Enhanced calculation of eigen-stress field and elastic energy in atomistic interdiffusion of alloys
NASA Astrophysics Data System (ADS)
Cecilia, José M.; Hernández-Díaz, A. M.; Castrillo, Pedro; Jiménez-Alonso, J. F.
2017-02-01
The structural evolution of alloys is affected by the elastic energy associated with eigen-stress fields. However, efficient calculation of the elastic energy in evolving geometries remains a great challenge for promising atomistic simulation techniques such as Kinetic Monte Carlo (KMC) methods. In this paper, we report two complementary algorithms: one to calculate the eigen-stress field by linear superposition (LSA, the Linear Superposition Algorithm) and one to calculate the elastic energy modification in atomistic interdiffusion of alloys (the Atom Exchange Elastic Energy Evaluation (AE4) Algorithm). LSA is shown to be appropriate for fast incremental stress calculation in highly nanostructured materials, whereas AE4 provides the required input for KMC and, additionally, can be used to evaluate the accuracy of the eigen-stress field calculated by LSA. Consequently, both are suitable for on-the-fly use with KMC. Both algorithms are massively parallel by definition and thus well suited to parallelization on modern Graphics Processing Units (GPUs). Our computational studies confirm that we can obtain significant improvements compared to conventional Finite Element Methods, and the utilization of GPUs opens up new possibilities for the development of these methods in atomistic simulation of materials.
Shor's quantum factoring algorithm on a photonic chip.
Politi, Alberto; Matthews, Jonathan C F; O'Brien, Jeremy L
2009-09-04
Shor's quantum factoring algorithm finds the prime factors of a large number exponentially faster than any other known method, a task that lies at the heart of modern information security, particularly on the Internet. This algorithm requires a quantum computer, a device that harnesses the massive parallelism afforded by quantum superposition and entanglement of quantum bits (or qubits). We report the demonstration of a compiled version of Shor's algorithm on an integrated waveguide silica-on-silicon chip that guides four single-photon qubits through the computation to factor 15.
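To make the compiled demonstration concrete: the quantum part of Shor's algorithm finds the order r of a chosen base a modulo N, and everything after that is classical number theory. A minimal sketch of that classical post-processing, applied to the N = 15 instance of the experiment with a = 7 as an illustrative choice of base:

```python
from math import gcd

def shor_postprocess(N, a, r):
    """Classical post-processing of Shor's algorithm: given the order r of
    a modulo N (found by the quantum period-finding subroutine), derive
    nontrivial factors of N via gcd(a^(r/2) +/- 1, N)."""
    if r % 2:                       # need an even order
        return None
    y = pow(a, r // 2, N)
    if y == N - 1:                  # a^(r/2) = -1 mod N gives only trivial factors
        return None
    return gcd(y - 1, N), gcd(y + 1, N)

# With a = 7 the order modulo 15 is 4: 7^1=7, 7^2=4, 7^3=13, 7^4=1 (mod 15).
print(shor_postprocess(15, 7, 4))   # (3, 5)
```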
NASA Astrophysics Data System (ADS)
Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr
2017-12-01
There are a number of powerful total variation (TV) regularization methods that show great promise for limited-data cone-beam CT reconstruction with enhanced image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values of each individual parameter is suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements an edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better, with fewer sensitive parameters to tune.
On sufficient statistics of least-squares superposition of vector sets.
Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M
2015-06-01
The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of superpositions (and sufficient statistics) of vector sets that are composed from its constituent vector sets under addition or deletion operation, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of the methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
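The additivity the article exploits is easy to demonstrate: the optimal rotation and RMSD depend on the data only through a handful of sums, which can be merged in constant time when sets are combined. The numpy sketch below uses the standard SVD (Kabsch) solution for the rotation; it is an independent illustration under that assumption, not the authors' C++ library.

```python
import numpy as np

def suff_stats(X, Y):
    """Additive sufficient statistics for least-squares superposition of the
    corresponding vector sets X, Y (each n x 3)."""
    return {"n": len(X), "sx": X.sum(0), "sy": Y.sum(0),
            "C": X.T @ Y, "x2": (X ** 2).sum(), "y2": (Y ** 2).sum()}

def merge(a, b):
    """Statistics of the union of two sets: plain addition, no re-scan of points."""
    return {k: a[k] + b[k] for k in a}

def superpose(s):
    """Optimal rotation (Kabsch) and RMSD computed from the statistics alone."""
    n, mx, my = s["n"], s["sx"] / s["n"], s["sy"] / s["n"]
    C = s["C"] - np.outer(s["sx"], s["sy"]) / n       # centered cross-covariance
    U, S, Vt = np.linalg.svd(C)
    d = np.sign(np.linalg.det(U @ Vt))                # avoid reflections
    S[-1] *= d
    e2 = (s["x2"] - n * mx @ mx) + (s["y2"] - n * my @ my) - 2 * S.sum()
    return Vt.T @ np.diag([1, 1, d]) @ U.T, np.sqrt(max(e2, 0) / n)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true = Q * np.sign(np.linalg.det(Q))                # force a proper rotation
Y = X @ R_true.T + np.array([1.0, -2.0, 0.5])
s = merge(suff_stats(X[:60], Y[:60]), suff_stats(X[60:], Y[60:]))  # constant-time merge
R, rmsd = superpose(s)                                 # rmsd ~ 0, R ~ R_true
```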
Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe
2014-01-01
Purpose: To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods: Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results: The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions: The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic. PMID:25203681
Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe
2014-01-01
To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic.
Classification of ligand molecules in PDB with graph match-based structural superposition.
Shionyu-Mitsuyama, Clara; Hijikata, Atsushi; Tsuji, Toshiyuki; Shirai, Tsuyoshi
2016-12-01
The fast heuristic graph match algorithm for small molecules, COMPLIG, was improved by adding a structural superposition process to verify the atom-atom matching. The modified method was used to classify the small-molecule ligands in the Protein Data Bank (PDB) by their three-dimensional structures, and 16,660 types of ligands in the PDB were classified into 7561 clusters. In contrast, classification by the previous method (without structure superposition) generated 3371 clusters from the same ligand set. The characteristic feature of the current classification system is the increased number of singleton clusters, which contain only one ligand molecule each. Inspection of the ligands that are singletons in the current classification system but not in the previous one implied that the major factors for their isolation were differences in chirality, cyclic conformations, separation of substructures, and bond length. Comparisons between the current and previous classification systems revealed that the superposition-based classification was effective in clustering functionally related ligands, such as drugs targeted to specific biological processes, owing to the strictness of the atom-atom matching.
Cone-beam reconstruction for the two-circles-plus-one-line trajectory
NASA Astrophysics Data System (ADS)
Lu, Yanbin; Yang, Jiansheng; Emerson, John W.; Mao, Heng; Zhou, Tie; Si, Yuanzheng; Jiang, Ming
2012-05-01
The Kodak Image Station In-Vivo FX has an x-ray module with cone-beam configuration for radiographic imaging but lacks the functionality of tomography. To introduce x-ray tomography into the system, we choose the two-circles-plus-one-line trajectory by mounting one translation motor and one rotation motor. We establish a reconstruction algorithm by applying the M-line reconstruction method. Numerical studies and preliminary physical phantom experiment demonstrate the feasibility of the proposed design and reconstruction algorithm.
SU-E-T-252: Developing a Pencil Beam Dose Calculation Algorithm for CyberKnife System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, B; Duke University Medical Center, Durham, NC; Liu, B
2015-06-15
Purpose: Currently there are two dose calculation algorithms available in the CyberKnife planning system: ray-tracing and Monte Carlo, which are either not accurate or too time-consuming for the irregular fields shaped by the recently introduced MLC. The purpose of this study is to develop a fast and accurate pencil beam dose calculation algorithm which can handle irregular fields. Methods: A pencil beam dose calculation algorithm widely used in linac systems was modified. The algorithm models both primary (short-range) and scatter (long-range) components with a single input parameter: TPR20/10. The TPR20/10 value was first estimated to derive an initial set of pencil beam model parameters (PBMP). The agreement between predicted and measured TPRs for all cones was evaluated using the root mean square of the difference (RMSTPR), which was then minimized by adjusting the PBMPs. The PBMPs were further tuned to minimize the OCR RMS (RMSocr) by focusing on the out-of-field region. Finally, an arbitrary intensity profile was optimized by minimizing the RMSocr difference in the in-field region. To test model validity, the PBMPs were obtained by fitting to only a subset of cones (4) and applied to all cones (12) for evaluation. Results: With RMS values normalized to dmax and all cones combined, the average RMSTPR in the build-up and descending regions is 2.3% and 0.4%, respectively. The RMSocr in the in-field, penumbra and out-of-field regions is 1.5%, 7.8% and 0.6%, respectively. The average DTA in the penumbra region is 0.5 mm. No trend was found in TPR or OCR agreement among cones or depths. Conclusion: We have developed a pencil beam algorithm for the CyberKnife system. The prediction agrees well with commissioning data. Only a subset of measurements is needed to derive the model. Further improvements are needed for the TPR build-up region and the OCR penumbra. Experimental validation on MLC-shaped irregular fields needs to be performed. This work was partially supported by the National Natural Science Foundation of China (61171005) and the China Scholarship Council (CSC).
Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.
2013-01-01
Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
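The density-scaling simplification examined above is compact enough to state in code. The sketch below scales a water kernel by radiological distance along a single collapsed-cone ray; the kernel shape and the lung-like density profile are invented for illustration, and real C/S implementations apply this over many cone directions in 3D.

```python
import numpy as np

def deposit_along_ray(kernel_w, density, dx):
    """Density-scale a water EDK along one collapsed-cone ray.

    kernel_w(r): energy deposited per unit length at water-equivalent depth r.
    density:     voxel mass densities along the ray (relative to water).
    Returns the energy deposited in each voxel: the kernel is looked up at the
    radiological (density-scaled) distance and weighted by local density.
    """
    r_rad = np.cumsum(density) * dx - 0.5 * density * dx   # mid-voxel radiological depth
    return kernel_w(r_rad) * density * dx

kernel_w = lambda r: np.exp(-0.2 * r) / (r + 0.5) ** 2     # toy monotone water kernel
water = deposit_along_ray(kernel_w, np.ones(40), 0.25)
lung = deposit_along_ray(kernel_w, np.r_[np.ones(10), 0.26 * np.ones(20), np.ones(10)], 0.25)
# In the low-density region the kernel is stretched: energy travels farther,
# which mimics -- but only approximates -- true transport in heterogeneous media.
```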
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serin, E.; Codel, G.; Mabhouti, H.
Purpose: In small-field geometries, electronic equilibrium can be lost, making it challenging for a dose-calculation algorithm to accurately predict the dose, especially in the presence of tissue heterogeneities. In this study, the dosimetric accuracy of the Monte Carlo (MC) advanced dose calculation and sequential algorithms of the Multiplan treatment planning system was investigated for small radiation fields incident on homogeneous and heterogeneous geometries. Methods: Small open fields of the fixed cones of a CyberKnife M6 unit, 100 to 500 mm2, were used for this study. The fields were incident on an in-house phantom containing lung, air and bone inhomogeneities, and also on a homogeneous phantom. Using the same film batch, the net OD to dose calibration curve was obtained using CK with the 60 mm fixed cone by delivering 0-800 cGy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. The dosimetric accuracy of the MC and sequential algorithms in the presence of the inhomogeneities was compared against EBT3 film dosimetry. Results: Open-field tests in a homogeneous phantom showed good agreement between the two algorithms and film measurement. For the MC algorithm, the minimum gamma analysis passing rates between measured and calculated dose distributions were 99.7% and 98.3% for homogeneous and inhomogeneous fields in the case of lung and bone, respectively. For the sequential algorithm, the minimum gamma analysis passing rates were 98.9% and 92.5% for homogeneous and inhomogeneous fields, respectively, for all cone sizes used. In the case of the air heterogeneity, the differences were larger for both calculation algorithms. Overall, when compared to measurement, MC had better agreement than the sequential algorithm. Conclusion: The Monte Carlo calculation algorithm in the Multiplan treatment planning system is an improvement over the existing sequential algorithm. Dose discrepancies were observed in the presence of air inhomogeneities.
Exact consideration of data redundancies for spiral cone-beam CT
NASA Astrophysics Data System (ADS)
Lauritsch, Guenter; Katsevich, Alexander; Hirsch, Michael
2004-05-01
In multi-slice spiral computed tomography (CT) there is an obvious trend toward adding more and more detector rows. The goals are numerous: volume coverage, isotropic spatial resolution, and speed. Consequently, there will be a variety of scan protocols optimized for clinical applications. Flexibility in table feed requires consideration of data redundancies to ensure efficient detector usage. Until recently this was achieved by approximate reconstruction algorithms only. However, due to the increasing cone angles there is a need for exact treatment of the cone-beam geometry. A new, exact and efficient 3-PI algorithm for considering three-fold data redundancies was derived from a general theoretical framework based on 3D Radon inversion using Grangeat's formula. The 3-PI algorithm possesses a simple and efficient structure, like the 1-PI method for non-redundant data previously proposed. Filtering is one-dimensional, performed along lines with variable tilt on the detector. This talk deals with a thorough evaluation of the performance of the 3-PI algorithm in comparison to the 1-PI method. Image quality of the 3-PI algorithm is superior. The prominent spiral artifacts and other discretization artifacts are significantly reduced due to averaging effects when taking redundant data into account, and the signal-to-noise ratio is increased. The computational expense is comparable even to that of approximate algorithms. The 3-PI algorithm proves its practicability for applications in medical imaging. Other exact n-PI methods for n-fold data redundancies (n odd) can be deduced from the same general theoretical framework.
WE-G-18A-03: Cone Artifacts Correction in Iterative Cone Beam CT Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, H; Folkerts, M; Jiang, S
Purpose: For iterative reconstruction (IR) in cone-beam CT (CBCT) imaging, data truncation along the superior-inferior (SI) direction causes severe cone artifacts in the reconstructed CBCT volume images. Not only does it reduce the effective SI coverage of the reconstructed volume, it also hinders the convergence of the IR algorithm. This is a particular problem for regularization-based IR, where smoothing-type regularization operations tend to propagate the artifacts over a large area. Our purpose is to develop a practical cone artifacts correction solution. Methods: We found that it is the missing data residing in the truncated cone area that leads to inconsistency between the calculated forward projections and the measured projections. We overcome this problem by using an FDK-type reconstruction to estimate the missing data and by designing weighting factors to compensate for the inconsistency caused by the missing data. We validate the proposed methods in our multi-GPU low-dose CBCT reconstruction system on multiple patients' datasets. Results: Compared to FDK reconstruction with full datasets, while IR is able to reconstruct CBCT images using a subset of the projection data, the severe cone artifacts degrade overall image quality. For a head-neck case under full-fan mode, 13 out of 80 slices are contaminated. The problem is even more severe in a pelvis case under half-fan mode, where 36 out of 80 slices are affected, leading to inferior soft-tissue delineation. By applying the proposed method, the cone artifacts are effectively corrected, with the mean intensity difference for the contaminated slices decreased from ∼497 HU to ∼39 HU. Conclusion: A practical and effective solution for cone artifacts correction is proposed and validated in a CBCT IR algorithm. This study is supported in part by NIH (1R01CA154747-01).
SU-G-TeP1-08: LINAC Head Geometry Modeling for Cyber Knife System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, B; Li, Y; Liu, B
Purpose: Knowledge of the LINAC head geometry is critical for model-based dose calculation algorithms. However, the geometries are difficult to measure precisely. The purpose of this study is to develop linac head models for the CyberKnife system (CKS). Methods: For CKS, the commissioning data were measured in water at 800 mm SAD. The measured full width at half maximum (FWHM) for each cone was found to be greater than the nominal value; this was further confirmed by additional film measurement in air. Diameter correction, cone shift and source shift models (DCM, CSM and SSM) are proposed to account for the differences. In DCM, a cone-specific correction is applied. For CSM and SSM, a single shift is applied to the cone or source physical position. All three models were validated with an in-house developed pencil beam dose calculation algorithm, and further evaluated by the collimator scatter factor (Sc) correction. Results: The mean square error (MSE) between the nominal diameter and the FWHM derived from commissioning data and in-air measurement is 0.54 mm and 0.44 mm, respectively, with the discrepancy increasing with cone size. The optimal shift for CSM and SSM is found to be 9 mm upward and 18 mm downward, respectively. The MSE in FWHM is reduced to 0.04 mm and 0.14 mm for DCM and CSM (SSM). Both DCM and CSM result in the same set of Sc values. Combining all cones at SAD 600-1000 mm, the average deviation from 1 in Sc for DCM (CSM) and SSM is 2.6% and 2.2%, reduced to 0.9% and 0.7% for cones with diameter greater than 15 mm. Conclusion: We developed three geometrical models for CKS. All models can handle the discrepancy between vendor specifications and commissioning data, and SSM has the best performance for Sc correction. The study also validated that a point source can be used in CKS dose calculation algorithms.
Robot Behavior Acquisition: Superposition and Compositing of Behaviors Learned through Teleoperation
NASA Technical Reports Server (NTRS)
Peters, Richard Alan, II
2004-01-01
Superposition of a small set of behaviors, learned via teleoperation, can lead to robust completion of a simple articulated reach-and-grasp task. Results support the hypothesis that a set of learned behaviors can be combined to generate new behaviors of a similar type. This supports the hypothesis that a robot can learn to interact purposefully with its environment through a developmental acquisition of sensory-motor coordination. Teleoperation bootstraps the process by enabling the robot to observe its own sensory responses to actions that lead to specific outcomes. A reach-and-grasp task, learned by an articulated robot through a small number of teleoperated trials, can be performed autonomously with success in the face of significant variations in the environment and perturbations of the goal. Superpositioning was performed using the Verbs and Adverbs algorithm that was developed originally for the graphical animation of articulated characters. Work was performed on Robonaut at NASA-JSC.
A Simple Encryption Algorithm for Quantum Color Image
NASA Astrophysics Data System (ADS)
Li, Panchi; Zhao, Ya
2017-06-01
In this paper, a simple encryption scheme for quantum color images is proposed. First, a color image is transformed into a quantum superposition state by employing NEQR (novel enhanced quantum representation), where the R, G, B values of every pixel in a 24-bit RGB true color image are represented by 24 single-qubit basis states, with 8 qubits per value. Then, these 24 qubits are each transformed from a basis state into a balanced superposition state by employing controlled rotation gates. At this point, the gray-scale values of R, G, B of every pixel are in a balanced superposition of 2^24 multi-qubit basis states. After measurement, the whole image is uniform white noise, which does not provide any information. Decryption is the reverse process of encryption. The experimental results on a classical computer show that the proposed encryption scheme has better security.
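Why measurement of a balanced superposition yields white noise can be checked classically. The sketch below rotates each bit qubit of a byte into a balanced state with Ry(pi/2), a stand-in for the paper's controlled rotations rather than its exact gates, and samples measurement outcomes; the measured bytes are uniformly random regardless of the input.

```python
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit rotation Ry(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def encrypt_and_measure(value, n_bits=8, shots=1):
    """Encode each bit of `value` as a basis state, rotate it into a balanced
    superposition with Ry(pi/2), and measure: every output bit is a fair coin."""
    bits = (value >> np.arange(n_bits)) & 1
    out = np.zeros(shots, dtype=int)
    for k, b in enumerate(bits):
        state = ry(np.pi / 2) @ np.eye(2)[b]        # |0> or |1> -> balanced state
        p1 = np.abs(state[1]) ** 2                  # equals 0.5 for either input
        out |= (rng.random(shots) < p1).astype(int) << k
    return out

print(encrypt_and_measure(0xA7, shots=5))  # uniformly random bytes: white noise
```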
A Circuit-Based Quantum Algorithm Driven by Transverse Fields for Grover's Problem
NASA Technical Reports Server (NTRS)
Jiang, Zhang; Rieffel, Eleanor G.; Wang, Zhihui
2017-01-01
We designed a quantum search algorithm, giving the same quadratic speedup achieved by Grover's original algorithm; we replace Grover's diffusion operator (hard to implement) with a product diffusion operator generated by transverse fields (easy to implement). In our algorithm, the problem Hamiltonian (oracle) and the transverse fields are applied to the system alternatively. We construct such a sequence that the corresponding unitary generates a closed transition between the initial state (even superposition of all states) and a modified target state, which has a high degree of overlap with the original target state.
NASA Astrophysics Data System (ADS)
Becker, Matthew R.
2013-10-01
I present a new algorithm, Curved-sky grAvitational Lensing for Cosmological Light conE simulatioNS (CALCLENS), for efficiently computing weak gravitational lensing shear signals from large N-body light cone simulations over a curved sky. This new algorithm properly accounts for the sky curvature and boundary conditions, is able to produce redshift-dependent shear signals including corrections to the Born approximation by using multiple-plane ray tracing and properly computes the lensed images of source galaxies in the light cone. The key feature of this algorithm is a new, computationally efficient Poisson solver for the sphere that combines spherical harmonic transform and multigrid methods. As a result, large areas of sky (˜10 000 square degrees) can be ray traced efficiently at high resolution using only a few hundred cores. Using this new algorithm and curved-sky calculations that only use a slower but more accurate spherical harmonic transform Poisson solver, I study the convergence, shear E-mode, shear B-mode and rotation mode power spectra. Employing full-sky E/B-mode decompositions, I confirm that the numerically computed shear B-mode and rotation mode power spectra are equal at high accuracy (≲1 per cent) as expected from perturbation theory up to second order. Coupled with realistic galaxy populations placed in large N-body light cone simulations, this new algorithm is ideally suited for the construction of synthetic weak lensing shear catalogues to be used to test for systematic effects in data analysis procedures for upcoming large-area sky surveys. The implementation presented in this work, written in C and employing widely available software libraries to maintain portability, is publicly available at http://code.google.com/p/calclens.
[Application of elastic registration based on Demons algorithm in cone beam CT].
Pang, Haowen; Sun, Xiaoyang
2014-02-01
We applied the Demons and accelerated Demons elastic registration algorithms to radiotherapy cone beam CT (CBCT) images, providing software support for real-time assessment of organ changes during radiotherapy. We wrote a 3D CBCT elastic registration program in Matlab and tested it on 3D CBCT images of two patients with cervical cancer. With the classic Demons algorithm, the minimum mean square error (MSE) decreased by 59.7% and the correlation coefficient (CC) increased by 11.0%; with the accelerated Demons algorithm, MSE decreased by 40.1% and CC increased by 7.2%. Both variants of the Demons algorithm achieved the desired results, but the small remaining differences indicate limited precision, and the total registration time was somewhat long. Both accuracy and run time need further improvement.
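As a reference for the update rule, here is a minimal 2D implementation of the classic Thirion demons force with Gaussian regularization; the step and smoothing choices are illustrative, and the accelerated variant and the authors' Matlab implementation are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_2d(fixed, moving, n_iters=100, sigma=1.5):
    """Minimal 2D Thirion demons registration (classic, not accelerated).

    Iteratively estimates a dense displacement field u mapping `moving`
    onto `fixed` using the demons force, with Gaussian regularization.
    """
    gy, gx = np.gradient(fixed)
    grid = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    u = np.zeros((2,) + fixed.shape)
    for _ in range(n_iters):
        warped = map_coordinates(moving, grid + u, order=1, mode='nearest')
        diff = warped - fixed
        denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9       # demons normalization
        u[0] -= gaussian_filter(diff * gy / denom, sigma)  # row displacement
        u[1] -= gaussian_filter(diff * gx / denom, sigma)  # column displacement
    return u

# Toy check: recover a small shift of a smooth blob.
y, x = np.mgrid[0:64, 0:64]
fixed = np.exp(-((x - 32.) ** 2 + (y - 32.) ** 2) / 60.)
moving = np.exp(-((x - 34.) ** 2 + (y - 30.) ** 2) / 60.)
u = demons_2d(fixed, moving)
```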
NASA Astrophysics Data System (ADS)
Abu Anas, Emran Mohammad; Kim, Jae Gon; Lee, Soo Yeol; Kamrul Hasan, Md
2011-10-01
The use of an x-ray flat panel detector is increasingly becoming popular in 3D cone beam volume CT machines. Due to the deficient semiconductor array manufacturing process, the cone beam projection data are often corrupted by different types of abnormalities, which cause severe ring and radiant artifacts in a cone beam reconstruction image, and as a result, the diagnostic image quality is degraded. In this paper, a novel technique is presented for the correction of error in the 2D cone beam projections due to abnormalities often observed in 2D x-ray flat panel detectors. Template images are derived from the responses of the detector pixels using their statistical properties and then an effective non-causal derivative-based detection algorithm in 2D space is presented for the detection of defective and mis-calibrated detector elements separately. An image inpainting-based 3D correction scheme is proposed for the estimation of responses of defective detector elements, and the responses of the mis-calibrated detector elements are corrected using the normalization technique. For real-time implementation, a simplification of the proposed off-line method is also suggested. Finally, the proposed algorithms are tested using different real cone beam volume CT images and the experimental results demonstrate that the proposed methods can effectively remove ring and radiant artifacts from cone beam volume CT images compared to other reported techniques in the literature.
Statistical properties of superimposed stationary spike trains.
Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan
2012-06-01
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
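A naive generator in this spirit is easy to write: draw each train as a renewal process with dead time, pool the trains, and check that refractoriness survives pooling in the count statistics. The rate and dead time below are arbitrary, and this direct renewal sampling is a simple stand-in for the paper's efficient algorithms.

```python
import numpy as np

def ppd_spike_train(rate, dead_time, duration, rng):
    """Poisson process with dead-time (PPD): after each spike the unit is
    refractory for `dead_time` seconds. The hazard is rate/(1 - rate*dead_time)
    so that the *effective* output rate equals `rate`."""
    hazard = rate / (1.0 - rate * dead_time)   # requires rate * dead_time < 1
    t, spikes = 0.0, []
    while True:
        t += dead_time + rng.exponential(1.0 / hazard)
        if t > duration:
            return np.array(spikes)
        spikes.append(t)

def superposition(n_units, rate, dead_time, duration, seed=0):
    """Pool n independent PPD trains into one superposed spike train."""
    rng = np.random.default_rng(seed)
    trains = [ppd_spike_train(rate, dead_time, duration, rng) for _ in range(n_units)]
    return np.sort(np.concatenate(trains))

pooled = superposition(n_units=100, rate=5.0, dead_time=0.05, duration=10.0)
counts, _ = np.histogram(pooled, bins=np.arange(0.0, 10.1, 0.1))
print(counts.var() / counts.mean())   # Fano factor < 1: refractoriness survives pooling
```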
A review on quantum search algorithms
NASA Astrophysics Data System (ADS)
Giri, Pulak Ranjan; Korepin, Vladimir E.
2017-12-01
The use of the superposition of states in quantum computation, known as quantum parallelism, has a significant advantage in terms of speed over classical computation. This is evident from the early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up the database search algorithm, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search achieves the task of finding the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for a single and for multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
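The two steps Grover iterates, an oracle sign flip and inversion about the mean, are compact enough to simulate directly on a statevector. A minimal numpy sketch, with database size and target index chosen arbitrarily:

```python
import numpy as np

def grover(n_qubits, targets, n_iters=None):
    """Statevector simulation of Grover's search over N = 2**n_qubits items."""
    N = 2 ** n_qubits
    if n_iters is None:  # optimal iteration count ~ (pi/4) * sqrt(N/M)
        n_iters = int(np.floor(np.pi / 4 * np.sqrt(N / len(targets))))
    psi = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition
    for _ in range(n_iters):
        psi[targets] *= -1.0                    # oracle: flip sign of target items
        psi = 2.0 * psi.mean() - psi            # diffusion: inversion about the mean
    return psi

psi = grover(n_qubits=10, targets=[273])
print(np.abs(psi[273]) ** 2)    # success probability ~ 0.999 after 25 iterations
```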
The uniqueness of the solution of cone-like inversion models for halo CMEs
NASA Astrophysics Data System (ADS)
Zhao, X. P.
2006-12-01
Most elliptic halo CMEs are believed to be formed by Thomson scattering of photospheric light by the 3-D cone-like shell of the CME plasma. To obtain the real propagation direction and angular width of halo CMEs, cone-like inversion models such as the circular cone, the elliptic cone and the ice-cream cone models have been suggested recently. Because the number of given parameters that characterize the 2-D elliptic halo CMEs observed by one spacecraft is smaller than the number of unknown parameters that characterize the 3-D elliptic cone model, the solution of the elliptic cone model is not unique. Since it is difficult to determine whether an observed halo CME is formed by a circular or an elliptic cone shell, the solution of the circular cone model may often not be unique either. To resolve the uniqueness problem of the various 3-D cone-like inversion models, this work develops an algorithm for using data from multiple spacecraft, such as STEREO A and B and the Solar Sentinels.
Semi-automated identification of cones in the human retina using circle Hough transform
Bukowska, Danuta M.; Chew, Avenell L.; Huynh, Emily; Kashani, Irwin; Wan, Sue Ling; Wan, Pak Ming; Chen, Fred K
2015-01-01
A large number of human retinal diseases are characterized by a progressive loss of cones, the photoreceptors critical for visual acuity and color perception. Adaptive Optics (AO) imaging presents a potential method to study these cells in vivo. However, AO imaging in ophthalmology is relatively new, and quantitative analysis of these images remains difficult and tedious using manual methods. This paper illustrates a novel semi-automated quantitative technique enabling registration of AO images to macular landmarks, cone counting, and quantification of cone radius at specified distances from the foveal center. The new cone counting approach employs the circle Hough transform (cHT) and is compared to automated counting methods, as well as to arbitrated manual cone identification. We explore the impact of varying the circle detection parameter on the validity of cHT cone counting and discuss the potential role of this algorithm in detecting cones and rods separately. PMID:26713186
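The core of such a pipeline is a few calls to an off-the-shelf circle Hough transform. The sketch below detects bright discs in a synthetic image with scikit-image; the image, candidate radii and peak-separation parameters are invented stand-ins for real AO data and the paper's tuning.

```python
import numpy as np
from skimage.draw import disk
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

# Synthetic "AO image": bright cone-like discs on a dark background.
image = np.zeros((120, 120))
for cy, cx in [(30, 30), (30, 80), (75, 40), (90, 95)]:
    rr, cc = disk((cy, cx), 6)
    image[rr, cc] = 1.0

edges = canny(image, sigma=1.0)                    # edge map feeds the cHT
radii = np.arange(4, 10)                           # candidate cone radii (pixels)
accumulator = hough_circle(edges, radii)
accums, cx, cy, found_r = hough_circle_peaks(
    accumulator, radii, total_num_peaks=4, min_xdistance=10, min_ydistance=10)
print(list(zip(cy, cx, found_r)))                  # detected centers and radii
```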
NASA Astrophysics Data System (ADS)
Tang, Xiangyang
2003-05-01
In multi-slice helical CT, the single-tilted-plane-based reconstruction algorithm has been proposed to combat helical and cone-beam artifacts by tilting a reconstruction plane to fit a helical source trajectory optimally. Furthermore, to improve the noise characteristics or dose efficiency of the single-tilted-plane-based reconstruction algorithm, the multi-tilted-plane-based reconstruction algorithm has been proposed, in which the reconstruction plane deviates from the globally optimized pose due to an extra rotation along the third axis. As a result, the capability of suppressing helical and cone-beam artifacts in the multi-tilted-plane-based reconstruction algorithm is compromised. An optimized tilted-plane-based reconstruction algorithm is proposed in this paper, in which a matched view weighting strategy is employed to optimize both the suppression of helical and cone-beam artifacts and the noise characteristics. A helical body phantom is employed to quantitatively evaluate the imaging performance of the matched view weighting approach by tabulating artifact index and noise characteristics, showing that matched view weighting improves both helical artifact suppression and noise characteristics or dose efficiency significantly in comparison to the case in which non-matched view weighting is applied. Finally, it is believed that the matched view weighting approach is of practical importance in the development of multi-slice helical CT, because it maintains the computational structure of fan-beam filtered backprojection and demands no extra computation.
SIMULATION OF A REACTING POLLUTANT PUFF USING AN ADAPTIVE GRID ALGORITHM
A new dynamic solution-adaptive grid algorithm, DSAGA-PPM, has been developed for use in air quality modeling. In this paper, this algorithm is described and evaluated with a test problem. Cone-shaped distributions of various chemical species undergoing chemical reactions are rota...
Processing Cones: A Computational Structure for Image Analysis.
1981-12-01
A computational structure for image analysis applications, referred to as a processing cone, is described and sample algorithms are presented. A fundamental characteristic of the structure is its hierarchical organization into two-dimensional arrays of decreasing resolution. In this architecture, a prototypical function is defined on a local window of data and applied uniformly to all windows in a parallel manner. Three basic modes of processing are supported in the cone: reduction operations (upward processing), horizontal operations (processing at a single level) and projection operations (downward processing).
Rapid scatter estimation for CBCT using the Boltzmann transport equation
NASA Astrophysics Data System (ADS)
Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh
2014-03-01
Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in a dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved visualization of low-contrast objects. In total, 24 projections were simulated with an AcurosCTS execution time of 22 s/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 s/projection using two GPUs running in parallel.
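The deconvolution at the heart of SKS can be illustrated with a fixed-point iteration: treat scatter as a broad convolution of the (unknown) primary signal and alternate between estimating scatter and updating the primary. The 1D signal and kernel below are toys; real SKS uses measured, amplitude-dependent 2D kernels rather than this single fixed kernel.

```python
import numpy as np
from scipy.signal import fftconvolve

def sks_scatter(measured, kernel, n_iters=10):
    """Scatter kernel superposition by fixed-point iteration.

    Model: measured = primary + scatter, with scatter approximated as the
    convolution of the *primary* with a scatter kernel. Starting from
    primary = measured, alternate scatter estimation and primary update
    until the split stabilizes (converges when the kernel's mass is < 1).
    """
    primary = measured.copy()
    for _ in range(n_iters):
        scatter = fftconvolve(primary, kernel, mode='same')
        primary = np.maximum(measured - scatter, 0.0)   # physical: primary >= 0
    return scatter, primary

# Toy 1D "projection" and a broad, low-amplitude scatter kernel.
x = np.linspace(-1, 1, 256)
measured = np.exp(-x ** 2 / 0.05) + 0.1
kernel = 0.02 * np.exp(-np.abs(np.linspace(-1, 1, 101)) / 0.3)
scatter, primary = sks_scatter(measured, kernel)
```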
NASA Astrophysics Data System (ADS)
Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia
2013-10-01
Digital processing of two-dimensional cone beam computed tomography slices starts with identification of the contours of the elements within them. This paper presents the joint work of specialists in medicine and in applied mathematics and computer science on the development and implementation of algorithms for dental 2D imagery.
Use of cone beam computed tomography in periodontology
Acar, Buket; Kamburoğlu, Kıvanç
2014-01-01
Diagnosis of periodontal disease mainly depends on clinical signs and symptoms. However, in the case of bone destruction, radiographs are valuable diagnostic tools as an adjunct to the clinical examination. Two-dimensional periapical and panoramic radiographs are routinely used for diagnosing periodontal bone levels. In two-dimensional imaging, evaluation of bone craters, lamina dura and periodontal bone level is limited by projection geometry and superposition of adjacent anatomical structures. These limitations of 2D radiographs can be eliminated by three-dimensional imaging techniques such as computed tomography. Cone beam computed tomography (CBCT) generates 3D volumetric images and is commonly used in dentistry. All CBCT units provide axial, coronal and sagittal multi-planar reconstructed images without magnification. Panoramic images without distortion and magnification can also be generated with curved planar reformation. CBCT displays the 3D images that are necessary for the diagnosis of intrabony defects, furcation involvements and buccal/lingual bone destruction. CBCT applications provide obvious benefits in periodontics; however, CBCT should be used only for appropriate indications, weighing the necessity of the examination against its potential hazards. PMID:24876918
Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Mascolo-Fortin, Julia, E-mail: julia.mascolo-fortin.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca
Purpose: The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. Methods: This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. Results: The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. Conclusions: The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can potentially improve the rendering of spatial features and reduce cone-beam optical CT artifacts.
Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT.
Matenine, Dmitri; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe
2015-11-01
The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can potentially improve the rendering of spatial features and reduce cone-beam optical CT artifacts.
Bergeles, Christos; Dubis, Adam M; Davidson, Benjamin; Kasilian, Melissa; Kalitzeos, Angelos; Carroll, Joseph; Dubra, Alfredo; Michaelides, Michel; Ourselin, Sebastien
2017-06-01
Precise measurements of photoreceptor numerosity and spatial arrangement are promising biomarkers for the early detection of retinal pathologies and may be valuable in the evaluation of retinal therapies. Adaptive optics scanning light ophthalmoscopy (AOSLO) is a method of imaging that corrects for aberrations of the eye to acquire high-resolution images that reveal the photoreceptor mosaic. These images are typically graded manually by experienced observers, precluding robust, large-scale use of the technology. This paper addresses unsupervised automated detection of cones in non-confocal, split-detection AOSLO images. Our algorithm leverages the appearance of split-detection images to create a cone model that is used for classification. Results show that it compares favorably to the state of the art, both for images of healthy retinas and for images from patients affected by Stargardt disease. The algorithm presented also compares well to manual annotation while excelling in speed.
Dynamic cone beam CT angiography of carotid and cerebral arteries using canine model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai Weixing; Zhao Binghui; Conover, David
2012-01-15
Purpose: This research is designed to develop and evaluate a flat-panel detector-based dynamic cone beam CT system for dynamic angiography imaging, which is able to provide both dynamic functional information and dynamic anatomic information from one multirevolution cone beam CT scan. Methods: A dynamic cone beam CT scan acquired projections over four revolutions within a time window of 40 s after contrast agent injection through a femoral vein to cover the entire wash-in and wash-out phases. A dynamic cone beam CT reconstruction algorithm was utilized and a novel recovery method was developed to correct the time-enhancement curve of contrast flow. From the same data set, both projection-based subtraction and reconstruction-based subtraction approaches were utilized and compared to remove the background tissues and visualize the 3D vascular structure to provide the dynamic anatomic information. Results: Through computer simulations, the new recovery algorithm for dynamic time-enhancement curves was optimized and showed excellent accuracy to recover the actual contrast flow. Canine model experiments also indicated that the recovered time-enhancement curves from dynamic cone beam CT imaging agreed well with that of an IV-digital subtraction angiography (DSA) study. The dynamic vascular structures reconstructed using both projection-based subtraction and reconstruction-based subtraction were almost identical as the differences between them were comparable to the background noise level. At the enhancement peak, all the major carotid and cerebral arteries and the Circle of Willis could be clearly observed. Conclusions: The proposed dynamic cone beam CT approach can accurately recover the actual contrast flow, and dynamic anatomic imaging can be obtained with high isotropic 3D resolution. This approach is promising for diagnosis and treatment planning of vascular diseases and strokes.
Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2012-10-01
In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivity, which results in a very low probability of the coincident triple gamma-ray detection that is necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces (“thick” conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple-coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.
Cone Beam X-Ray Luminescence Tomography Imaging Based on KA-FEM Method for Small Animals.
Chen, Dongmei; Meng, Fanzhen; Zhao, Fengjun; Xu, Cao
2016-01-01
Cone beam X-ray luminescence tomography can realize fast X-ray luminescence tomography imaging with relatively low scanning time compared with narrow-beam X-ray luminescence tomography. However, cone beam X-ray luminescence tomography suffers from an ill-posed reconstruction problem. First, the feasibility of experiments with different penetration depths and multiple spectra in small animals was tested using a nanophosphor material. Then, a hybrid reconstruction algorithm based on the KA-FEM method, whose advantages have been demonstrated in fluorescence tomography imaging, was applied to cone beam X-ray luminescence tomography of small animals to overcome the ill-posed reconstruction problem. An in vivo mouse experiment proved the feasibility of the proposed method.
Using Collision Cones to Assess Biological Deconfliction Methods
NASA Astrophysics Data System (ADS)
Brace, Natalie
For autonomous vehicles to navigate the world as efficiently and effectively as biological species, improvements are needed in terms of control strategies and estimation algorithms. Reactive collision avoidance is one specific area where biological systems outperform engineered algorithms. To better understand the discrepancy between engineered and biological systems, a collision avoidance algorithm was applied to frames of trajectory data from three biological species (Myotis velifer, Hirundo rustica, and Danio aequipinnatus). The algorithm uses information that can be sensed through visual cues (relative position and velocity) to define collision cones which are used to determine if agents are on a collision course and if so, to find a safe velocity that requires minimal deviation from the original velocity for each individual agent. Two- and three-dimensional versions of the algorithm with constant speed and maximum speed velocity requirements were considered. The obstacles provided to the algorithm were determined by the sensing range in terms of either metric or topological distance. The calculated velocities showed good correlation with observed velocities over the range of sensing parameters, indicating that the algorithm is a good basis for comparison and could potentially be improved with further study.
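The collision-cone test itself is compact. Below is a minimal 2D Python sketch, assuming circular agents inflated to a combined safety radius; the function name and the example values are illustrative, not taken from the thesis.

import numpy as np

def on_collision_course(p_a, v_a, p_b, v_b, r_combined):
    """Return True if agent A's velocity lies inside the collision cone
    induced by agent B (relative position/velocity formulation)."""
    rel_p = np.asarray(p_b) - np.asarray(p_a)   # line of sight to B
    rel_v = np.asarray(v_a) - np.asarray(v_b)   # velocity of A relative to B
    dist = np.linalg.norm(rel_p)
    if dist <= r_combined:                      # already overlapping
        return True
    closing = rel_p @ rel_v
    if closing <= 0:                            # moving apart
        return False
    # Half-angle of the collision cone around the line of sight.
    half_angle = np.arcsin(r_combined / dist)
    angle = np.arccos(closing / (dist * np.linalg.norm(rel_v) + 1e-12))
    return angle < half_angle

# A heads almost straight at a stationary B: expect True.
print(on_collision_course([0, 0], [1.0, 0.0], [5, 0.2], [0.0, 0.0], 0.5))

A deconfliction step would then search velocities just outside this cone for the one closest to the agent's original velocity.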
Compressed sensing with gradient total variation for low-dose CBCT reconstruction
NASA Astrophysics Data System (ADS)
Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung
2015-06-01
This paper describes the improvement of convergence speed with gradient total variation (GTV) in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimal number of noisy projections. To achieve this task we combine the GTV with a TV-norm regularization term to promote sparsity in the X-ray attenuation characteristics of the human body. The GTV is derived from the TV and is computationally more efficient, converging faster to a desired solution. The numerical algorithm is simple and converges relatively quickly. We apply a gradient projection algorithm that seeks a solution iteratively in the direction of the projected gradient while enforcing non-negativity of the found solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm converged in ≤18 iterations, whereas the original TV algorithm needed at least 34 iterations, while using 50% fewer projections than the FDK algorithm to reconstruct the chest phantom images. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter, and motion artifacts of CBCT reconstruction.
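A minimal numpy sketch of the gradient-projection idea described above; the toy system matrix, step size, and smoothed-TV gradient are illustrative assumptions, and the paper's GTV term is not reproduced here.

import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed isotropic TV of a 2D image."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    # Negative divergence of the normalized gradient field.
    return -(np.diff(px, axis=1, prepend=px[:, :1])
             + np.diff(py, axis=0, prepend=py[:1, :]))

def gp_tv(A, b, shape, lam=0.05, step=1e-3, n_iter=200):
    """Gradient projection for ||Ax - b||^2 + lam*TV(x), x >= 0."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        grad = (A.T @ (A @ x.ravel() - b)).reshape(shape) + lam * tv_grad(x)
        x = np.maximum(x - step * grad, 0.0)   # project onto x >= 0
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 16 * 16))        # toy "projection" operator
x_true = np.zeros((16, 16)); x_true[4:12, 4:12] = 1.0
b = A @ x_true.ravel()
x_rec = gp_tv(A, b, (16, 16))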
NASA Astrophysics Data System (ADS)
Al-Hallaq, H. A.; Reft, C. S.; Roeske, J. C.
2006-03-01
The dosimetric effects of bone and air heterogeneities in head and neck IMRT treatments were quantified. An anthropomorphic RANDO phantom was CT-scanned with 16 thermoluminescent dosimeter (TLD) chips placed in and around the target volume. A standard IMRT plan generated with CORVUS was used to irradiate the phantom five times. On average, measured dose was 5.1% higher than calculated dose. Measurements were higher by 7.1% near the heterogeneities and by 2.6% in tissue. The dose difference between measurement and calculation was outside the 95% measurement confidence interval for six TLDs. Using CORVUS' heterogeneity correction algorithm, the average difference between measured and calculated doses decreased by 1.8% near the heterogeneities and by 0.7% in tissue. Furthermore, dose differences lying outside the 95% confidence interval were eliminated for five of the six TLDs. TLD doses recalculated by Pinnacle3's convolution/superposition algorithm were consistently higher than CORVUS doses, a trend that matched our measured results. These results indicate that the dosimetric effects of air cavities are larger than those of bone heterogeneities, thereby leading to a higher delivered dose compared to CORVUS calculations. More sophisticated algorithms such as convolution/superposition or Monte Carlo should be used for accurate tailoring of IMRT dose in head and neck tumours.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hastings, Matthew B
We show how to combine the light-cone and matrix product algorithms to simulate quantum systems far from equilibrium for long times. For the case of the XXZ spin chain at Δ = 0.5, we simulate to a time of ≈ 22.5. While part of the long simulation time is due to the use of the light-cone method, we also describe a modification of the infinite time-evolving bond decimation algorithm with improved numerical stability, and we describe how to incorporate symmetry into this algorithm. While statistical sampling error means that we are not yet able to make a definite statement, the behavior of the simulation at long times indicates the appearance of either 'revivals' in the order parameter as predicted by Hastings and Levitov (e-print arXiv:0806.4283) or of a distinct shoulder in the decay of the order parameter.
CUDA-based high-performance computing of the S-BPF algorithm with no-waiting pipelining
NASA Astrophysics Data System (ADS)
Deng, Lin; Yan, Bin; Chang, Qingmei; Han, Yu; Zhang, Xiang; Xi, Xiaoqi; Li, Lei
2015-10-01
The backprojection-filtration (BPF) algorithm has become a good solution for local reconstruction in cone-beam computed tomography (CBCT). However, the reconstruction speed of BPF is a severe limitation for clinical applications. The selective-backprojection filtration (S-BPF) algorithm is developed to improve the parallel performance of BPF by selective backprojection. Furthermore, the general-purpose graphics processing unit (GP-GPU) is a popular tool for accelerating the reconstruction. Much work has been performed aiming at the optimization of the cone-beam backprojection. As the cone-beam backprojection process becomes faster, the data transportation accounts for a much larger share of the reconstruction time than before. This paper focuses on minimizing the total reconstruction time with the S-BPF algorithm by hiding the data transportation among hard disk, CPU and GPU. Based on an analysis of the S-BPF algorithm, several strategies are implemented: (1) asynchronous calls are used to overlap the execution of the CPU and GPU, (2) an innovative strategy is applied to obtain the DBP image while hiding the transport time effectively, and (3) two streams for data transportation and calculation are synchronized by cudaEvent in the inverse finite Hilbert transform on the GPU. Our main contribution is a smart reconstruction with the S-BPF algorithm in which the GPU calculates continuously and data transportation adds no time cost: a 512³ volume is reconstructed in less than 0.7 s on a single Tesla-based K20 GPU from 182 projection views with 512² pixels per projection. The time cost of our implementation is about half of that without the overlap behavior.
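The overlap strategy can be shown in miniature. The following Python sketch prefetches the next block of projections on a background thread while the current block is processed, so transport time hides behind computation; load_block and backproject_block are hypothetical stand-ins for the paper's disk/CPU/GPU transfers and kernels, not its actual routines.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

def load_block(i):                 # placeholder for disk/PCIe transfer
    rng = np.random.default_rng(i)
    return rng.random((64, 512, 512), dtype=np.float32)

def backproject_block(block):      # placeholder for the GPU kernel
    return block.sum(axis=0)

def pipelined(n_blocks):
    acc = 0.0
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(load_block, 0)      # prefetch the first block
        for i in range(n_blocks):
            block = future.result()              # wait for current block
            if i + 1 < n_blocks:
                future = pool.submit(load_block, i + 1)  # overlap next load
            acc = acc + backproject_block(block)
    return acc

volume_slice = pipelined(4)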
The Cognitive Toolkit of Programming--Algorithmic Abstraction, Decomposition-Superposition
ERIC Educational Resources Information Center
Szlávi, Péter; Zsakó, László
2017-01-01
When solving a problem, a programmer performs a number of conscious and unconscious cognitive operations. Problem-solving is a gradual and cyclic activity; as the mind is adjusting the problem to its schemas formed by its previous experiences, the programmer gets closer and closer to understanding and defining the problem. The…
Sharp, G C; Kandasamy, N; Singh, H; Folkert, M
2007-10-07
This paper shows how to significantly accelerate cone-beam CT reconstruction and 3D deformable image registration using the stream-processing model. We describe data-parallel designs for the Feldkamp, Davis and Kress (FDK) reconstruction algorithm, and the demons deformable registration algorithm, suitable for use on a commodity graphics processing unit. The streaming versions of these algorithms are implemented using the Brook programming environment and executed on an NVidia 8800 GPU. Performance results using CT data of a preserved swine lung indicate that the GPU-based implementations of the FDK and demons algorithms achieve a substantial speedup--up to 80 times for FDK and 70 times for demons when compared to an optimized reference implementation on a 2.8 GHz Intel processor. In addition, the accuracy of the GPU-based implementations was found to be excellent. Compared with CPU-based implementations, the RMS differences were less than 0.1 Hounsfield unit for reconstruction and less than 0.1 mm for deformable registration.
Distance-Based Phylogenetic Methods Around a Polytomy.
Davidson, Ruth; Sullivant, Seth
2014-01-01
Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.
Mariotti, Letizia; Devaney, Nicholas; Lombardo, Giuseppe; Lombardo, Marco
2016-01-01
Although there is increasing interest in the investigation of cone reflectance variability, little is understood about its characteristics over long time scales. Cone detection and its automation is now becoming a fundamental step in the assessment and monitoring of the health of the retina and in the understanding of the photoreceptor physiology. In this work we provide an insight into the cone reflectance variability over time scales ranging from minutes to three years on the same eye, and for large areas of the retina (≥ 2.0 × 2.0 degrees) at two different retinal eccentricities using a commercial adaptive optics (AO) flood illumination retinal camera. We observed that the difference in reflectance observed in the cones increases with the time separation between the data acquisitions and this may have a negative impact on algorithms attempting to track cones over time. In addition, we determined that displacements of the light source within 0.35 mm of the pupil center, which is the farthest location from the pupil center used by operators of the AO camera to acquire high-quality images of the cone mosaic in clinical studies, does not significantly affect the cone detection and density estimation. PMID:27446708
Algorithms of walking and stability for an anthropomorphic robot
NASA Astrophysics Data System (ADS)
Sirazetdinov, R. T.; Devaev, V. M.; Nikitina, D. V.; Fadeev, A. Y.; Kamalov, A. R.
2017-09-01
Autonomous movement of an anthropomorphic robot is considered as a superposition of a set of typical elements of movement, so-called patterns, each of which can be considered as an agent of some multi-agent system [1]. To control the AP-601 robot, an information and communication infrastructure has been created as a multi-agent system that allows algorithms for individual movement patterns to be developed and run as a set of independently executed and interacting agents. The algorithms for lateral movement of the AP-601 series anthropomorphic robot, with active stability provided by the stability pattern, are presented.
NASA Astrophysics Data System (ADS)
Sramek, Benjamin Koerner
The ability to deliver conformal dose distributions in radiation therapy through intensity modulation and the potential for tumor dose escalation to improve treatment outcome has necessitated an increase in localization accuracy of inter- and intra-fractional patient geometry. Megavoltage cone-beam CT imaging using the treatment beam and onboard electronic portal imaging device is one option currently being studied for implementation in image-guided radiation therapy. However, routine clinical use is predicated upon continued improvements in image quality and patient dose delivered during acquisition. The formal statement of hypothesis for this investigation was that the conformity of planned to delivered dose distributions in image-guided radiation therapy could be further enhanced through the application of kilovoltage scatter correction and intermediate view estimation techniques to megavoltage cone-beam CT imaging, and that normalized dose measurements could be acquired and inter-compared between multiple imaging geometries. The specific aims of this investigation were to: (1) incorporate the Feldkamp, Davis and Kress filtered backprojection algorithm into a program to reconstruct a voxelized linear attenuation coefficient dataset from a set of acquired megavoltage cone-beam CT projections, (2) characterize the effects on megavoltage cone-beam CT image quality resulting from the application of Intermediate View Interpolation and Intermediate View Reprojection techniques to limited-projection datasets, (3) incorporate the Scatter and Primary Estimation from Collimator Shadows (SPECS) algorithm into megavoltage cone-beam CT image reconstruction and determine the set of SPECS parameters which maximize image quality and quantitative accuracy, and (4) evaluate the normalized axial dose distributions received during megavoltage cone-beam CT image acquisition using radiochromic film and thermoluminescent dosimeter measurements in anthropomorphic pelvic and head and neck phantoms. The conclusions of this investigation were: (1) the implementation of intermediate view estimation techniques to megavoltage cone-beam CT produced improvements in image quality, with the largest impact occurring for smaller numbers of initially-acquired projections, (2) the SPECS scatter correction algorithm could be successfully incorporated into projection data acquired using an electronic portal imaging device during megavoltage cone-beam CT image reconstruction, (3) a large range of SPECS parameters were shown to reduce cupping artifacts as well as improve reconstruction accuracy, with application to anthropomorphic phantom geometries improving the percent difference in reconstructed electron density for soft tissue from -13.6% to -2.0%, and for cortical bone from -9.7% to 1.4%, (4) dose measurements in the anthropomorphic phantoms showed consistent agreement between planar measurements using radiochromic film and point measurements using thermoluminescent dosimeters, and (5) a comparison of normalized dose measurements acquired with radiochromic film to those calculated using multiple treatment planning systems, accelerator-detector combinations, patient geometries and accelerator outputs produced a relatively good agreement.
Cho, Nathan; Tsiamas, Panagiotis; Velarde, Esteban; Tryggestad, Erik; Jacques, Robert; Berbeco, Ross; McNutt, Todd; Kazanzides, Peter; Wong, John
2018-05-01
The Small Animal Radiation Research Platform (SARRP) has been developed for conformal microirradiation with on-board cone beam CT (CBCT) guidance. The graphics processing unit (GPU)-accelerated Superposition-Convolution (SC) method for dose computation has been integrated into the treatment planning system (TPS) for SARRP. This paper describes the validation of the SC method for the kilovoltage energy by comparing with EBT2 film measurements and Monte Carlo (MC) simulations. MC data were simulated by the EGSnrc code with 3 × 10⁸-1.5 × 10⁹ histories, while 21 photon energy bins were used to model the 220 kVp x-rays in the SC method. Various types of phantoms including plastic water, cork, graphite, and aluminum were used to encompass the range of densities of mouse organs. For the comparison, percentage depth dose (PDD) of SC, MC, and film measurements were analyzed. Cross beam (x,y) dosimetric profiles of SC and film measurements are also presented. Correction factors (CFz) to convert SC to MC dose-to-medium are derived from the SC and MC simulations in homogeneous phantoms of aluminum and graphite to improve the estimation. The SC method produces dose values that are within 5% of film measurements and MC simulations in the flat regions of the profile. The dose is less accurate at the edges, due to factors such as geometric uncertainties of film placement and difference in dose calculation grids. The GPU-accelerated Superposition-Convolution dose computation method was successfully validated with EBT2 film measurements and MC calculations. The SC method offers much faster computation speed than MC and provides calculations of both dose-to-water in medium and dose-to-medium in medium. © 2018 American Association of Physicists in Medicine.
Wizard CD Plus and ProTaper Universal: analysis of apical transportation using new software
GIANNASTASIO, Daiana; da ROSA, Ricardo Abreu; PERES, Bernardo Urbanetto; BARRETO, Mirela Sangoi; DOTTO, Gustavo Nogara; KUGA, Milton Carlos; PEREIRA, Jefferson Ricardo; SÓ, Marcus Vinícius Reis
2013-01-01
Objective This study has two aims: 1) to evaluate the apical transportation of the Wizard CD Plus and ProTaper Universal after preparation of simulated root canals; 2) to compare, with Adobe Photoshop, the ability of a new software (Regeemy) in superposing and subtracting images. Material and Methods Twenty-five simulated root canals in acrylic-resin blocks (with 20º curvature) underwent cone beam computed tomography before and after preparation with the rotary systems (70 kVp, 4 mA, 10 s and with the 8×8 cm FoV selection). Canals were prepared up to the F2 (ProTaper) and 24.04 (Wizard CD Plus) instruments and the working length was established at 15 mm. The tomographic images were imported into the iCAT Vision software and CorelDraw for standardization. The superposition of pre- and post-instrumentation images from both systems was performed using Regeemy and Adobe Photoshop. The apical transportation was measured in millimetres using Image J. Five acrylic-resin blocks were used to validate the superposition achieved by the software. Student's t-test for independent samples was used to evaluate the apical transportation produced by the rotary systems using each software individually. Student's t-test for paired samples was used to compare the ability of each software in superposing and subtracting images from one rotary system at a time. Results The apical transportation values obtained with Regeemy and Adobe Photoshop were similar for both rotary systems (P>0.05). ProTaper Universal and Wizard CD Plus promoted similar apical transportation regardless of the software used for image superposition and subtraction (P>0.05). Conclusion Wizard CD Plus and ProTaper Universal promoted little apical transportation. Regeemy is a feasible software tool for superposing and subtracting images and appears to be an alternative to Adobe Photoshop. PMID:24212994
Takeda, Atsuya; Sanuki, Naoko; Kunieda, Etsuo; Ohashi, Toshio; Oku, Yohei; Takeda, Toshiaki; Shigematsu, Naoyuki; Kubo, Atsushi
2009-02-01
To retrospectively analyze the clinical outcomes of stereotactic body radiotherapy (SBRT) for patients with Stages 1A and 1B non-small-cell lung cancer. We reviewed the records of patients with non-small-cell lung cancer treated with curative intent between Dec 2001 and May 2007. All patients had histopathologically or cytologically confirmed disease, increased levels of tumor markers, and/or positive findings on fluorodeoxyglucose positron emission tomography. Staging studies identified their disease as Stage 1A or 1B. Performance status was 2 or less according to World Health Organization guidelines in all cases. The prescribed dose of 50 Gy total in five fractions, calculated by using a superposition algorithm, was defined for the periphery of the planning target volume. One hundred twenty-one patients underwent SBRT during the study period, and 63 were eligible for this analysis. Thirty-eight patients had Stage 1A (T1N0M0) and 25 had Stage 1B (T2N0M0). Forty-nine patients were not appropriate candidates for surgery because of chronic pulmonary disease. Median follow-up of these 49 patients was 31 months (range, 10-72 months). The 3-year local control, disease-free, and overall survival rates in patients with Stages 1A and 1B were 93% and 96% (p = 0.86), 76% and 77% (p = 0.83), and 90% and 63% (p = 0.09), respectively. No acute toxicity was observed. Grade 2 or higher radiation pneumonitis was experienced by 3 patients, and 1 of them had fatal bacterial pneumonia. The SBRT at 50 Gy total in five fractions to the periphery of the planning target volume calculated by using a superposition algorithm is feasible. High local control rates were achieved for both T2 and T1 tumors.
An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm
NASA Astrophysics Data System (ADS)
Jacques, Robert; McNutt, Todd
2014-03-01
Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to a faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte-Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near-Monte-Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing the average mean error from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e. < 0.1 g/cm³) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU-based algorithm, reaching the accuracy levels of Monte Carlo based methods with performance of a few tenths of a second per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
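To make the filtering idea concrete, here is a one-dimensional caricature in Python: a first-order recursive filter applied to the densities sampled along a ray, so the effective density lags the true density near interfaces. The actual HCS filter is multivariate and direction sensitive; alpha below is an illustrative smoothing constant, not a published parameter.

import numpy as np

def effective_density(rho_along_ray, alpha=0.3):
    """First-order recursive filter of density samples along one ray."""
    rho_eff = np.empty_like(rho_along_ray)
    rho_eff[0] = rho_along_ray[0]
    for i in range(1, len(rho_along_ray)):
        rho_eff[i] = alpha * rho_along_ray[i] + (1 - alpha) * rho_eff[i - 1]
    return rho_eff

# Water -> lung -> water interfaces along a ray (g/cm^3).
ray = np.array([1.0] * 10 + [0.26] * 10 + [1.0] * 10)
print(effective_density(ray).round(2))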
GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation.
Zhou, Bo; Yu, Cedric X; Chen, Danny Z; Hu, X Sharon
2010-11-01
Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans as emerging advanced radiation therapy technologies are exerting ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation so as to make the method practical for day-to-day usage. Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm is modified to match the GPU hardware architecture. The performance of the authors' GPU-based implementation on an Nvidia GTX260 card is compared to a multithreaded software implementation on a quad-core system. A speedup in the range of 6.7-11.4x is observed for the clinical cases used. The less than 2% statistical fluctuation also indicates that the accuracy of the authors' GPU-based implementation is in good agreement with the results from the quad-core CPU implementation. This work shows that GPU is a feasible and cost-efficient solution compared to other alternatives such as using cluster machines or field-programmable gate arrays for satisfying the increasing demands on computation speed and accuracy of dose calculation. But there are also inherent limitations of using GPU for accelerating MC-type applications, which are also analyzed in detail in this article.
Discrete surface roughness effects on a blunt hypersonic cone in a quiet tunnel
NASA Astrophysics Data System (ADS)
Sharp, Nicole; White, Edward
2013-11-01
The mechanisms by which surface roughness creates boundary-layer disturbances in hypersonic flow are little understood. Work by Reshotko (AIAA 2008-4294) and others suggests that transient growth, resulting from the superposition of decaying non-orthogonal modes, may be responsible. The present study examines transient growth experimentally using a smooth 5-degree half-angle conic frustum paired with blunted nosetips with and without an azimuthal array of discrete roughness elements. A combination of hotwire anemometry and Pitot measurements in the low-disturbance Mach 6 Quiet Tunnel are used for boundary layer profiles downstream of the ring of roughness elements as well as azimuthal measurements to examine the high- and low-speed streaks characteristic of transient growth of stationary roughness-induced disturbances.
GPU-accelerated regularized iterative reconstruction for few-view cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca
2015-04-15
Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-views acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are of 1-2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
Yu, Jian-Hong; Lo, Lun-Jou; Hsu, Pin-Hsin
2017-01-01
This study integrates cone-beam computed tomography (CBCT)/laser scan image superposition, computer-aided design (CAD), and 3D printing (3DP) to develop a technology for producing customized dental (orthodontic) miniscrew surgical templates using polymer material. Maxillary bone solid models with the bone and teeth reconstructed using CBCT images, and the teeth and mucosa outer profile acquired using laser scanning, were superimposed to allow visual miniscrew insertion planning and permit surgical template fabrication. The customized surgical template CAD model was fabricated with an offset based on the teeth/mucosa/bracket contour profiles in the superimposition model and exported to duplicate the plastic template using the 3DP technique and polymer material. An anterior retraction and intrusion clinical test for the maxillary canines/incisors showed that two miniscrews were placed safely and did not produce inflammation or other discomfort symptoms one week after surgery. The fitness between the mucosa and template indicated that the average gap sizes were smaller than 0.5 mm and confirmed that the surgical template presented good holding power and well-fitting adaptation. This study demonstrated that integrating CBCT and laser scan image superposition, CAD, and 3DP techniques can be applied to fabricate an accurate customized surgical template for dental orthodontic miniscrews. PMID:28280726
TU-AB-303-08: GPU-Based Software Platform for Efficient Image-Guided Adaptive Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, S; Robinson, A; McNutt, T
2015-06-15
Purpose: In this study, we develop an integrated software platform for adaptive radiation therapy (ART) that combines fast and accurate image registration, segmentation, and dose computation/accumulation methods. Methods: The proposed system consists of three key components: 1) deformable image registration (DIR), 2) automatic segmentation, and 3) dose computation/accumulation. The computationally intensive modules including DIR and dose computation have been implemented on a graphics processing unit (GPU). All required patient-specific data including the planning CT (pCT) with contours, daily cone-beam CTs, and the treatment plan are automatically queried and retrieved from their own databases. To improve the accuracy of DIR between pCT and CBCTs, we use the double force demons DIR algorithm in combination with iterative CBCT intensity correction by local intensity histogram matching. Segmentation of the daily CBCT is then obtained by propagating contours from the pCT. Daily dose delivered to the patient is computed on the registered pCT by a GPU-accelerated superposition/convolution algorithm. Finally, computed daily doses are accumulated to show the total delivered dose to date. Results: Since the accuracy of DIR critically affects the quality of the other processes, we first evaluated our DIR method on eight head-and-neck cancer cases and compared its performance. Normalized mutual information (NMI) and normalized cross-correlation (NCC) were computed as similarity measures, and our method produced an overall NMI of 0.663 and NCC of 0.987, outperforming conventional methods by 3.8% and 1.9%, respectively. Experimental results show that our registration method is more consistent and robust than existing algorithms, and also computationally efficient. Computation time at each fraction took around one minute (30-50 seconds for registration and 15-25 seconds for dose computation). Conclusion: We developed an integrated GPU-accelerated software platform that enables accurate and efficient DIR, auto-segmentation, and dose computation, thus supporting an efficient ART workflow. This work was supported by NIH/NCI under grant R42CA137886.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J; Followill, D; Howell, R
2015-06-15
Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and three metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.
NASA Technical Reports Server (NTRS)
Borisenko, V. I.; Chesalin, L. S.
1980-01-01
The algorithm, block diagram, complete text, and instructions are given for the use of a computer program to separate formations whose spectral characteristics are constant on the average. The initial material for operating the computer program presented is video information in a standard color-superposition format.
NASA Astrophysics Data System (ADS)
Rysavy, Steven; Flores, Arturo; Enciso, Reyes; Okada, Kazunori
2008-03-01
This paper presents an experimental study for assessing the applicability of general-purpose 3D segmentation algorithms for analyzing dental periapical lesions in cone-beam computed tomography (CBCT) scans. In the field of Endodontics, clinical studies have been unable to determine if a periapical granuloma can heal with non-surgical methods. Addressing this issue, Simon et al. recently proposed a diagnostic technique which non-invasively classifies target lesions using CBCT. Manual segmentation exploited in their study, however, is too time consuming and unreliable for real world adoption. On the other hand, many technically advanced algorithms have been proposed to address segmentation problems in various biomedical and non-biomedical contexts, but they have not yet been applied to the field of dentistry. Presented in this paper is a novel application of such segmentation algorithms to the clinically-significant dental problem. This study evaluates three state-of-the-art graph-based algorithms: a normalized cut algorithm based on a generalized eigen-value problem, a graph cut algorithm implementing energy minimization techniques, and a random walks algorithm derived from discrete electrical potential theory. In this paper, we extend the original 2D formulation of the above algorithms to segment 3D images directly and apply the resulting algorithms to the dental CBCT images. We experimentally evaluate quality of the segmentation results for 3D CBCT images, as well as their 2D cross sections. The benefits and pitfalls of each algorithm are highlighted.
Axial Cone-Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering.
Tang, Shaojie; Tang, Xiangyang
2016-09-01
The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical reconstruction from cone-beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane, determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. The solution is an integration of three-dimensional (3-D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer simulated Forbild head and thoracic phantoms that are rigorous in inspecting the reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts existing in the images reconstructed by the 3-D weighted axial CB-BPF/DBPF algorithm located at off-central planes. Integrated with orthogonal butterfly filtering, the 3-D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3-D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. The proposed 3-D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications.
Active impulsive noise control using maximum correntropy with adaptive kernel size
NASA Astrophysics Data System (ADS)
Lu, Lu; Zhao, Haiquan
2017-03-01
Active noise control (ANC) based on the principle of superposition is an attractive method to attenuate noise signals. However, impulsive noise in ANC systems will degrade the performance of the controller. In this paper, a filtered-x recursive maximum correntropy (FxRMC) algorithm is proposed based on the maximum correntropy criterion (MCC) to reduce the effect of outliers. The proposed FxRMC algorithm does not require any a priori information about the noise characteristics and outperforms the filtered-x least mean square (FxLMS) algorithm for impulsive noise. Meanwhile, in order to adjust the kernel size of the FxRMC algorithm online, a recursive approach is proposed that takes into account past estimates of the error signal over a sliding window. Simulation and experimental results in the context of active impulsive noise control demonstrate that the proposed algorithms achieve much better performance than existing algorithms in various noise environments.
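The mechanism can be seen in a plain adaptive filter: the Gaussian kernel exp(-e^2/(2 sigma^2)) de-weights large, impulsive errors that would destabilize an LMS-type update. The numpy sketch below is a generic MCC-LMS, not the filtered-x recursive algorithm of the paper; mu and sigma are illustrative values.

import numpy as np

def mcc_lms(x, d, n_taps=8, mu=0.05, sigma=1.0):
    """LMS-type adaptive filter with a maximum-correntropy error weight."""
    w = np.zeros(n_taps)
    y = np.zeros_like(d)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]          # current input tap vector
        y[n] = w @ u
        e = d[n] - y[n]
        kernel = np.exp(-e**2 / (2 * sigma**2))
        w += mu * kernel * e * u           # outliers get a tiny kernel weight
    return w, y

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = np.convolve(x, [0.5, -0.3, 0.1], mode="full")[:2000]
d += np.where(rng.random(2000) < 0.01, 50.0, 0.0)   # impulsive outliers
w, _ = mcc_lms(x, d)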
An efficient quantum algorithm for spectral estimation
NASA Astrophysics Data System (ADS)
Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth
2017-03-01
We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
Contact stresses in gear teeth: A new method of analysis
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.; Oswald, Fred B.
1991-01-01
A new, innovative procedure called point load superposition is presented for determining the contact stresses in mating gear teeth. It is believed that this procedure will greatly extend both the range of applicability and the accuracy of gear contact stress analysis. Point load superposition is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure which has distinct advantages over the classical Hertz method, the finite element method, and over existing applications of the boundary element method. Specifically, friction and sliding effects, which are either excluded from or difficult to study with the classical methods, are routinely handled with the new procedure. Presented here are the basic theory and the algorithms. Several examples are given. Results are consistent with those of the classical theories. Applications to spur gears are discussed.
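The superposition step such a method builds on can be illustrated with the Boussinesq fundamental solution for a point load on an elastic half-space, whose vertical stress is sigma_z = 3Pz^3/(2 pi R^5). The Python sketch below shows only this linear superposition of fundamental solutions, not the paper's iterative contact and friction algorithm.

import numpy as np

def boussinesq_sigma_z(loads, xy_points, x, y, z):
    """Vertical stress at (x, y, z > 0) from point loads on the surface z = 0."""
    sigma = 0.0
    for p, (x0, y0) in zip(loads, xy_points):
        r2 = (x - x0)**2 + (y - y0)**2
        big_r = np.sqrt(r2 + z**2)
        sigma += 3.0 * p * z**3 / (2.0 * np.pi * big_r**5)
    return sigma

# Two unit loads 1 mm apart; stress 0.5 mm below the first load.
print(boussinesq_sigma_z([1.0, 1.0], [(0.0, 0.0), (1.0, 0.0)], 0.5, 0.0, 0.5))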
Compensating the intensity fall-off effect in cone-beam tomography by an empirical weight formula.
Chen, Zikuan; Calhoun, Vince D; Chang, Shengjiang
2008-11-10
The Feldkamp-Davis-Kress (FDK) algorithm is widely adopted for cone-beam reconstruction due to its one-dimensional filtered backprojection structure and parallel implementation. In a reconstruction volume, the conspicuous cone-beam artifact manifests as intensity fall-off along the longitudinal direction (the gantry rotation axis). This effect is inherent to circular cone-beam tomography due to the fact that a cone-beam dataset acquired from circular scanning fails to meet the data sufficiency condition for volume reconstruction. Based on observations of the intensity fall-off phenomenon associated with the FDK reconstruction of a ball phantom, we propose an empirical weight formula to compensate for the fall-off degradation. Specifically, a reciprocal cosine can be used to compensate the voxel values along the longitudinal direction during three-dimensional backprojection reconstruction, in particular boosting the values of voxels at positions with large cone angles. The intensity degradation within the z plane, albeit insignificant, can also be compensated by the same weight formula through a parameter for radial distance dependence. Computer simulations and phantom experiments are presented to demonstrate the effectiveness of the compensation for the fall-off effect inherent in circular cone-beam tomography.
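The reciprocal-cosine idea can be sketched as follows: voxels reached at larger cone angles have their values boosted by 1/cos(kappa)^n. The exponent and radial term here are illustrative stand-ins for the paper's empirically tuned parameters, not its published formula.

import numpy as np

def falloff_weight(z, r, source_to_axis, n=1.0, radial_coeff=0.0):
    """Compensation weight for a voxel at height z and in-plane radius r."""
    cone_angle = np.arctan2(np.abs(z), source_to_axis + radial_coeff * r)
    return 1.0 / np.cos(cone_angle)**n

zz = np.linspace(-80, 80, 5)       # mm along the rotation axis
print(falloff_weight(zz, r=0.0, source_to_axis=600.0).round(4))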
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, S; Yao, W
2015-06-15
Purpose: To study different noise-reduction algorithms and to improve the image quality of low-dose cone beam CT for patient positioning in radiation therapy. Methods: In low-dose cone-beam CT, the reconstructed image is contaminated with excessive quantum noise. In this study, three well-developed noise reduction algorithms, namely a) the penalized weighted least square (PWLS) method, b) the split-Bregman total variation (TV) method, and c) the compressed sensing (CS) method, were studied and applied to images of a computer-simulated “Shepp-Logan” phantom and a physical CATPHAN phantom. Up to 20% additive Gaussian noise was added to the Shepp-Logan phantom. The CATPHAN phantom was scanned by a Varian OBI system with 100 kVp, 4 ms and 20 mA. For comparing the performance of these algorithms, the peak signal-to-noise ratio (PSNR) of the denoised images was computed. Results: The algorithms were shown to have the potential to reduce the noise level in low-dose CBCT images. For the Shepp-Logan phantom, an improvement in PSNR of 2 dB, 3.1 dB and 4 dB was observed using PWLS, TV and CS respectively, while for CATPHAN, the improvement was 1.2 dB, 1.8 dB and 2.1 dB, respectively. Conclusion: Penalized weighted least square, total variation and compressed sensing methods were studied and compared for reducing the noise in images of a simulated phantom and a physical phantom scanned by low-dose CBCT. The techniques have shown promising results for noise reduction in terms of PSNR improvement. However, reducing the noise without compromising the smoothness and resolution of the image needs more extensive research.
NASA Astrophysics Data System (ADS)
Becker, Matthew Rand
I present a new algorithm, CALCLENS, for efficiently computing weak gravitational lensing shear signals from large N-body light cone simulations over a curved sky. This new algorithm properly accounts for the sky curvature and boundary conditions, is able to produce redshift-dependent shear signals including corrections to the Born approximation by using multiple-plane ray tracing, and properly computes the lensed images of source galaxies in the light cone. The key feature of this algorithm is a new, computationally efficient Poisson solver for the sphere that combines spherical harmonic transform and multigrid methods. As a result, large areas of sky (~10,000 square degrees) can be ray traced efficiently at high resolution using only a few hundred cores. Using this new algorithm and curved-sky calculations that only use a slower but more accurate spherical harmonic transform Poisson solver, I study the convergence, shear E-mode, shear B-mode and rotation mode power spectra. Employing full-sky E/B-mode decompositions, I confirm that the numerically computed shear B-mode and rotation mode power spectra are equal at high accuracy (≲1%) as expected from perturbation theory up to second order. Coupled with realistic galaxy populations placed in large N-body light cone simulations, this new algorithm is ideally suited for the construction of synthetic weak lensing shear catalogs to be used to test for systematic effects in data analysis procedures for upcoming large-area sky surveys. The implementation presented in this work, written in C and employing widely available software libraries to maintain portability, is publicly available at http://code.google.com/p/calclens.
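The spherical-harmonic half of such a solver is compact, because the Laplacian is diagonal in harmonic space: solving lap(psi) = f reduces to dividing each a_lm by -l(l+1). Below is a Python sketch using healpy (assumed available); CALCLENS pairs this kind of transform step with multigrid, which is not shown here.

import numpy as np
import healpy as hp

def poisson_sphere(f_map, lmax):
    """Solve the Poisson equation on the full sphere via harmonic space."""
    nside = hp.get_nside(f_map)
    alm = hp.map2alm(f_map, lmax=lmax)
    ell = np.arange(lmax + 1)
    inv_lap = np.zeros(lmax + 1)
    inv_lap[1:] = -1.0 / (ell[1:] * (ell[1:] + 1))  # drop the l=0 monopole
    alm = hp.almxfl(alm, inv_lap)
    return hp.alm2map(alm, nside)

nside = 64
f = np.random.default_rng(1).standard_normal(hp.nside2npix(nside))
psi = poisson_sphere(f - f.mean(), lmax=2 * nside)  # source must have zero mean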
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaskowiak, J; Ahmad, S; Ali, I
Purpose: To investigate quantitatively the performance of different deformable-image-registration (DIR) algorithms with helical (HCT), axial (ACT) and cone-beam CT (CBCT) by evaluating the variations in the CT-numbers and lengths of targets moving with controlled motion-patterns. Methods: Four DIR-algorithms, including demons, fast-demons, Horn-Schunck and Lucas-Kanade from the DIRART-software, are used to register CT-images of a mobile-phantom. The mobile-phantom is scanned with different imaging techniques that include helical, axial and cone-beam CT. The phantom includes three targets with different lengths that are made from water-equivalent material and inserted in low-density foam which is moved with adjustable motion-amplitudes and frequencies. Results: Most of the DIR-algorithms are able to reproduce the lengths of the stationary-targets; however, they do not reproduce the CT-number values in CBCT. The image-artifacts induced by motion are more regular in CBCT imaging, where the mobile-target elongation increases linearly with motion-amplitude. In ACT and HCT, the motion-artifacts are irregular, where some mobile-targets are elongated or shrunk depending on the motion-phase during imaging. The DIR-algorithms are successful in deforming the images of the mobile-targets to the images of the stationary-targets, reproducing the CT-number values and length of the target, for motion-amplitudes < 20 mm. Similarly in ACT, all DIR-algorithms reproduced the actual CT-number and length of the stationary-targets for motion-amplitudes < 15 mm. As stronger motion-artifacts are induced in HCT and ACT, the DIR-algorithms fail to reproduce the CT-values and shape of the stationary-targets, and the fast-demons algorithm has the worst performance. Conclusion: Most DIR-algorithms reproduce the CT-number values and lengths of the stationary-targets in HCT and ACT images with motion-artifacts induced by small motion-amplitudes. As motion-amplitudes increase, the DIR-algorithms fail to deform the mobile-target images to the stationary-images in HCT and ACT. In CBCT, the DIR-algorithms are successful in reproducing the length and shape of the stationary-targets; however, they fail to reproduce the accurate CT-number level.
Combined algorithmic and GPU acceleration for ultra-fast circular conebeam backprojection
NASA Astrophysics Data System (ADS)
Brokish, Jeffrey; Sack, Paul; Bresler, Yoram
2010-04-01
In this paper, we describe the first implementation and performance of a fast O(N³ log N) hierarchical backprojection algorithm for cone beam CT with a circular trajectory, developed on a modern Graphics Processing Unit (GPU). The resulting tomographic backprojection system for 3D cone beam geometry combines speedup through algorithmic improvements provided by the hierarchical backprojection algorithm with speedup from a massively parallel hardware accelerator. For data parameters typical in diagnostic CT and using a mid-range GPU card, we report reconstruction speeds of up to 360 frames per second, and a relative speedup of almost 6x compared to conventional backprojection on the same hardware. The significance of these results is twofold. First, they demonstrate that the reduction in operation counts demonstrated previously for the FHBP algorithm can be translated to a comparable run-time improvement in a massively parallel hardware implementation, while preserving stringent diagnostic image quality. Second, the dramatic speedup and throughput numbers achieved indicate the feasibility of systems based on this technology which achieve real-time 3D reconstruction for state-of-the-art diagnostic CT scanners with small footprint, high reliability, and affordable cost.
Projection matrix acquisition for cone-beam computed tomography iterative reconstruction
NASA Astrophysics Data System (ADS)
Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Shi, Wenlong; Zhang, Caixin; Gao, Zongzhao
2017-02-01
The projection matrix is an essential and time-consuming part of computed tomography (CT) iterative reconstruction. In this article a novel calculation algorithm for the three-dimensional (3D) projection matrix is proposed to quickly acquire the matrix for cone-beam CT (CBCT). The volume to be reconstructed is considered as consisting of three orthogonal sets of equally spaced, parallel planes rather than individual voxels. After computing the intersections of the rays with these planes, the intersection coordinates are compared with the voxel vertices to obtain the indices of the voxels that the ray traverses. Instead of considering the ray's slope relative to each voxel, only the positions of two points need to be compared. Finally, computer simulation is used to verify the effectiveness of the algorithm.
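The plane-intersection idea is essentially Siddon-style ray tracing. Here is a compact 2D Python sketch under unit-spaced-grid assumptions (the 3D case adds one more plane set; the names and toy geometry are illustrative): gather the ray's parametric crossings of the two orthogonal sets of grid lines, sort them, and read off per-pixel intersection lengths, which are the nonzeros of one projection-matrix row.

import numpy as np

def trace_ray(p0, p1, nx, ny):
    """Return [(pixel index, intersection length), ...] along segment p0->p1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    alphas = [0.0, 1.0]
    for axis, n in ((0, nx), (1, ny)):
        if d[axis] != 0.0:
            a = (np.arange(n + 1) - p0[axis]) / d[axis]   # grid-line crossings
            alphas.extend(a[(a > 0.0) & (a < 1.0)])
    alphas = np.unique(alphas)
    entries = []
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d                    # segment midpoint
        i, j = int(mid[0]), int(mid[1])
        if 0 <= i < nx and 0 <= j < ny:                   # inside the grid?
            entries.append(((i, j), (a1 - a0) * np.linalg.norm(d)))
    return entries

print(trace_ray((-1.0, 0.5), (4.0, 2.5), nx=3, ny=3))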
A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.
Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe
2018-01-01
Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. The 4D iterations were implemented on a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction, with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate clinical use.
Computational models of human vision with applications
NASA Technical Reports Server (NTRS)
Wandell, B. A.
1985-01-01
Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision; the resulting spatial frequency spectra are similar to the spectra of actual cone mosaics. The Hartley transform was evaluated as a tool for image processing, and it is suggested that it could be used in signal processing applications, e.g., image processing.
Cone-Beam Computed Tomography for Image-Guided Radiation Therapy of Prostate Cancer
2008-01-01
for exact volumetric image reconstruction. As a consequence, images reconstructed by approximate algorithms, mostly based on the Feldkamp algorithm...patient dose from CBCT. Reverse helical CBCT has been developed for exact reconstruction of volumetric images, region-of-interest (ROI) reconstruction...algorithm with a priori information in few-view CBCT for IGRT. We expect the proposed algorithm can reduce the number of projections needed for volumetric
Fast generation of computer-generated holograms using wavelet shrinkage.
Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2017-01-09
Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.
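A minimal illustration of the shrinkage idea with PyWavelets: decompose the real and imaginary parts of a sampled complex amplitude, keep only the largest fraction of coefficients, and reconstruct. The wavelet, decomposition level, and `keep` fraction are assumptions for illustration, not the paper's parameters.

```python
import numpy as np
import pywt

def shrink_complex_amplitude(amplitude, wavelet="db4", level=3, keep=0.01):
    """Wavelet shrinkage of a complex amplitude pattern: zero all but the
    largest `keep` fraction of wavelet coefficients, per real/imag part.
    """
    parts = []
    for part in (amplitude.real, amplitude.imag):
        coeffs = pywt.wavedec2(part, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep)  # keep top fraction
        arr[np.abs(arr) < thresh] = 0.0
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        parts.append(pywt.waverec2(coeffs, wavelet))
    return parts[0] + 1j * parts[1]
```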
NASA Astrophysics Data System (ADS)
Tang, Shaojie; Tang, Xiangyang
2016-03-01
Axial cone beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As potential candidates with analytic form for the task, the backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical and axial reconstruction from CB and fan beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via the adoption of virtual PI-line segments. However, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer exact on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely, axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-BP Hilbert transform can be eliminated, at the possible expense of losing the BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of the axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in a region of interest (ROI).
Strong-field ionization with twisted laser pulses
NASA Astrophysics Data System (ADS)
Paufler, Willi; Böning, Birger; Fritzsche, Stephan
2018-04-01
We apply quantum trajectory Monte Carlo computations in order to model strong-field ionization of atoms by twisted Bessel pulses and calculate photoelectron momentum distributions (PEMD). Since Bessel beams can be considered as an infinite superposition of circularly polarized plane waves with the same helicity, whose wave vectors lie on a cone, we compare the PEMD of such Bessel pulses to those of a circularly polarized pulse. We focus on the momentum distributions in the propagation direction of the pulse and show how these distributions are affected by experimentally accessible parameters, such as the opening angle of the beam or the impact parameter of the atom with regard to the beam axis. In particular, we show that higher photoelectron momenta can be found if the opening angle is increased.
Twisted light transmission over 143 km
NASA Astrophysics Data System (ADS)
Krenn, Mario; Handsteiner, Johannes; Fink, Matthias; Fickler, Robert; Ursin, Rupert; Malik, Mehul; Zeilinger, Anton
2016-11-01
Spatial modes of light can potentially carry a vast amount of information, making them promising candidates for both classical and quantum communication. However, the distribution of such modes over large distances remains difficult. Intermodal coupling complicates their use with common fibers, whereas free-space transmission is thought to be strongly influenced by atmospheric turbulence. Here, we show the transmission of orbital angular momentum modes of light over a distance of 143 km between two Canary Islands, which is 50× greater than the maximum distance achieved previously. As a demonstration of the transmission quality, we use superpositions of these modes to encode a short message. At the receiver, an artificial neural network is used for distinguishing between the different twisted light superpositions. The algorithm is able to identify different mode superpositions with an accuracy of more than 80% up to the third mode order and decode the transmitted message with an error rate of 8.33%. Using our data, we estimate that the distribution of orbital angular momentum entanglement over more than 100 km of free space is feasible. Moreover, the quality of our free-space link can be further improved by the use of state-of-the-art adaptive optics systems.
Grover's unstructured search by using a transverse field
NASA Astrophysics Data System (ADS)
Jiang, Zhang; Rieffel, Eleanor; Wang, Zhihui
2017-04-01
We design a circuit-based quantum algorithm to search for a needle in a haystack, giving the same quadratic speedup achieved by Grover's original algorithm. In our circuit-based algorithm, the problem Hamiltonian (oracle) and a transverse field (instead of Grover's diffusion operator) are applied to the system alternately. We construct a periodic time sequence such that the resultant unitary drives a closed transition between two states that have high degrees of overlap with the initial state (the even superposition of all states) and the target state, respectively. Let N = 2^n be the size of the search space. The transition rate in our algorithm is of order Θ(1/√N), and the overlaps are of order Θ(1), yielding a nearly optimal query complexity of T = (π/(2√2))√N. Our algorithm is inspired by a class of algorithms proposed by Farhi et al., namely the Quantum Approximate Optimization Algorithm (QAOA); our method offers a route to optimizing the parameters in QAOA by restricting them to be periodic in time.
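For comparison, the quadratic speedup this matches is easy to verify numerically with Grover's original iteration (oracle phase flip plus inversion about the mean), which the transverse-field construction replaces while keeping the Θ(√N) query scaling. A small state-vector sketch:

```python
import numpy as np

def grover(n_qubits, target):
    """Simulate plain Grover search: ~ (pi/4) sqrt(N) iterations rotate the
    even superposition onto the marked state."""
    N = 2 ** n_qubits
    psi = np.full(N, 1.0 / np.sqrt(N))        # even superposition of all states
    n_steps = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(n_steps):
        psi[target] *= -1.0                   # oracle: phase-flip the target
        psi = 2.0 * psi.mean() - psi          # diffusion: invert about the mean
    return n_steps, abs(psi[target]) ** 2

print(grover(10, target=123))                 # ~25 queries, P(success) ~ 1
```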
Color enhancement for portable LCD displays in low-power mode
NASA Astrophysics Data System (ADS)
Shih, Kuang-Tsu; Huang, Tai-Hsiang; Chen, Homer H.
2011-09-01
Switching the backlight of handheld devices to low power mode saves energy but affects the color appearance of an image. In this paper, we consider the chroma degradation problem and propose an enhancement algorithm that incorporates the CIECAM02 appearance model to quantitatively characterize the problem. In the proposed algorithm, we enhance the color appearance of the image in low power mode by weighted linear superposition of the chroma of the image and that of the estimated dim-backlight image. Subjective tests are carried out to determine the perceptually optimal weighting and prove the effectiveness of our framework.
Axial Cone Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering
Tang, Shaojie; Tang, Xiangyang
2016-01-01
Goal: The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical reconstruction from cone beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. Methods: The solution is an integration of the three-dimensional (3D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer-simulated Forbild head and thoracic phantoms that are rigorous in inspecting reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Results: Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts that exist at off-central planes in the images reconstructed by the 3D weighted axial CB-BPF/DBPF algorithm. Conclusion: Integrated with orthogonal butterfly filtering, the 3D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. Significance: The proposed 3D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications. PMID:26660512
FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.
2016-09-01
We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation (the locality of the dark matter equations and the scale invariance of the problem) as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
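The central trick is that a spectrum sampled on a log-spaced grid is, exactly at the samples, a superposition of complex power laws whose coefficients come from a single FFT. A few lines of NumPy make this concrete (the bias exponent `nu` and the toy spectrum are my assumptions; see the linked FAST-PT repository for the real implementation):

```python
import numpy as np

def powerlaw_decompose(k, P, nu):
    """FFT of P(k) k^(-nu) on a log-spaced grid gives coefficients c_m with
    P(k) = k^nu sum_m c_m (k / k[0])^(i eta_m), exactly at the sample points.
    """
    n = len(k)
    dlnk = np.log(k[1] / k[0])
    c = np.fft.fft(P * k ** (-nu)) / n
    eta = 2.0 * np.pi * np.fft.fftfreq(n, d=dlnk)   # frequencies in ln k
    return c, eta

nu = -2.0                                           # assumed bias exponent
k = np.logspace(-3, 1, 256)
P = k / (1.0 + (k / 0.02) ** 2) ** 1.5              # toy smooth spectrum
c, eta = powerlaw_decompose(k, P, nu)
P_rec = np.real((c * (k[:, None] / k[0]) ** (1j * eta)).sum(axis=1)) * k ** nu
print(np.max(np.abs(P_rec / P - 1.0)))              # ~1e-13: exact at samples
```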
Operating Quantum States in Single Magnetic Molecules: Implementation of Grover's Quantum Algorithm.
Godfrin, C; Ferhat, A; Ballou, R; Klyatskaya, S; Ruben, M; Wernsdorfer, W; Balestro, F
2017-11-03
Quantum algorithms use the principles of quantum mechanics, such as, for example, quantum superposition, in order to solve particular problems outperforming standard computation. They are developed for cryptography, searching, optimization, simulation, and solving large systems of linear equations. Here, we implement Grover's quantum algorithm, proposed to find an element in an unsorted list, using a single nuclear 3/2 spin carried by a Tb ion sitting in a single molecular magnet transistor. The coherent manipulation of this multilevel quantum system (qudit) is achieved by means of electric fields only. Grover's search algorithm is implemented by constructing a quantum database via a multilevel Hadamard gate. The Grover sequence then allows us to select each state. The presented method is of universal character and can be implemented in any multilevel quantum system with nonequal spaced energy levels, opening the way to novel quantum search algorithms.
Classifying bilinear differential equations by linear superposition principle
NASA Astrophysics Data System (ADS)
Zhang, Lijun; Khalique, Chaudry Masood; Ma, Wen-Xiu
2016-09-01
In this paper, we investigate the linear superposition principle of exponential traveling waves to construct a sub-class of N-wave solutions of Hirota bilinear equations. A necessary and sufficient condition for Hirota bilinear equations to possess this specific sub-class of N-wave solutions is presented. We apply this result to find N-wave solutions to the (2+1)-dimensional KP equation, a (3+1)-dimensional generalized Kadomtsev-Petviashvili (KP) equation, a (3+1)-dimensional generalized BKP equation and the (2+1)-dimensional BKP equation. The inverse question, i.e., constructing Hirota bilinear equations possessing N-wave solutions, is considered and a refined 3-step algorithm is proposed. As examples, we construct two very general kinds of fourth-order Hirota bilinear equations possessing N-wave solutions, one of which satisfies a dispersion relation and the other of which does not.
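The superposition criterion itself is purely algebraic: f = Σᵢ εᵢ exp(kᵢx + lᵢy + ωᵢt) solves a Hirota bilinear equation P(D)f·f = 0 exactly when P vanishes on every difference of wave vectors. A SymPy sketch of this check for a standard KP bilinear polynomial, with the resonant wave-number ansatz (k, k², k³) assumed for illustration (sign conventions for the bilinear form vary):

```python
import sympy as sp

# Bilinear KP polynomial P(x, y, t) = x^4 - 4 x t + 3 y^2 (one common
# convention); the N-wave superposition condition is P(k_i - k_j) = 0.
x, y, t, a, b = sp.symbols("x y t a b")
P = x**4 - 4 * x * t + 3 * y**2

# wave vectors of the form (k, k^2, k^3); substitute the difference of two
diff_vec = {x: a - b, y: a**2 - b**2, t: a**3 - b**3}
print(sp.expand(P.subs(diff_vec)))   # 0: linear superposition of waves holds
```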
Tang, Min; Curtis, Sean; Yoon, Sung-Eui; Manocha, Dinesh
2009-01-01
We present an interactive algorithm for continuous collision detection between deformable models. We introduce multiple techniques to improve the culling efficiency and the overall performance of continuous collision detection. First, we present a novel formulation for continuous normal cones and use these normal cones to efficiently cull large regions of the mesh as part of self-collision tests. Second, we introduce the concept of "procedural representative triangles" to remove all redundant elementary tests between nonadjacent triangles. Finally, we exploit the mesh connectivity and introduce the concept of "orphan sets" to eliminate redundant elementary tests between adjacent triangle primitives. In practice, we can reduce the number of elementary tests by two orders of magnitude. These culling techniques have been combined with bounding volume hierarchies and can result in one order of magnitude performance improvement as compared to prior collision detection algorithms for deformable models. We highlight the performance of our algorithm on several benchmarks, including cloth simulations, N-body simulations, and breaking objects.
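As a rough illustration of the first technique, a normal-cone culling test asks whether all triangle normals in a mesh region fit inside a sufficiently narrow cone; if they do, the region cannot fold back onto itself, and its elementary self-collision tests can be skipped. The sketch below is the simplified static version and omits both the continuous (inter-frame) cones and the contour condition that a sound culling test additionally requires:

```python
import numpy as np

def normal_cone_may_cull(normals):
    """Return True if all unit normals lie within a cone of half-angle
    < pi/2 around their mean direction (necessary part of self-collision
    culling; the boundary-contour test is deliberately omitted here)."""
    n = np.asarray(normals, float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    axis = n.mean(axis=0)
    axis /= np.linalg.norm(axis)
    half_angle = np.arccos(np.clip(n @ axis, -1.0, 1.0)).max()
    return half_angle < np.pi / 2
```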
Feasibility study of low-dose intra-operative cone-beam CT for image-guided surgery
NASA Astrophysics Data System (ADS)
Han, Xiao; Shi, Shuanghe; Bian, Junguo; Helm, Patrick; Sidky, Emil Y.; Pan, Xiaochuan
2011-03-01
Cone-beam computed tomography (CBCT) has been increasingly used during surgical procedures for providing accurate three-dimensional anatomical information for intra-operative navigation and verification. High-quality CBCT images are in general obtained through reconstruction from projection data acquired at hundreds of view angles, which is associated with a non-negligible amount of radiation exposure to the patient. In this work, we have applied a novel image-reconstruction algorithm, the adaptive-steepest-descent-POCS (ASD-POCS) algorithm, to reconstruct CBCT images from projection data at a significantly reduced number of view angles. Preliminary results from experimental studies involving both simulated data and real data show that images of comparable quality to those presently available in clinical image-guidance systems can be obtained by use of the ASD-POCS algorithm from a fraction of the projection data that are currently used. The result implies potential value of the proposed reconstruction technique for low-dose intra-operative CBCT imaging applications.
Poludniowski, Gavin; Webb, Steve; Evans, Philip M
2012-03-01
Artifacts in treatment-room cone-beam reconstructions have been observed at the authors' center when cone-beam acquisition is simultaneous with radio frequency (RF) transponder tracking using the Calypso 4D system (Calypso Medical, Seattle, WA). These artifacts manifest as CT-number modulations and increased CT noise. The authors present a method for the suppression of the artifacts. The authors propose a three-stage postprocessing technique that can be applied to image volumes previously reconstructed by a cone-beam system. The stages are (1) segmentation of voxels into air, soft tissue, and bone; (2) application of a 2D spatial filter in the axial plane to the soft-tissue voxels; and (3) normalization to remove streaking along the axial direction. The algorithm was tested on patient data acquired with Synergy XVI cone-beam CT systems (Elekta, Crawley, United Kingdom). The computational demands of the suggested correction are small, taking less than 15 s per cone-beam reconstruction on a desktop PC. For a moderate loss of spatial resolution, the artifacts are strongly suppressed and low-contrast visibility is improved. The correction technique proposed is fast and effective in removing the artifacts caused by simultaneous cone-beam imaging and RF-transponder tracking.
NASA Astrophysics Data System (ADS)
Sudicky, E. A.; Unger, A. J. A.; Lacombe, S.
1995-02-01
A noniterative algorithm for handling prescribed well bore boundary conditions while pumping or injecting fluid in a three-dimensional heterogeneous aquifer is described. The algorithm is formulated by superimposing conductive one-dimensional line elements representing the well screen onto the three-dimensional matrix elements representing the aquifer. Storage in the well casing is also naturally accommodated by the superposition of the line elements. The numerical algorithm is verified by comparison with results obtained from the solution of Papadopulos and Cooper (1967). A large-scale example problem involving groundwater extraction from a partially penetrating pumping well located in a highly heterogeneous confined aquifer is presented to demonstrate the utility of the approach.
Unified algorithm of cone optics to compute solar flux on central receiver
NASA Astrophysics Data System (ADS)
Grigoriev, Victor; Corsi, Clotilde
2017-06-01
Analytical algorithms to compute the flux distribution on a central receiver are a faster alternative to ray tracing. Many variants exist, with HFLCAL and UNIZAR being the most recognized and verified. In this work, a generalized algorithm is presented which is valid for an arbitrary radially symmetric sun shape. Heliostat mirrors can have a nonrectangular profile, and the effects of shading and blocking, strong defocusing and astigmatism can be taken into account. The algorithm is suitable for parallel computing and can benefit from hardware acceleration of polygon texturing.
An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT
NASA Astrophysics Data System (ADS)
Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.
2009-06-01
A new method is proposed for scatter correction of cone-beam CT images. A coarse reconstruction is used in the initial iteration steps. Modelling of the x-ray tube spectra and detector response is included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved, as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different from that of the background material in which it was embedded.
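The interpolation stage of such a scheme is simple: scatter is scored only at a sparse grid of detector node points and then expanded to the full detector. A sketch with SciPy, assuming the node coordinates span the detector (the paper specifies linear interpolation between nodes; grid layout and names here are mine):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def upsample_scatter(node_v, node_u, scatter_nodes, det_shape):
    """Expand forced-detection scatter estimates from node points (node_v,
    node_u must cover [0, det_shape-1]) to every detector pixel."""
    interp = RegularGridInterpolator((node_v, node_u), scatter_nodes,
                                     method="linear")
    vv, uu = np.mgrid[0:det_shape[0], 0:det_shape[1]]
    pts = np.stack([vv.ravel(), uu.ravel()], axis=1)
    return interp(pts).reshape(det_shape)
# scatter-corrected primary = measured projection - upsampled scatter
```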
Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
NASA Astrophysics Data System (ADS)
Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.
2008-02-01
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and with the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). Differences of more than 10% in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by a factor of up to 62 (46 on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Vincent W.C., E-mail: htvinwu@polyu.edu.hk; Tse, Teddy K.H.; Ho, Cola L.M.
2013-07-01
Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires a relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system and multigrid superposition (MGS) of the XiO treatment planning system are two commonly used alternatives. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography scans of 6 patients of each cancer type were used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) the soft tissue-bone boundary (Soft/Bone), (5) the soft tissue-air boundary (Soft/Air), and (6) the bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than those of AAA (3.7%±2.5) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
SU-F-J-23: Field-Of-View Expansion in Cone-Beam CT Reconstruction by Use of Prior Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haga, A; Magome, T; Nakano, M
Purpose: Cone-beam CT (CBCT) has become an integral part of online patient setup in image-guided radiation therapy (IGRT). In addition, the utility of CBCT for dose calculation has been actively investigated. However, the limited field-of-view (FOV) size results in CBCT images that lack the peripheral area of the patient body, which undermines the reliability of dose calculation. In this study, we aim to develop an FOV-expanded CBCT in an IGRT system to allow dose calculation. Methods: Three lung cancer patients were selected for this study. We collected the cone-beam projection images in the CBCT-based IGRT system (X-ray volume imaging unit, ELEKTA), where the FOV size of the CBCT provided with these projections was 410 × 410 mm². Using these projections, a CBCT with a size of 728 × 728 mm² was reconstructed by a posteriori estimation algorithm including prior image constrained compressed sensing (PICCS). The treatment planning CT was used as the prior image. To assess the effectiveness of the FOV expansion, a dose calculation was performed on the expanded CBCT image with the region-of-interest (ROI) density mapping method, and it was compared with that of the treatment planning CT as well as that of the CBCT reconstructed by the filtered back projection (FBP) algorithm. Results: The a posteriori estimation algorithm with PICCS clearly visualized the area outside the normal FOV, whereas the FBP algorithm yielded severe streak artifacts outside the normal FOV due to under-sampling. The dose calculation result using the expanded CBCT agreed very well with that using the treatment planning CT; the maximum dose difference was 1.3% for gross tumor volumes. Conclusion: With a posteriori estimation algorithm, the FOV in CBCT can be expanded. The dose comparison results suggest that the use of expanded CBCTs is acceptable for dose calculation in adaptive radiation therapy. This study has been supported by KAKENHI (15K08691).
Hsieh, Jiang; Nilsen, Roy A.; McOlash, Scott M.
2006-01-01
A three-dimensional (3D) weighted helical cone beam filtered backprojection (CB-FBP) algorithm (namely, the original 3D weighted helical CB-FBP algorithm) has already been proposed to reconstruct images from projection data acquired along a helical trajectory in angular ranges up to [0, 2π]. However, an overscan is usually employed in the clinic to reconstruct tomographic images with superior noise characteristics at the most challenging anatomic structures, such as head and spine, extremity imaging, and CT angiography as well. To obtain the best achievable noise characteristics or dose efficiency in a helical overscan, we extended the 3D weighted helical CB-FBP algorithm to handle helical pitches that are smaller than 1:1 (namely, the extended 3D weighted helical CB-FBP algorithm). By decomposing a helical overscan with an angular range of [0, 2π + Δβ] into a union of full scans corresponding to an angular range of [0, 2π], the extended 3D weighting function is a summation of all 3D weighting functions corresponding to each full scan. An experimental evaluation shows that the extended 3D weighted helical CB-FBP algorithm can improve the noise characteristics or dose efficiency of the 3D weighted helical CB-FBP algorithm at helical pitches smaller than 1:1, while its reconstruction accuracy and computational efficiency are maintained. It is believed that such an efficient CB reconstruction algorithm that can provide superior noise characteristics or dose efficiency at low helical pitches may find extensive application in CT medical imaging. PMID:23165031
NASA Technical Reports Server (NTRS)
Bhandari, P.; Wu, Y. C.; Roschke, E. J.
1989-01-01
A simple solar flux calculation algorithm for a cylindrical cavity type solar receiver has been developed and implemented on an IBM PC-AT. Using cone optics, the contour error method is utilized to handle the slope error of a paraboloidal concentrator. The flux distribution on the side wall is calculated by integration of the energy incident from cones emanating from all the differential elements on the concentrator. The calculations are done for any set of dimensions and properties of the receiver and the concentrator, and account for any spillover on the aperture plate. The results of this algorithm compared excellently with those predicted by more complicated programs. Because of the utilization of axial symmetry and overall simplification, it is extremely fast. It can be easily extended to other axisymmetric receiver geometries.
Dixit, Anant; Claudot, Julien; Lebègue, Sébastien; Rocca, Dario
2017-06-07
By using a formulation based on the dynamical polarizability, we propose a novel implementation of second-order Møller-Plesset perturbation (MP2) theory within a plane wave (PW) basis set. Because of the intrinsic properties of PWs, this method is not affected by basis set superposition errors. Additionally, results are converged without relying on complete basis set extrapolation techniques; this is achieved by using the eigenvectors of the static polarizability as an auxiliary basis set to compactly and accurately represent the response functions involved in the MP2 equations. Summations over the large number of virtual states are avoided by using a formalism inspired by density functional perturbation theory, and the Lanczos algorithm is used to include dynamical effects. To demonstrate this method, applications to three weakly interacting dimers are presented.
Cone-Beam Computed Tomography for Image-Guided Radiation Therapy of Prostate Cancer
2010-01-01
filtered backprojection (FBP) algorithm that does not depend upon the chords has therefore been developed for volumetric image reconstruction in a...reprojection of the first reconstructed volumetric image. The NCAT phantom images reconstructed by the tandem algorithm are shown in Fig. 3. The paper...algorithm has been applied to a circular cone-beam micro-CT for volumetric images of the ROI with a higher spatial resolution and at a reduced exposure to
Vorburger, Robert S; Habeck, Christian G; Narkhede, Atul; Guzman, Vanessa A; Manly, Jennifer J; Brickman, Adam M
2016-01-01
Diffusion tensor imaging suffers from an intrinsically low signal-to-noise ratio. Bootstrap algorithms have been introduced to provide a non-parametric method to estimate the uncertainty of the measured diffusion parameters. To quantify the variability of the principal diffusion direction, bootstrap-derived metrics such as the cone of uncertainty have been proposed. However, bootstrap-derived metrics are not independent of the underlying diffusion profile. A higher mean diffusivity causes a smaller signal-to-noise ratio and, thus, increases the measurement uncertainty. Moreover, the goodness of the tensor model, which relies strongly on the complexity of the underlying diffusion profile, influences bootstrap-derived metrics as well. The presented simulations clearly depict the cone of uncertainty as a function of the underlying diffusion profile. Since the relationship between the cone of uncertainty and common diffusion parameters, such as the mean diffusivity and the fractional anisotropy, is not linear, the cone of uncertainty has a different sensitivity than these parameters. In vivo analysis of the fornix reveals the cone of uncertainty to be a predictor of memory function among older adults, whereas no significant correlation occurs with the common diffusion parameters. The present work not only demonstrates the cone of uncertainty as a function of the actual diffusion profile, but also discloses the cone of uncertainty as a sensitive predictor of memory function. Future studies should incorporate bootstrap-derived metrics to provide more comprehensive analysis.
Alternative method of quantum state tomography toward a typical target via a weak-value measurement
NASA Astrophysics Data System (ADS)
Chen, Xi; Dai, Hong-Yi; Yang, Le; Zhang, Ming
2018-03-01
The weak-interaction requirement usually limits the application of weak-value measurement. This limitation dominates the performance of quantum state tomography of a typical target in a finite, high-dimensional complex-valued superposition of its basis states, especially when the compressive sensing technique is also employed. Here we propose an alternative method of quantum state tomography, presented as a general model, for such a typical target via weak-value measurement to overcome this limitation. In this model the pointer for the weak-value measurement is a qubit, the target-pointer coupling interaction is no longer confined to the weak-interaction limit, and this interaction under compressive sensing can be described by a Taylor series of the unitary evolution operator. The postselection state at the target is the equal superposition of all basis states, and the pointer readouts are gathered under multiple Pauli operator measurements. The reconstructed quantum state is generated by a total-variation augmented-Lagrangian alternating-direction optimization algorithm. Furthermore, we demonstrate an example of this general model for quantum state tomography of a planar laser-energy distribution and discuss the relations among parameters in both our general model and the original first-order approximate model for this tomography.
Quantum reinforcement learning.
Dong, Daoyi; Chen, Chunlin; Li, Hanxiong; Tarn, Tzyh-Jong
2008-10-01
The key approaches for machine learning, particularly learning in unknown probabilistic environments, are new representations and computation mechanisms. In this paper, a novel quantum reinforcement learning (QRL) method is proposed by combining quantum theory and reinforcement learning (RL). Inspired by the state superposition principle and quantum parallelism, a framework of a value-updating algorithm is introduced. The state (action) in traditional RL is identified as the eigen state (eigen action) in QRL. The state (action) set can be represented with a quantum superposition state, and the eigen state (eigen action) can be obtained by randomly observing the simulated quantum state according to the collapse postulate of quantum measurement. The probability of the eigen action is determined by the probability amplitude, which is updated in parallel according to rewards. Some related characteristics of QRL such as convergence, optimality, and the balance between exploration and exploitation are also analyzed, which shows that this approach makes a good tradeoff between exploration and exploitation using the probability amplitude and can speed up learning through quantum parallelism. To evaluate the performance and practicability of QRL, several simulated experiments are given, and the results demonstrate the effectiveness and superiority of the QRL algorithm for some complex problems. This paper is also an effective exploration of the application of quantum computation to artificial intelligence.
Hydraulic containment: analytical and semi-analytical models for capture zone curve delineation
NASA Astrophysics Data System (ADS)
Christ, John A.; Goltz, Mark N.
2002-05-01
We present an efficient semi-analytical algorithm that uses complex potential theory and superposition to delineate the capture zone curves of extraction wells. This algorithm is more flexible than previously published techniques and allows the user to determine the capture zone for a number of arbitrarily positioned extraction wells pumping at different rates. The algorithm is applied to determine the capture zones and optimal well spacing of two wells pumping at different flow rates and positioned at various orientations to the direction of regional groundwater flow. The algorithm is also applied to determine capture zones for non-colinear three-well configurations as well as to determine optimal well spacing for up to six wells pumping at the same rate. We show that the optimal well spacing is found by minimizing the difference in the stream function evaluated at the stagnation points.
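The machinery is compact: superpose the complex potentials of uniform regional flow and the wells, then trace the streamlines through the stagnation points, the zeros of dΩ/dz. A small NumPy sketch under assumed sign conventions and unit aquifer thickness:

```python
import numpy as np

def stream_function(z, wells, U=1.0):
    """Psi = Im(Omega) for uniform flow in +x superposed with extraction
    wells given as (location z_w, pumping rate Q); capture zone boundaries
    are the streamlines through the stagnation points."""
    omega = -U * z
    for zw, Q in wells:
        omega = omega + (Q / (2.0 * np.pi)) * np.log(z - zw)
    return omega.imag

# stagnation points for two wells: dOmega/dz = 0, multiplied through by
# (z - z1)(z - z2), gives -U(z - z1)(z - z2) + (Q/2pi)(2z - z1 - z2) = 0
z1, z2, Q, U = 0.5j, -0.5j, 1.0, 1.0
p = np.poly1d([-U,
               U * (z1 + z2) + Q / np.pi,
               -U * z1 * z2 - Q * (z1 + z2) / (2.0 * np.pi)])
print(p.roots)        # evaluate Psi here to draw the capture zone envelope
```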
Iteration and superposition encryption scheme for image sequences based on multi-dimensional keys
NASA Astrophysics Data System (ADS)
Han, Chao; Shen, Yuzhen; Ma, Wenlin
2017-12-01
An iteration and superposition encryption scheme for image sequences based on multi-dimensional keys is proposed for high-security, high-capacity and low-noise information transmission. Multiple images to be encrypted are transformed into phase-only images with an iterative algorithm and are then encrypted by different random phases, respectively. The encrypted phase-only images are subjected to inverse Fourier transforms, respectively, generating new object functions. The new functions are placed in different blocks and zero-padded for a sparse distribution; they then propagate to a specific region over different distances by angular spectrum diffraction and are superposed to form a single image. The single image is multiplied by a random phase in the frequency domain, after which the phase part of the frequency spectrum is truncated and the amplitude information is retained. The random phases, propagation distances, and truncated phase information in the frequency domain are employed as multi-dimensional keys. The iteration processing and sparse distribution greatly reduce the crosstalk among the multiple encrypted images, and the superposition of image sequences greatly improves the capacity of the encrypted information. Several numerical experiments based on a designed optical system demonstrate that the proposed scheme can enhance the encrypted information capacity and keep image transmission at a highly desired security level.
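The angular spectrum propagation step used to superpose the sparse blocks can be written directly with FFTs (a minimal sketch; adequate sampling is assumed and evanescent components are simply dropped):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field by distance z with the angular spectrum
    method: FFT, multiply by the free-space transfer function, inverse FFT."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)       # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)
```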
NASA Astrophysics Data System (ADS)
Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.
2016-03-01
Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous, containing bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended for treatment planning in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.
NASA Technical Reports Server (NTRS)
Daywitt, J.; Kutler, P.; Anderson, D.
1977-01-01
The technique of floating shock fitting is adapted to the computation of the inviscid flowfield about circular cones in a supersonic free stream at angles of attack that exceed the cone half-angle. The resulting equations are applicable over the complete range of free-stream Mach numbers, angles of attack and cone half-angles for which the bow shock is attached. A finite difference algorithm is used to obtain the solution by an unsteady relaxation approach. The bow shock, embedded cross-flow shock, and vortical singularity in the leeward symmetry plane are treated as floating discontinuities in a fixed computational mesh. Where possible, the flowfield is partitioned into windward, shoulder, and leeward regions with each region computed separately to achieve maximum computational efficiency. An alternative shock fitting technique which treats the bow shock as a computational boundary is developed and compared with the floating-fitting approach. Several surface boundary condition schemes are also analyzed.
Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh
2018-01-01
Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unusable. Streaking artifacts and cavities around teeth are the main causes of degradation. In this article, we propose a new artifact reduction algorithm with three parallel components. The first component extracts teeth based on modeling the image histogram with a Gaussian mixture model. The streaking-artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Finally, the results of these three components are combined in a fusion step to create a visually good image that is more compatible with the human visual system. Results show that the proposed algorithm reduces the artifacts of dental CBCT images and produces clean images.
Ion mobility spectrometry: A personal view of its development at UCSB
2014-09-15
molecules. As we progressed we realized that new, more accurate algorithms were needed to augment our early projection approximation (PA) for determining...required. The goal was to maintain some of the speed of the projection approximation and retain the accuracy of the trajectory method. Christian Bleiholder, while a postdoc in my group, did just that by developing the projection superposition approximation (PSA) [31–35]. This new method is 100
Objective identification of residue ranges for the superposition of protein structures
2011-01-01
Background: The automation of objectively selecting amino acid residue ranges for structure superpositions is important for meaningful and consistent protein structure analyses. So far there is no widely used standard for choosing these residue ranges for experimentally determined protein structures, where the manual selection of residue ranges or the use of suboptimal criteria remain commonplace. Results: We present an automated and objective method for finding amino acid residue ranges for the superposition and analysis of protein structures, in particular for structure bundles resulting from NMR structure calculations. The method is implemented in an algorithm, CYRANGE, that yields, without protein-specific parameter adjustment, appropriate residue ranges in most commonly occurring situations, including low-precision structure bundles, multi-domain proteins, symmetric multimers, and protein complexes. Residue ranges are chosen to comprise as many residues of a protein domain as possible, up to the point where including further residues would lead to a steep rise in the RMSD value. Residue ranges are determined by first clustering residues into domains based on the distance variance matrix, and then refining for each domain the initial choice of residues by excluding residues one by one until the relative decrease of the RMSD value becomes insignificant. A penalty for the opening of gaps favours contiguous residue ranges in order to obtain a result that is as simple as possible, but not simpler. Results are given for a set of 37 proteins and compared with those of commonly used protein structure validation packages. We also provide residue ranges for 6351 NMR structures in the Protein Data Bank. Conclusions: The CYRANGE method is capable of automatically determining residue ranges for the superposition of protein structure bundles for a large variety of protein structures. The method correctly identifies ordered regions. Global structure superpositions based on the CYRANGE residue ranges allow a clear presentation of the structure, and unnecessary small gaps within the selected ranges are absent. In the majority of cases, the residue ranges from CYRANGE contain fewer gaps and cover considerably larger parts of the sequence than those from other methods without significantly increasing the RMSD values. CYRANGE thus provides an objective and automatic method for standardizing the choice of residue ranges for the superposition of protein structures. PMID:21592348
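The RMSD engine underneath any such range selection is least-squares superposition. For reference, a compact sketch of the Kabsch algorithm (CYRANGE's distance-variance clustering and gap penalty are not reproduced here):

```python
import numpy as np

def superpose_rmsd(mobile, reference):
    """Kabsch superposition of two (n_atoms, 3) coordinate sets: optimal
    rotation after centering, with a reflection guard; returns the RMSD."""
    P = mobile - mobile.mean(axis=0)
    Q = reference - reference.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))        # avoid improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))
```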
Geometrical study on two tilting arcs based exact cone-beam CT for breast imaging
NASA Astrophysics Data System (ADS)
Zeng, Kai; Yu, Hengyong; Fajardo, Laurie L.; Wang, Ge
2006-08-01
Breast cancer is the second leading cause of cancer death in women in the United States. Currently, X-ray mammography is the method of choice for screening and diagnosing breast cancer. However, this 2D projective modality is far from perfect, with up to 17% of breast cancers going unidentified. Over the past several years, there has been increasing interest in cone-beam CT for breast imaging. However, previous methods utilizing cone-beam CT only produce approximate reconstructions. Following Katsevich's recent work, we propose a new scanning mode and an associated exact cone-beam CT method for breast imaging. In our design, cone-beam scans are performed along two tilting arcs to collect a sufficient amount of data for exact reconstruction. In our Katsevich-type algorithm, cone-beam data are filtered in a shift-invariant fashion and then backprojected in 3D for the final reconstruction. This approach has several desirable features. First, it allows the data truncation that is unavoidable in practice. Second, it optimizes image quality for quantitative analysis. Third, it is efficient for sequential/parallel computation. Furthermore, we analyze the reconstruction region and the detection window in detail, which are important for numerical implementation.
Liu, Jianfei; Jung, HaeWon; Dubra, Alfredo; Tam, Johnny
2017-09-01
Adaptive optics scanning light ophthalmoscopy (AOSLO) has enabled quantification of the photoreceptor mosaic in the living human eye using metrics such as cell density and average spacing. These rely on the identification of individual cells. Here, we demonstrate a novel approach for computer-aided identification of cone photoreceptors on nonconfocal split detection AOSLO images. Algorithms for identification of cone photoreceptors were developed, based on multiscale circular voting (MSCV) in combination with a priori knowledge that split detection images resemble Nomarski differential interference contrast images, in which dark and bright regions are present on the two sides of each cell. The proposed algorithm locates dark and bright region pairs, iteratively refining the identification across multiple scales. Identification accuracy was assessed in data from 10 subjects by comparing automated identifications with manual labeling, followed by computation of density and spacing metrics for comparison to histology and published data. There was good agreement between manual and automated cone identifications with overall recall, precision, and F1 score of 92.9%, 90.8%, and 91.8%, respectively. On average, computed density and spacing values using automated identification were within 10.7% and 11.2% of the expected histology values across eccentricities ranging from 0.5 to 6.2 mm. There was no statistically significant difference between MSCV-based and histology-based density measurements (P = 0.96, Kolmogorov-Smirnov 2-sample test). MSCV can accurately detect cone photoreceptors on split detection images across a range of eccentricities, enabling quick, objective estimation of photoreceptor mosaic metrics, which will be important for future clinical trials utilizing adaptive optics.
NASA Astrophysics Data System (ADS)
Ji, Y.; Shen, C.
2014-03-01
Taking into account magnetic field line curvature (FLC) pitch angle scattering and charge exchange reactions, the loss rates of O+ (>300 keV) in the inner magnetosphere are investigated using an eigenfunction analysis. FLC scattering provides a mechanism for ring current O+ to enter the loss cone and influences the loss rates caused by charge exchange reactions. Assuming that the pitch angle change is small for each scattering event, the diffusion equation including a charge exchange term is constructed and solved, and the eigenvalues of the equation are identified. The resultant loss rates of O+ are approximately equal to the linear superposition of the loss rate without charge exchange reactions and the loss rate associated with charge exchange reactions alone. The loss time is consistent with observations from the early recovery phases of magnetic storms.
Generalized Fourier slice theorem for cone-beam image reconstruction.
Zhao, Shuang-Ren; Jiang, Dazong; Yang, Kevin; Yang, Kang
2015-01-01
Cone-beam reconstruction theory was developed by Kirillov in 1961, Tuy in 1983, Feldkamp in 1984, Smith in 1985, and Grangeat in 1990. The Fourier slice theorem, proposed by Bracewell in 1956, leads to the Fourier image reconstruction method for parallel-beam geometry, and was extended to fan-beam geometry by Zhao in 1993 and 1995. By combining the above cone-beam image reconstruction theory with the fan-beam Fourier slice theory, a Fourier slice theorem for cone-beam geometry was proposed by Zhao in 1995 in a short conference publication. This article offers the details of the derivation and implementation of this Fourier slice theorem for cone-beam geometry. In particular, the problem of reconstruction from the Fourier domain has been overcome, namely that the value at the origin of Fourier space is of the indeterminate form 0/0; this 0/0 limit is properly handled. As examples, the implementation results for single-circle and two-perpendicular-circle source orbits are shown. In the cone-beam reconstruction, if an interpolation process is considered, the number of calculations for the generalized Fourier slice theorem algorithm is
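The parallel-beam version of the theorem, on which the fan-beam and cone-beam generalizations build, can be verified in a few lines: the 1D FFT of a projection equals the central slice of the object's 2D FFT taken along the same direction.

```python
import numpy as np

# Fourier slice check for the axis-aligned projection of a random image
rng = np.random.default_rng(0)
img = rng.random((128, 128))
proj = img.sum(axis=0)                   # parallel projection onto the x axis
slice_1d = np.fft.fft(proj)              # 1D FFT of the projection
slice_2d = np.fft.fft2(img)[0, :]        # ky = 0 slice of the 2D FFT
print(np.allclose(slice_1d, slice_2d))   # True
```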
Digital breast tomosynthesis geometry calibration
NASA Astrophysics Data System (ADS)
Wang, Xinying; Mainprize, James G.; Kempston, Michael P.; Mawdsley, Gordon E.; Yaffe, Martin J.
2007-03-01
Digital Breast Tomosynthesis (DBT) is a 3D x-ray technique for imaging the breast. The x-ray tube, mounted on a gantry, moves in an arc over a limited angular range around the breast while 7-15 images are acquired over a period of a few seconds. A reconstruction algorithm is used to create a 3D volume dataset from the projection images. This procedure reduces the effects of tissue superposition, often responsible for degrading the quality of projection mammograms. This may help improve sensitivity of cancer detection, while reducing the number of false positive results. For DBT, images are acquired at a set of gantry rotation angles. The image reconstruction process requires several geometrical factors associated with image acquisition to be known accurately, however, vibration, encoder inaccuracy, the effects of gravity on the gantry arm and manufacturing tolerances can produce deviations from the desired acquisition geometry. Unlike cone-beam CT, in which a complete dataset is acquired (500+ projections over 180°), tomosynthesis reconstruction is challenging in that the angular range is narrow (typically from 20°-45°) and there are fewer projection images (~7-15). With such a limited dataset, reconstruction is very sensitive to geometric alignment. Uncertainties in factors such as detector tilt, gantry angle, focal spot location, source-detector distance and source-pivot distance can produce several artifacts in the reconstructed volume. To accurately and efficiently calculate the location and angles of orientation of critical components of the system in DBT geometry, a suitable phantom is required. We have designed a calibration phantom for tomosynthesis and developed software for accurate measurement of the geometric parameters of a DBT system. These have been tested both by simulation and experiment. We will present estimates of the precision available with this technique for a prototype DBT system.
NASA Astrophysics Data System (ADS)
Ma, Ming; Wang, Huafeng; Liu, Yan; Zhang, Hao; Gu, Xianfeng; Liang, Zhengrong
2014-03-01
Cone-beam computed tomography (CBCT) has attracted growing interest of researchers in image reconstruction. In practical applications of CBCT, the mAs level of the X-ray tube current is lowered in order to reduce the CBCT dose. The lowering of the X-ray tube current, however, results in the degradation of image quality. Thus, low-dose CBCT image reconstruction is in effect a noise problem. To acquire clinically acceptable image quality while keeping the X-ray tube current as low as achievable, some penalized weighted least-squares (PWLS)-based image reconstruction algorithms have been developed. One representative strategy in previous work is to model the prior information for solution regularization using an anisotropic penalty term. To enhance edge preservation and noise suppression at a finer scale, a novel algorithm combining the local binary pattern (LBP) with penalized weighted least-squares (PWLS), called the LBP-PWLS-based image reconstruction algorithm, is proposed in this work. The proposed LBP-PWLS-based algorithm adaptively encourages strong diffusion on local spot/flat regions around a voxel and less diffusion on edge/corner ones by adjusting the penalty in the cost function, after the LBP is utilized to classify the region around the voxel as spot, flat or edge. The LBP-PWLS-based reconstruction algorithm was evaluated using sinogram data acquired by a clinical CT scanner from the CatPhan® 600 phantom. Experimental results on the noise-resolution tradeoff measurement and other quantitative measurements demonstrated its feasibility and effectiveness in edge preservation and noise suppression in comparison with a previous PWLS reconstruction algorithm.
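For readers unfamiliar with the texture descriptor, a minimal 2-D LBP sketch follows (the paper's exact encoding and its use on 3-D voxel neighbourhoods may differ):

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour local binary pattern on a 2-D image.

    Sketch of the LBP descriptor the authors use to classify the region
    around each voxel (spot/flat vs. edge/corner); their exact encoding
    and 3-D extension may differ.
    """
    h, w = image.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = image[1:-1, 1:-1]
    # Offsets of the 8 neighbours, visited in a fixed circular order.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neighbour >= center).astype(np.uint8) << bit)
    return codes
```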
SU-E-T-117: Analysis of the ArcCHECK Dosimetry Gamma Failure Using the 3DVH System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, S; Choi, W; Lee, H
2015-06-15
Purpose: To evaluate gamma analysis failures in VMAT patient-specific QA using the ArcCHECK cylindrical phantom. The 3DVH system (Sun Nuclear, FL) was used to analyze the dose difference statistics between the measured dose and the treatment planning system calculated dose. Methods: Four cases of gamma analysis failure were selected retrospectively. Our institutional gamma analysis criteria were absolute dose, 3%/3 mm and a 90% pass rate in ArcCHECK dosimetry. The collapsed cone convolution superposition (CCCS) dose calculation algorithm was used for VMAT. Dose delivery was performed with an Elekta Agility. The A1SL chamber (Standard Imaging, WI) and cavity plug were used for point dose measurement. Delivery QA plans and images were used as the 3DVH reference data instead of the patient plan and image. The measured '.txt' file was used for comparison at the diodes to acquire a global dose level. The '.acml' file was used for AC-PDP and to calculate point dose. Results: The global dose of 3DVH was calculated as 1.10, 1.13, 1.01 and 0.2 Gy, respectively. The 0.2 Gy global dose case was induced by a distance discrepancy. The TPS-calculated point doses were 2.33 Gy to 2.77 Gy and the 3DVH-calculated doses were 2.33 Gy to 2.68 Gy. The maximum dose differences were −2.83% and −3.1% for TPS vs. measured dose and TPS vs. 3DVH calculated dose, respectively, in the same case. The difference between measured and 3DVH was 0.1% in that case. The 3DVH gamma pass rate was 98% to 99.7%. Conclusion: We found the TPS calculation error through the 3DVH calculation using the ArcCHECK measured dose. It appears that our CCCS-based RTP system overestimated dose in the central region and underestimated scattering at the peripheral diode detector points. Relative gamma analysis and point dose measurement are recommended for VMAT DQA in gamma failure cases of ArcCHECK dosimetry.
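As context for the 3%/3 mm criterion used here, a brute-force 1-D global gamma computation can be sketched as follows (illustrative only; 3DVH's 3-D implementation is far more elaborate):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, coords, dd=0.03, dta=3.0):
    """Brute-force global gamma analysis on 1-D dose profiles.

    dd: dose-difference criterion relative to the maximum reference dose,
    dta: distance-to-agreement criterion in mm. A point passes if the
    minimum combined dose/distance metric over all evaluated points <= 1.
    """
    d_norm = dd * dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (d_r, x_r) in enumerate(zip(dose_ref, coords)):
        dose_term = (dose_eval - d_r) / d_norm
        dist_term = (coords - x_r) / dta
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return np.mean(gammas <= 1.0)
```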
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Bin; Li, Yongbao; Liu, Bo
Purpose: The CyberKnife system was initially equipped with fixed circular cones for stereotactic radiosurgery. Two dose calculation algorithms, Ray-Tracing and Monte Carlo, are available in the supplied treatment planning system. A multileaf collimator (MLC) system was recently introduced in the latest generation of the system, capable of arbitrarily shaped treatment fields. The purpose of this study is to develop a model-based dose calculation algorithm to better handle the lateral scatter in irregularly shaped small fields for the CyberKnife system. Methods: A pencil beam dose calculation algorithm widely used in linac-based treatment planning systems was modified. The kernel parameters and intensity profile were systematically determined by fitting to the commissioning data. The model was tuned using only a subset of measured data (4 out of 12 cones) and applied to all fixed circular cones for evaluation. The root mean square (RMS) of the difference between the measured and calculated tissue-phantom-ratios (TPRs) and off-center-ratios (OCRs) was compared. Three cone size correction techniques were developed to better fit the OCRs in the penumbra region, and were further evaluated against the output factors (OFs). The pencil beam model was further validated against measurement data on the variable dodecagon-shaped Iris collimators and a half-beam blocked field. Comparison with the Ray-Tracing and Monte Carlo methods was also performed on a lung SBRT case. Results: The RMS between the measured and calculated TPRs is 0.7% averaged over all cones, with the descending region at 0.5%. The RMSs of the OCR at the infield and outfield regions are both 0.5%. The distance to agreement (DTA) in the OCR penumbra region is 0.2 mm. All three cone size correction models achieve the same improvement in OCR agreement, with the effective source shift model (SSM) preferred due to its ability to predict more accurately the OF variations with the source to axis distance (SAD). In the noncircular field validation, the pencil beam calculated results agreed well with the film measurements of both the Iris collimators and the half-beam blocked field, and fared much better than the Ray-Tracing calculation. Conclusions: The authors have developed a pencil beam dose calculation model for the CyberKnife system. The dose calculation accuracy is better than that of the standard linac-based system because the model parameters were specifically tuned to the CyberKnife system and geometry correction factors were included. The model better handles lateral scatter and has the potential to be used for irregularly shaped fields. Comprehensive validation on MLC-equipped systems is necessary for its clinical implementation. It is fast enough to be used during plan optimization.
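The pencil-beam idea itself can be sketched compactly: the dose in a plane is the in-field fluence convolved with a radially symmetric kernel. The aperture and the two-Gaussian kernel parameters below are assumptions for illustration, not the fitted CyberKnife values from the paper:

```python
import numpy as np
from scipy.signal import fftconvolve

grid = np.arange(-50, 50.5, 0.5)                 # mm, 0.5 mm spacing
xx, yy = np.meshgrid(grid, grid)
r2 = xx**2 + yy**2

fluence = (r2 <= 15.0**2).astype(float)          # assumed 30 mm circular cone
sigma1, sigma2, w = 1.0, 4.0, 0.8                # assumed two-Gaussian kernel
kernel = (w * np.exp(-r2 / (2 * sigma1**2)) +
          (1 - w) * np.exp(-r2 / (2 * sigma2**2)))
kernel /= kernel.sum()

# Lateral dose distribution = fluence (x) pencil-beam kernel.
dose = fftconvolve(fluence, kernel, mode="same")
```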
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, A; Casares-Magaz, O; Elstroem, U
Purpose: Cone-beam CT (CBCT) imaging may enable image- and dose-guided proton therapy, but is challenged by image artefacts. The aim of this study was to demonstrate the general applicability of a previously developed a priori scatter correction algorithm to allow CBCT-based proton dose calculations. Methods: The a priori scatter correction algorithm used a plan CT (pCT) and raw cone-beam projections acquired with the Varian On-Board Imager. The projections were initially corrected for bow-tie filtering and beam hardening and subsequently reconstructed using the Feldkamp-Davis-Kress algorithm (rawCBCT). The rawCBCTs were intensity normalised before rigid and deformable registrations were applied to map the pCTs to the rawCBCTs. The resulting images were forward projected onto the same angles as the raw CB projections. The two sets of projections were subtracted from each other, Gaussian and median filtered, then subtracted from the raw projections and finally reconstructed into the scatter-corrected CBCTs. For evaluation, water equivalent path length (WEPL) maps (from anterior to posterior) were calculated on different reconstructions of three data sets (CB projections and pCT) of three parts of an Alderson phantom. Finally, single-beam spot scanning proton plans (0-360 deg gantry angle in steps of 5 deg; using PyTRiP) treating a 5 cm central spherical target in the pCT were re-calculated on scatter-corrected CBCTs with identical targets. Results: The scatter-corrected CBCTs resulted in sub-mm mean WEPL differences relative to the rigid registration of the pCT for all three data sets. These differences were considerably smaller than what was achieved with the regular Varian CBCT reconstruction algorithm (1-9 mm mean WEPL differences). Target coverage in the re-calculated plans was generally improved using the scatter-corrected CBCTs compared to the Varian CBCT reconstruction. Conclusion: We have demonstrated the general applicability of a priori CBCT scatter correction, potentially opening the way for CBCT-based image/dose-guided proton therapy, including adaptive strategies. Research agreement with Varian Medical Systems, not connected to the present project.
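The projection-domain correction step described above can be sketched as follows (array shapes and filter sizes are assumptions; the published pipeline also includes the registration and FDK reconstruction stages omitted here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def scatter_corrected_projections(raw_proj, pct_proj, sigma=10.0, med_size=5):
    """Sketch of the a priori scatter estimation step described above.

    raw_proj: measured cone-beam projections, shape (angles, rows, cols).
    pct_proj: scatter-free forward projections of the registered plan CT
    at the same angles. The scatter estimate is the smoothed difference,
    which is then removed from the raw data. Filter sizes are assumed.
    """
    scatter = raw_proj - pct_proj
    scatter = gaussian_filter(scatter, sigma=(0, sigma, sigma))
    scatter = median_filter(scatter, size=(1, med_size, med_size))
    # The corrected projections are subsequently reconstructed with FDK.
    return raw_proj - scatter
```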
Certification of windshear performance with RTCA class D radomes
NASA Technical Reports Server (NTRS)
Mathews, Bruce D.; Miller, Fran; Rittenhouse, Kirk; Barnett, Lee; Rowe, William
1994-01-01
Superposition testing of detection range performance forms a digital signal for input into a simulation of the signal and data processing equipment and algorithms to be employed in a sensor system for advanced warning of hazardous windshear. For a suitable pulse-Doppler radar, recording the digital data at the input to the digital signal processor furnishes a realistic operational scenario and an environmentally responsive clutter signal, including all sidelobe clutter, ground moving target indications (GMTI), and large spurious signals due to mainbeam clutter and/or RFI, representative of the urban airport clutter and aircraft scenarios (approach and landing antenna pointing). For linear radar system processes, a signal at the same point in the process from a hazard phenomenon may be calculated from models of the scattering phenomena, for example, as represented in fine 3-dimensional reflectivity and velocity grid structures. Superposition testing thus furnishes a competing signal environment for confirming detection and warning time performance against phenomena that cannot be controlled in a natural environment.
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.; Leban, F.; Mashiko, S.
1986-01-01
Single-channel pilot manual control output in closed-loop tracking tasks is modeled in terms of linear discrete transfer functions which are parsimonious and guaranteed stable. The transfer functions are found by applying a modified superposition time-series generation technique. A Levinson-Durbin algorithm is used to determine the filter which prewhitens the input, and a projective (least squares) fit of pulse response estimates is used to guarantee identified model stability. Results from two case studies are compared to previous findings, where the source data are relatively short records, approximately 25 seconds long. Time delay effects and pilot seasonalities are discussed and analyzed. It is concluded that single-channel time series controller modeling is feasible on short records, and that it is important for the analyst to determine a criterion for best time-domain fit which allows association of model parameter values, such as pure time delay, with actual physical and physiological constraints. The purpose of the modeling is thus paramount.
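The Levinson-Durbin recursion mentioned above is standard; a compact sketch of the form that yields prewhitening (autoregressive) filter coefficients from sample autocorrelations:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: AR(order) coefficients from
    autocorrelations r[0..order], as used for prewhitening-filter design.

    Returns (a, sigma2) for the model x[n] + sum_k a[k] x[n-k] = e[n],
    with a[0] = 1 and innovation variance sigma2.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    sigma2 = r[0]
    for m in range(1, order + 1):
        # Reflection coefficient from the current prediction error.
        k = -np.dot(a[:m], r[m:0:-1]) / sigma2
        a[1:m + 1] = a[1:m + 1] + k * a[m - 1::-1]
        sigma2 *= (1.0 - k * k)
    return a, sigma2
```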
NASA Astrophysics Data System (ADS)
Je, Uikyu; Cho, Hyosung; Lee, Minsik; Oh, Jieun; Park, Yeonok; Hong, Daeki; Park, Cheulkyu; Cho, Heemoon; Choi, Sungil; Koo, Yangseo
2014-06-01
Recently, reducing radiation doses has become an issue of critical importance in the broader radiological community. As a possible technical approach, especially in dental cone-beam computed tomography (CBCT), reconstruction from limited-angle view data (< 360°) would enable fast scanning with reduced doses to the patient. In this study, we investigated and implemented an efficient reconstruction algorithm based on compressed-sensing (CS) theory for this scan geometry and performed systematic simulation studies to investigate the image characteristics. We also performed experimental studies by applying the algorithm to a commercially available dental CBCT system to demonstrate its effectiveness for image reconstruction in incomplete data problems. We successfully reconstructed CBCT images from incomplete projections acquired at selected scan angles of 120, 150, 180, and 200° with a fixed angle step of 1.2° and evaluated the reconstruction quality quantitatively. Both the simulation and experimental demonstrations of CS-based reconstruction from limited-angle view data show that the algorithm can be applied directly to current dental CBCT systems for reducing imaging doses and further improving image quality.
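CS-style CBCT reconstructions of this kind typically alternate a data-consistency update with steps that reduce the image's total variation (TV). A minimal 2-D sketch of the TV gradient used in such steps (the paper's specific algorithm is not reproduced here):

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of the isotropic total-variation term, 2-D sketch.

    A CS iteration would alternate an ART-type data-consistency update
    with small steps along -tv_gradient(img) to reduce image TV.
    """
    # Forward differences (replicated boundary).
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    # Divergence of the normalized gradient field (backward differences).
    div_x = np.diff(dx / mag, axis=1, prepend=np.zeros_like(img[:, :1]))
    div_y = np.diff(dy / mag, axis=0, prepend=np.zeros_like(img[:1, :]))
    return -(div_x + div_y)
```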
NASA Astrophysics Data System (ADS)
Viard, Clément; Nakashima, Kiyoko; Lamory, Barbara; Pâques, Michel; Levecq, Xavier; Château, Nicolas
2011-03-01
This research is aimed at characterizing in vivo differences between healthy and pathological retinal tissues at the microscopic scale using a compact adaptive optics (AO) retinal camera. Tests were performed in 120 healthy eyes and 180 eyes suffering from 19 different pathological conditions, including age-related maculopathy (ARM), glaucoma and rare diseases such as inherited retinal dystrophies. Each patient was first examined using SD-OCT and infrared SLO. Retinal areas of 4°x4° were imaged using an AO flood-illumination retinal camera based on a large-stroke deformable mirror. Contrast was finally enhanced by registering and averaging raw images using classical algorithms. Cellular-resolution images could be obtained in most cases. In ARM, AO images revealed granular contents in drusen, which were invisible in SLO or OCT images, and allowed the observation of the cone mosaic between drusen. In glaucoma cases, the visual field was correlated with changes in cone visibility. In inherited retinal dystrophies, AO helped to evaluate cone loss across the retina. Other microstructures, slightly larger in size than cones, were also visible in several retinas. AO provided potentially useful diagnostic and prognostic information in various diseases. In addition to cones, other microscopic structures revealed by AO images may also be of interest in monitoring retinal diseases.
Point spread function modeling and image restoration for cone-beam CT
NASA Astrophysics Data System (ADS)
Zhang, Hua; Huang, Kui-Dong; Shi, Yi-Kai; Xu, Zhe
2015-03-01
X-ray cone-beam computed tomography (CT) has notable features such as high efficiency and precision, and is widely used in the fields of medical imaging and industrial non-destructive testing, but inherent imaging degradation reduces the quality of CT images. To address the problems of projection image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is proposed first. A general PSF model of cone-beam CT is established, and based on it, the PSF under arbitrary scanning conditions can be calculated directly for projection image restoration without additional measurements, which greatly improves the practicality of cone-beam CT. Secondly, a projection image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which makes the edge contours in projection images and slice images clearer after restoration while keeping the noise at a level equivalent to that of the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods. Supported by the National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), the Young Scientists Fund of the National Natural Science Foundation of China (51105315), the Natural Science Basic Research Program of Shaanxi Province of China (2013JM7003) and the Northwestern Polytechnical University Foundation for Fundamental Research (JC20120226, 3102014KYJD022).
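Once a PSF is available, restoration can proceed by standard frequency-domain deconvolution. A minimal Wiener-filter sketch (a generic stand-in; the paper's restoration additionally uses pre-filtering and pre-segmentation):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-2):
    """Frequency-domain Wiener restoration of a blurred projection image.

    k is an assumed constant noise-to-signal ratio; psf is a small 2-D
    kernel, zero-padded and centred so its peak sits at the origin.
    """
    psf_pad = np.zeros_like(blurred, dtype=float)
    ph, pw = psf.shape
    psf_pad[:ph, :pw] = psf
    psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F_hat))
```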
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spadea, Maria Francesca, E-mail: mfspadea@unicz.it; Verburg, Joost Mathias; Seco, Joao
2014-01-15
Purpose: The aim of the study was to evaluate the dosimetric impact of low-Z and high-Z metallic implants on IMRT plans. Methods: Computed tomography (CT) scans of three patients were analyzed to study effects due to the presence of Titanium (low-Z), Platinum and Gold (high-Z) inserts. To eliminate artifacts in CT images, a sinogram-based metal artifact reduction algorithm was applied. IMRT dose calculations were performed on both the uncorrected and corrected images using a commercial planning system (convolution/superposition algorithm) and an in-house Monte Carlo platform. Dose differences between uncorrected and corrected datasets were computed and analyzed using the gamma index (Pγ<1), setting 2 mm and 2% as the distance-to-agreement and dose-difference criteria, respectively. Beam-specific depth dose profiles across the metal were also examined. Results: Dose discrepancies between corrected and uncorrected datasets were not significant for the low-Z material. High-Z materials caused under-dosage of 20%-25% in the region surrounding the metal and over-dosage of 10%-15% downstream of the hardware. The gamma index test yielded Pγ<1 > 99% for all low-Z cases, while for high-Z cases it returned 91% < Pγ<1 < 99%. Analysis of the depth dose curve of a single beam for the low-Z cases revealed that, although the dose attenuation is altered inside the metal, it does not differ downstream of the insert. However, for high-Z metal implants the dose is increased up to 10%-12% around the insert. In addition, the Monte Carlo method was more sensitive to the presence of metal inserts than the superposition/convolution algorithm. Conclusions: The reduction of metal artifacts in CT images is relevant for high-Z implants. In this case, dose distributions should be calculated using Monte Carlo algorithms, given their superior accuracy in dose modeling in and around the metal. In addition, knowledge of the composition of metal inserts improves the accuracy of the Monte Carlo dose calculation significantly.
Glaser, Adam K; Andreozzi, Jacqueline M; Zhang, Rongxiao; Pogue, Brian W; Gladstone, David J
2015-07-01
To test the use of a three-dimensional (3D) optical cone beam computed tomography reconstruction algorithm for estimation of the imparted 3D dose distribution from megavoltage photon beams in a water tank for quality assurance, by imaging the induced Cherenkov-excited fluorescence (CEF). An intensified charge-coupled device coupled to a standard nontelecentric camera lens was used to tomographically acquire two-dimensional (2D) projection images of CEF from a complex multileaf collimator (MLC) shaped 6 MV linear accelerator x-ray photon beam operating at a dose rate of 600 MU/min. The resulting projections were used to reconstruct the 3D CEF light distribution, a potential surrogate of imparted dose, using a Feldkamp-Davis-Kress cone-beam backprojection reconstruction algorithm. Finally, the reconstructed light distributions were compared to the expected dose values from one-dimensional diode scans, 2D film measurements, and the 3D distribution generated by the clinical Varian ECLIPSE treatment planning system, using a gamma index analysis. A Monte Carlo derived correction was applied to the Cherenkov reconstructions to account for beam hardening artifacts. 3D light volumes were successfully reconstructed over a 400 × 400 × 350 mm³ volume at a resolution of 1 mm. The Cherenkov reconstructions showed agreement with all comparative methods and were also able to recover both inter- and intra-MLC leaf leakage. Based upon a 3%/3 mm criterion, the experimental Cherenkov light measurements showed an 83%-99% pass fraction depending on the chosen threshold dose. The results from this study demonstrate the use of optical cone beam computed tomography using CEF for the profiling of the imparted dose distribution from large area megavoltage photon beams in water.
Liu, Jianfei; Jung, HaeWon; Dubra, Alfredo; Tam, Johnny
2017-01-01
Purpose Adaptive optics scanning light ophthalmoscopy (AOSLO) has enabled quantification of the photoreceptor mosaic in the living human eye using metrics such as cell density and average spacing. These rely on the identification of individual cells. Here, we demonstrate a novel approach for computer-aided identification of cone photoreceptors on nonconfocal split detection AOSLO images. Methods Algorithms for identification of cone photoreceptors were developed, based on multiscale circular voting (MSCV) in combination with a priori knowledge that split detection images resemble Nomarski differential interference contrast images, in which dark and bright regions are present on the two sides of each cell. The proposed algorithm locates dark and bright region pairs, iteratively refining the identification across multiple scales. Identification accuracy was assessed in data from 10 subjects by comparing automated identifications with manual labeling, followed by computation of density and spacing metrics for comparison to histology and published data. Results There was good agreement between manual and automated cone identifications with overall recall, precision, and F1 score of 92.9%, 90.8%, and 91.8%, respectively. On average, computed density and spacing values using automated identification were within 10.7% and 11.2% of the expected histology values across eccentricities ranging from 0.5 to 6.2 mm. There was no statistically significant difference between MSCV-based and histology-based density measurements (P = 0.96, Kolmogorov-Smirnov 2-sample test). Conclusions MSCV can accurately detect cone photoreceptors on split detection images across a range of eccentricities, enabling quick, objective estimation of photoreceptor mosaic metrics, which will be important for future clinical trials utilizing adaptive optics. PMID:28873173
Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.
Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg
2009-07-01
In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
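The structure tensor that steers the directional interpolation can be computed in a few lines; a 2-D sketch follows (the paper operates on 3-D sinogram data, so shapes and smoothing scales here are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_orientation(img, sigma=2.0):
    """Local orientation from the smoothed structure tensor.

    Returns, per pixel, the angle of the dominant eigenvector of
    J = [[Jxx, Jxy], [Jxy, Jyy]]; interpolation then proceeds along
    that direction rather than straight across view angles.
    """
    gy, gx = np.gradient(img.astype(float))
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)
```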
Investigation of BPF algorithm in cone-beam CT with 2D general trajectories.
Zou, Jing; Gui, Jianbao; Rong, Junyan; Hu, Zhanli; Zhang, Qiyang; Xia, Dan
2012-01-01
A mathematical derivation was conducted to illustrate that exact 3D image reconstruction can be achieved for z-homogeneous phantoms from data acquired with 2D general trajectories using the back projection filtration (BPF) algorithm. The conclusion was verified by computer simulation and experimental results with a circular scanning trajectory. Furthermore, the effect of the degree of non-uniformity of the phantoms along the z-axis on the accuracy of 3D reconstruction by the BPF algorithm was investigated by numerical simulation with a gradual-phantom and a disk-phantom. The preliminary results showed that the performance of the BPF algorithm improves with the z-axis homogeneity of the scanned object.
Torsion effect of swing frame on the measurement of horizontal two-plane balancing machine
NASA Astrophysics Data System (ADS)
Wang, Qiuxiao; Wang, Dequan; He, Bin; Jiang, Pan; Wu, Zhaofu; Fu, Xiaoyan
2017-03-01
In this paper, the vibration model of the swing frame of a two-plane balancing machine is first established to calculate the vibration center position of the swing frame. The torsional stiffness formula of the spring plate twisting around the vibration center is then deduced using the superposition principle. Finally, dynamic balancing experiments demonstrate the inadequacy of the A-B-C algorithm, which ignores the torsion effect, and show that the torsional stiffness deduced from experiments is consistent with the torsional stiffness calculated by theory. The experimental data show the influence of the torsion effect of the swing frame on the separation ratio of such balancing machines, which reveals the sources of measurement error and delimits the application scope of the A-B-C algorithm.
Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong
2010-01-01
With the use of adaptive optics (AO), high-resolution microscopic imaging of the living human retina at the single-cell level has been achieved. In an adaptive optics confocal scanning laser ophthalmoscope (AOSLO) system with a small field size (about 1 degree, 280 μm), the motion of the eye severely affects the stabilization of the real-time video images and results in significant distortions of the retina images. In this paper, the Scale-Invariant Feature Transform (SIFT) is used to extract stable point features from the retina images. The Kanade-Lucas-Tomasi (KLT) algorithm is applied to track the features. With the tracked features, the image distortion in each frame is removed by a second-order polynomial transformation, and 10 successive frames are co-added to enhance the image quality. Features of special interest in an image can also be selected manually and tracked by KLT. A point on a cone is selected manually, and the cone is tracked from frame to frame. PMID:21258443
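A rough modern equivalent of that detect-and-track stage can be assembled with OpenCV (file names and parameters below are placeholders; this is a sketch, not the authors' implementation):

```python
import cv2
import numpy as np

# Placeholder frame files; in practice these would be AOSLO video frames.
frames = [cv2.imread(f"frame_{i:03d}.png", cv2.IMREAD_GRAYSCALE)
          for i in range(10)]

# Detect stable point features on the first frame with SIFT.
sift = cv2.SIFT_create()
keypoints = sift.detect(frames[0], None)
pts = np.float32([k.pt for k in keypoints]).reshape(-1, 1, 2)

# Track them through the sequence with pyramidal Lucas-Kanade (KLT).
tracks = [pts]
for prev, curr in zip(frames, frames[1:]):
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, tracks[-1], None)
    tracks.append(next_pts)
# The tracked features then drive a per-frame second-order polynomial
# warp, after which the aligned frames are co-added (averaged).
```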
4D Cone-beam CT reconstruction using a motion model based on principal component analysis
Staub, David; Docef, Alen; Brock, Robert S.; Vaman, Constantin; Murphy, Martin J.
2011-01-01
Purpose: To provide a proof of concept validation of a novel 4D cone-beam CT (4DCBCT) reconstruction algorithm and to determine the best methods to train and optimize the algorithm. Methods: The algorithm animates a patient fan-beam CT (FBCT) with a patient specific parametric motion model in order to generate a time series of deformed CTs (the reconstructed 4DCBCT) that track the motion of the patient anatomy on a voxel by voxel scale. The motion model is constrained by requiring that projections cast through the deformed CT time series match the projections of the raw patient 4DCBCT. The motion model uses a basis of eigenvectors that are generated via principal component analysis (PCA) of a training set of displacement vector fields (DVFs) that approximate patient motion. The eigenvectors are weighted by a parameterized function of the patient breathing trace recorded during 4DCBCT. The algorithm is demonstrated and tested via numerical simulation. Results: The algorithm is shown to produce accurate reconstruction results for the most complicated simulated motion, in which voxels move with a pseudo-periodic pattern and relative phase shifts exist between voxels. The tests show that principal component eigenvectors trained on DVFs from a novel 2D/3D registration method give substantially better results than eigenvectors trained on DVFs obtained by conventionally registering 4DCBCT phases reconstructed via filtered backprojection. Conclusions: Proof of concept testing has validated the 4DCBCT reconstruction approach for the types of simulated data considered. In addition, the authors found the 2D/3D registration approach to be our best choice for generating the DVF training set, and the Nelder-Mead simplex algorithm the most robust optimization routine. PMID:22149852
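The PCA step at the heart of the motion model is straightforward to sketch: each training DVF is flattened into a row, and the leading principal components form the motion basis (names and shapes below are assumptions for illustration):

```python
import numpy as np

def pca_motion_basis(dvfs, n_modes=3):
    """PCA basis for a parametric motion model.

    dvfs: array of shape (n_training, n_voxels * 3), one flattened
    displacement vector field per row. Returns the mean DVF and the
    n_modes leading eigenvectors.
    """
    mean_dvf = dvfs.mean(axis=0)
    centered = dvfs - mean_dvf
    # SVD of the centered data yields the principal components directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_dvf, vt[:n_modes]

# A motion state is then synthesized as
#   dvf(t) = mean_dvf + sum_k w_k(t) * basis[k],
# with the weights w_k parameterized by the recorded breathing trace.
```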
Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy
NASA Astrophysics Data System (ADS)
Bian, Junguo; Sharp, Gregory C.; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges
2016-05-01
It is well-known that projections acquired over an angular range slightly over 180° (a so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), the short-scan reconstructions may have different appearances and properties from the full-scan (scans over 360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT), which only requires a small field of view, because of the potentially reduced imaging time and dose. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms, and the optimization is object-dependent and task-dependent. The optimal view numbers decrease with the total exposure levels for both FBP and TV-based algorithms. The results also indicate there are slight differences between FBP and TV-based iterative algorithms in the image quality trade-off: FBP seems to favor a larger number of views, while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than the FBP algorithm. These studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.
Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B
2010-04-01
Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved with this fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
Hemanth, Thayyullathil; Rajesh, Langoju; Padmaram, Renganathan; Vasu, R Mohan; Rajan, Kanjirodan; Patnaik, Lalit M
2004-07-20
We report experimental results of quantitative imaging in supersonic circular jets using a monochromatic light probe. An expanding cone of light interrogates a three-dimensional volume of a supersonic steady-state flow from a circular jet. The distortion caused to the spherical wave by the presence of the jet is determined by measuring the normal intensity transport. A cone-beam tomographic algorithm is used to invert the wave-front distortion to changes in refractive index introduced by the flow. The refractive index is converted into density, whose cross sections reveal shocks and other characteristics of the flow.
Vogel, Curtis R; Yang, Qiang
2006-08-21
We present two different implementations of the Fourier domain preconditioned conjugate gradient algorithm (FD-PCG) to efficiently solve the large structured linear systems that arise in optimal volume turbulence estimation, or tomography, for multi-conjugate adaptive optics (MCAO). We describe how to deal with several critical technical issues, including the cone coordinate transformation problem and sensor subaperture grid spacing. We also extend the FD-PCG approach to handle the deformable mirror fitting problem for MCAO.
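For orientation, the preconditioned conjugate gradient skeleton that FD-PCG builds on is shown below; in the paper both the operator products and the preconditioner are applied in the Fourier domain, whereas here they are generic callables:

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=200):
    """Generic preconditioned conjugate gradient for A x = b.

    A: callable applying the (symmetric positive definite) operator,
    M_inv: callable applying the preconditioner, b: right-hand side.
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A(x)
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```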
NASA Astrophysics Data System (ADS)
Hu, Jicun; Tam, Kwok; Johnson, Roger H.
2004-01-01
We derive and analyse a simple algorithm first proposed by Kudo et al (2001 Proc. 2001 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine (Pacific Grove, CA) pp 7-10) for long object imaging from truncated helical cone beam data via a novel definition of region of interest (ROI). Our approach is based on the theory of short object imaging by Kudo et al (1998 Phys. Med. Biol. 43 2885-909). One of the key findings in their work is that filtering of the truncated projection can be divided into two parts: one, finite in the axial direction, results from ramp filtering the data within the Tam window. The other, infinite in the z direction, results from unbounded filtering of ray sums over PI lines only. We show that for an ROI defined by PI lines emanating from the initial and final source positions on a helical segment, the boundary data which would otherwise contaminate the reconstruction of the ROI can be completely excluded. This novel definition of the ROI leads to a simple algorithm for long object imaging. The overscan of the algorithm is analytically calculated and it is the same as that of the zero boundary method. The reconstructed ROI can be divided into two regions: one is minimally contaminated by the portion outside the ROI, while the other is reconstructed free of contamination. We validate the algorithm with a 3D Shepp-Logan phantom and a disc phantom.
Stability of hypersonic boundary-layer flows with chemistry
NASA Technical Reports Server (NTRS)
Reed, Helen L.; Stuckert, Gregory K.; Haynes, Timothy S.
1993-01-01
The effects of nonequilibrium chemistry and three dimensionality on the stability characteristics of hypersonic flows are discussed. In two-dimensional (2-D) and axisymmetric flows, the inclusion of chemistry causes a shift of the second mode of Mack to lower frequencies. This is found to be due to the increase in size of the region of relative supersonic flow because of the lower speeds of sound in the relatively cooler boundary layers. Although this shift in frequency is present in both the equilibrium and nonequilibrium air results, the equilibrium approximation predicts modes which are not observed in the nonequilibrium calculations (for the flight conditions considered). These modes are superpositions of incoming and outgoing unstable disturbances which travel supersonically relative to the boundary-layer edge velocity. Such solutions are possible because of the finite shock stand-off distance. Their corresponding wall-normal profiles exhibit an oscillatory behavior in the inviscid region between the boundary-layer edge and the bow shock. For the examination of three-dimensional (3-D) effects, a rotating cone is used as a model of a swept wing. An increase of stagnation temperature is found to be only slightly stabilizing. The correlation of transition location (N = 9) with parameters describing the crossflow profile is discussed. Transition location does not correlate with the traditional crossflow Reynolds number. A new parameter that appears to correlate for boundary-layer flow was found. A verification with experiments on a yawed cone is provided.
Cone beam CT of the musculoskeletal system: clinical applications.
Posadzy, Magdalena; Desimpel, Julie; Vanhoenacker, Filip
2018-02-01
The aim of this pictorial review is to illustrate the use of CBCT in a broad spectrum of musculoskeletal disorders and to compare its diagnostic merit with other imaging modalities, such as conventional radiography (CR), Multidetector Computed Tomography (MDCT) and Magnetic Resonance Imaging. Cone Beam Computed Tomography (CBCT) has been widely used for dental imaging for over two decades. Current CBCT equipment allows imaging for various musculoskeletal applications. Because of its low cost and relatively low irradiation, CBCT may have an emerging role in making a more precise diagnosis, assessing local extent, and following up fractures and dislocations of small bones and joints. Due to its exquisitely high spatial resolution, CBCT in combination with arthrography may be the preferred technique for detection and local staging of cartilage lesions in small joints. Evaluation of degenerative joint disorders may be facilitated by CBCT compared to CR, particularly in those anatomical areas in which there is much superposition of adjacent bony structures. The use of CBCT in the evaluation of osteomyelitis is restricted to the detection of sequestrum formation in chronic osteomyelitis. Miscellaneous applications include assessment of (symptomatic) variants and detection and characterization of tumour and tumour-like conditions of bone. • Review the spectrum of MSK disorders in which CBCT may be complementary to other imaging techniques. • Compare the advantages and drawbacks of CBCT compared to other imaging techniques. • Define the present and future role of CBCT in musculoskeletal imaging.
Wang, S W; Li, M; Yang, H F; Zhao, Y J; Wang, Y; Liu, Y
2016-04-18
To compare the accuracy of the iterative closest point (ICP) algorithm, the Procrustes analysis (PA) algorithm, and a landmark-dependent method for constructing the mid-sagittal plane (MSP) from cone beam computed tomography, and to provide a theoretical basis for establishing a coordinate system for CBCT images and for symmetry analysis. Ten patients were selected and scanned by CBCT before orthodontic treatment. The scan data were imported into Mimics 10.0 to reconstruct three-dimensional skulls, and the MSP of each skull was generated by the ICP algorithm, the PA algorithm and the landmark-dependent method. MSP extraction by the ICP or PA algorithm involved three steps. First, the 3D skull was processed with the reverse engineering software Geomagic Studio 2012 to obtain a mirror skull. Then, the original and mirror skulls were registered, by the ICP algorithm in Geomagic Studio 2012 or by the PA algorithm in NX Imageware 11.0. Finally, the registered data were merged into a new data set from which the MSP of the original data was calculated in Geomagic Studio 2012. For the traditional landmark-dependent method, the mid-sagittal plane was determined by sella (S), nasion (N) and basion (Ba) in the software InVivoDental 5.0. The distances from 9 pairs of symmetric anatomical landmarks to the three sagittal planes were measured, and the differences of their absolute values compared. A one-way ANOVA test was used to analyze the differences among the 3 MSPs, with pairwise comparisons performed by the LSD method. MSPs calculated by all three methods were acceptable for clinical analysis, as judged from the frontal view. However, there were significant differences among the distances from the 9 pairs of symmetric anatomical landmarks to the MSPs (F=10.932, P=0.001). The LSD test showed no significant difference between the ICP algorithm and the landmark-dependent method (P=0.11), while there was a significant difference between the PA algorithm and the landmark-dependent method (P=0.01). The mid-sagittal plane of a 3D skull can be generated based on the ICP or PA algorithm. There was no significant difference between the ICP algorithm and the landmark-dependent method. For subjects with no evident asymmetry, the ICP algorithm is feasible for clinical analysis.
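Both registration routes reduce to the same least-squares rigid alignment at their core; a sketch of that Kabsch/Procrustes step (ICP additionally re-estimates the point correspondences by nearest-neighbour search on every iteration):

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t mapping point set P
    onto Q (rows are corresponding 3-D points): the Kabsch/Procrustes
    step inside both PA registration and each ICP iteration.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections (determinant -1 solutions).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```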
A comparison of upwind schemes for computation of three-dimensional hypersonic real-gas flows
NASA Technical Reports Server (NTRS)
Gerbsch, R. A.; Agarwal, R. K.
1992-01-01
The method of Suresh and Liou (1992) is extended, and the resulting explicit noniterative upwind finite-volume algorithm is applied to the integration of 3D parabolized Navier-Stokes equations to model 3D hypersonic real-gas flowfields. The solver is second-order accurate in the marching direction and employs flux-limiters to make the algorithm second-order accurate, with total variation diminishing in the cross-flow direction. The algorithm is used to compute hypersonic flow over a yawed cone and over the Ames All-Body Hypersonic Vehicle. The solutions obtained agree well with other computational results and with experimental data.
Marx, Astrid; Godinez, William J.; Tsimashchuk, Vasil; Bankhead, Peter; Rohr, Karl; Engel, Ulrike
2013-01-01
Dynamic microtubules (MTs) are required for neuronal guidance, in which axons extend directionally toward their target tissues. We found that depletion of the MT-binding protein Xenopus cytoplasmic linker–associated protein 1 (XCLASP1) or treatment with the MT drug Taxol reduced axon outgrowth in spinal cord neurons. To quantify the dynamic distribution of MTs in axons, we developed an automated algorithm to detect and track MT plus ends that have been fluorescently labeled by end-binding protein 3 (EB3). XCLASP1 depletion reduced MT advance rates in neuronal growth cones, very much like treatment with Taxol, demonstrating a potential link between MT dynamics in the growth cone and axon extension. Automatic tracking of EB3 comets in different compartments revealed that MTs increasingly slowed as they passed from the axon shaft into the growth cone and filopodia. We used speckle microscopy to demonstrate that MTs experience retrograde flow at the leading edge. Microtubule advance in growth cone and filopodia was strongly reduced in XCLASP1-depleted axons as compared with control axons, but actin retrograde flow remained unchanged. Instead, we found that XCLASP1-depleted growth cones lacked lamellipodial actin organization characteristic of protrusion. Lamellipodial architecture depended on XCLASP1 and its capacity to associate with MTs, highlighting the importance of XCLASP1 in actin–microtubule interactions. PMID:23515224
Nonlinear Fourier algorithm applied to solving equations of gravitational gas dynamics
NASA Technical Reports Server (NTRS)
Kolosov, B. I.
1979-01-01
Two-dimensional gas flow problems were reduced to an approximating system of ordinary differential equations, which were solved by a standard procedure of the Runge-Kutta type. A theorem on the existence of stationary conical shock waves with the cone vertex at the gravitating center was proved.
Pengpen, T; Soleimani, M
2015-06-13
Cone beam computed tomography (CBCT) is an imaging modality that has been used in image-guided radiation therapy (IGRT). For applications such as lung radiation therapy, CBCT images are greatly affected by motion artefacts, mainly due to the low temporal resolution of CBCT. Recently, a dual modality of electrical impedance tomography (EIT) and CBCT has been proposed, in which the high temporal resolution EIT imaging system provides motion data to a motion-compensated algebraic reconstruction technique (ART)-based CBCT reconstruction software. The high computational time associated with ART, and indeed other variations of ART, makes it less practical for real applications. This paper develops a motion-compensated conjugate gradient least-squares (CGLS) algorithm for CBCT. A motion-compensated CGLS offers several advantages over ART-based methods, including possibilities for explicit regularization, rapid convergence and parallel computation. This paper for the first time demonstrates motion-compensated CBCT reconstruction using CGLS, and reconstruction results are shown for limited-data CBCT considering only a quarter of the full dataset. The proposed algorithm is tested using simulated motion data in generic motion-compensated CBCT as well as measured EIT data in dual EIT-CBCT imaging. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
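A plain (motion-free) CGLS skeleton shows why the method is attractive, needing only forward- and back-projection callables; the paper's variant additionally folds the EIT-derived motion model into the system operator:

```python
import numpy as np

def cgls(A, At, b, n_iter=30):
    """Plain CGLS for min ||A x - b||^2.

    A: forward projector (callable), At: its adjoint (backprojector),
    b: measured projection data flattened into a vector.
    """
    x = np.zeros_like(At(b))
    r = b - A(x)
    s = At(r)
    p = s.copy()
    norm_s = s @ s
    for _ in range(n_iter):
        q = A(p)
        alpha = norm_s / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = At(r)
        norm_s_new = s @ s
        p = s + (norm_s_new / norm_s) * p
        norm_s = norm_s_new
    return x
```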
High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization
NASA Astrophysics Data System (ADS)
Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan
2017-04-01
Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. The challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts with conventional analytical algorithms. To address this problem, in this paper we propose a motion-compensated total variation regularization approach which tries to fully exploit the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass and minimize this cost function using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction can improve the 4D-CBCT image quality.
High-resolution computed tomography of single breast cancer microcalcifications in vivo.
Inoue, Kazumasa; Liu, Fangbing; Hoppin, Jack; Lunsford, Elaine P; Lackas, Christian; Hesterman, Jacob; Lenkinski, Robert E; Fujii, Hirofumi; Frangioni, John V
2011-08-01
Microcalcification is a hallmark of breast cancer and a key diagnostic feature for mammography. We recently described the first robust animal model of breast cancer microcalcification. In this study, we hypothesized that high-resolution computed tomography (CT) could potentially detect the genesis of a single microcalcification in vivo and quantify its growth over time. Using a commercial CT scanner, we systematically optimized acquisition and reconstruction parameters. Two ray-tracing image reconstruction algorithms were tested: a voxel-driven "fast" cone beam algorithm (FCBA) and a detector-driven "exact" cone beam algorithm (ECBA). By optimizing acquisition and reconstruction parameters, we were able to achieve a resolution of 104 μm full width at half-maximum (FWHM). At an optimal detector sampling frequency, the ECBA provided a 28 μm (21%) FWHM improvement in resolution over the FCBA. In vitro, we were able to image a single 300 μm × 100 μm hydroxyapatite crystal. In a syngeneic rat model of breast cancer, we were able to detect the genesis of a single microcalcification in vivo and follow its growth longitudinally over weeks. Taken together, this study provides an in vivo "gold standard" for the development of calcification-specific contrast agents and a model system for studying the mechanism of breast cancer microcalcification.
Multi-mounted X-ray cone-beam computed tomography
NASA Astrophysics Data System (ADS)
Fu, Jian; Wang, Jingzheng; Guo, Wei; Peng, Peng
2018-04-01
As a powerful nondestructive inspection technique, X-ray computed tomography (X-CT) has been widely applied to clinical diagnosis, industrial production and cutting-edge research. Imaging efficiency is currently one of the major obstacles for the applications of X-CT. In this paper, a multi-mounted three-dimensional cone-beam X-CT (MM-CBCT) method is reported. It consists of a novel multi-mounted cone-beam scanning geometry and the corresponding three-dimensional statistical iterative reconstruction algorithm. The scanning geometry is the most distinctive part of the design and differs significantly from current CBCT systems. By permitting the cone-beam scanning of multiple objects simultaneously, the proposed approach has the potential to achieve an imaging efficiency orders of magnitude greater than conventional methods. Although multiple objects can also be bundled together and scanned simultaneously by conventional CBCT methods, this leads to increased penetration thickness and signal crosstalk. In contrast, MM-CBCT largely avoids these problems. This work comprises a numerical study of the method and its experimental verification using a dataset measured with a developed MM-CBCT prototype system. This technique provides a possible solution for large-scale CT inspection.
Cone-Probe Rake Design and Calibration for Supersonic Wind Tunnel Models
NASA Technical Reports Server (NTRS)
Won, Mark J.
1999-01-01
A series of experimental investigations were conducted at the NASA Langley Unitary Plan Wind Tunnel (UPWT) to calibrate cone-probe rakes designed to measure the flow field on 1-2% scale, high-speed wind tunnel models from Mach 2.15 to 2.4. The rakes were developed from a previous design that exhibited unfavorable measurement characteristics caused by a high probe spatial density and flow blockage from the rake body. Calibration parameters included Mach number, total pressure recovery, and flow angularity. Reference conditions were determined from a localized UPWT test section flow survey using a 10deg supersonic wedge probe. Test section Mach number and total pressure were determined using a novel iterative technique that accounted for boundary layer effects on the wedge surface. Cone-probe measurements were correlated to the surveyed flow conditions using analytical functions and recursive algorithms that resolved Mach number, pressure recovery, and flow angle to within +/-0.01, +/-1% and +/-0.1deg, respectively, for angles of attack and sideslip between +/-8deg. Uncertainty estimates indicated the overall cone-probe calibration accuracy was strongly influenced by the propagation of measurement error into the calculated results.
Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny
2011-01-01
Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, the Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the XiO planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate agreement between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
Rapid automated superposition of shapes and macromolecular models using spherical harmonics.
Konarev, Petr V; Petoukhov, Maxim V; Svergun, Dmitri I
2016-06-01
A rapid algorithm to superimpose macromolecular models in Fourier space is proposed and implemented (SUPALM). The method uses a normalized integrated cross-term of the scattering amplitudes as a proximity measure between two three-dimensional objects. The reciprocal-space algorithm allows for direct matching of heterogeneous objects including high- and low-resolution models represented by atomic coordinates, beads or dummy residue chains as well as electron microscopy density maps and inhomogeneous multi-phase models (e.g. of protein-nucleic acid complexes). Using spherical harmonics for the computation of the amplitudes, the method is up to an order of magnitude faster than the real-space algorithm implemented in SUPCOMB by Kozin & Svergun [J. Appl. Cryst. (2001), 34, 33-41]. The utility of the new method is demonstrated in a number of test cases and compared with the results of SUPCOMB. The spherical harmonics algorithm is best suited for low-resolution shape models, e.g. those provided by solution scattering experiments, but also facilitates a rapid cross-validation against structural models obtained by other methods.
Rabow, A. A.; Scheraga, H. A.
1996-01-01
We have devised a Cartesian combination operator and coding scheme for improving the performance of genetic algorithms applied to the protein folding problem. The genetic coding consists of the C alpha Cartesian coordinates of the protein chain. The recombination of the genes of the parents is accomplished by: (1) a rigid superposition of one parent chain on the other, to make the relation of Cartesian coordinates meaningful, then, (2) the chains of the children are formed through a linear combination of the coordinates of their parents. The children produced with this Cartesian combination operator scheme have similar topology and retain the long-range contacts of their parents. The new scheme is significantly more efficient than the standard genetic algorithm methods for locating low-energy conformations of proteins. The considerable superiority of genetic algorithms over Monte Carlo optimization methods is also demonstrated. We have also devised a new dynamic programming lattice fitting procedure for use with the Cartesian combination operator method. The procedure finds excellent fits of real-space chains to the lattice while satisfying bond-length, bond-angle, and overlap constraints. PMID:8880904
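A sketch of the crossover described above, assuming matched-length C-alpha coordinate arrays (the paper's bond-length, bond-angle and overlap repair steps are omitted):

```python
import numpy as np

def cartesian_crossover(parent_a, parent_b, alpha=0.5):
    """Cartesian combination operator sketch: (1) rigidly superimpose
    parent A's C-alpha trace on parent B's (Kabsch fit), so the
    coordinates are comparable, then (2) form the child as a linear
    combination of the aligned coordinates.
    """
    ca, cb = parent_a.mean(axis=0), parent_b.mean(axis=0)
    H = (parent_a - ca).T @ (parent_b - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    aligned_a = (parent_a - ca) @ R.T + cb   # parent A in parent B's frame
    return alpha * aligned_a + (1 - alpha) * parent_b
```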
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
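For orientation, the baseline method being accelerated can be written in a few lines. Below is a minimal sketch of standard FISTA for an L1-regularized least-squares problem, assuming NumPy; the paper's variants replace the plain gradient step with an OS-SART-solved subproblem and use weighted proximal operators, neither of which is shown here:

    import numpy as np

    def fista(A, b, lam, n_iter=200):
        """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with standard FISTA."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        y, t = x.copy(), 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ y - b)           # gradient of the data-fidelity term
            z = y - grad / L                   # gradient-descent step (replaced by
                                               # OS-SART in the accelerated variants)
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
            x, t = x_new, t_new
        return x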
Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A
2016-04-01
The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
Alsbou, Nesreen; Ahmad, Salahuddin; Ali, Imad
2016-05-17
A motion algorithm has been developed to extract the length, CT number level and motion amplitude of a mobile target from cone-beam CT (CBCT) images. The algorithm uses three measurable parameters, the apparent length and the level and gradient of the blurred CT number distribution of a mobile target obtained from CBCT images, to determine the length of the stationary target, its CT number value, and the motion amplitude. The predictions of this algorithm are tested with mobile targets of different well-known sizes made from tissue-equivalent gel inserted into a thorax phantom. The phantom moves sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging 0-20 mm. Using this motion algorithm, three unknown parameters are extracted from CBCT images: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target. The motion algorithm solves for the three unknown parameters using the measured length, CT number level and gradient for a well-defined mobile target obtained from CBCT images. The motion model agrees with the measured lengths, which depend on the target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the target length and the motion amplitude. Motion frequency and phase do not affect the elongation and CT number distribution of the mobile target and could not be determined. A motion algorithm has been developed to extract three parameters, the length, CT number level and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without the requirement of motion tracking and sorting of the images into different breathing phases. The motion model developed here works well for tumors that have simple shapes, high contrast relative to surrounding tissues, and a nearly regular motion pattern that can be approximated with a simple sinusoidal function. This algorithm has potential applications in diagnostic CT imaging and radiotherapy in terms of motion management.
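The apparent elongation that the algorithm inverts can be illustrated with a toy forward model. The sketch below is our illustration, not the paper's equations, and assumes NumPy: it time-averages a 1D box profile of a target of given length and stationary CT level over a sinusoidal motion cycle, producing a profile that spans roughly length + 2*amplitude with edge gradients set by the amplitude:

    import numpy as np

    def blurred_profile(x, length, amplitude, level, n_phases=720):
        """Time-averaged 1D CT-number profile of a sinusoidally moving target."""
        t = np.linspace(0.0, 2.0 * np.pi, n_phases, endpoint=False)
        shifts = amplitude * np.sin(t)                    # target position per phase
        inside = np.abs(x[None, :] - shifts[:, None]) <= length / 2.0
        return level * inside.mean(axis=0)                # fraction of cycle inside

    # Example: a 20 mm target at level 100 HU moving with 10 mm amplitude.
    x = np.linspace(-40.0, 40.0, 801)
    profile = blurred_profile(x, length=20.0, amplitude=10.0, level=100.0)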
Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.
2016-01-01
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582
Design Optimization of Space Launch Vehicles Using a Genetic Algorithm
2007-06-01
function until no improvement in the objective function could be made. The search space is modeled in a geometric form such as a polyhedron. The simplex... database. AeroDesign assumes that there are no boundary layers and that no separation occurs. AeroDesign can analyze either a cone or ogive shape
Developing design methods of concrete mix with microsilica additives for road construction
NASA Astrophysics Data System (ADS)
Dmitrienko, Vladimir; Shrivel, Igor; Kokunko, Irina; Pashkova, Olga
2017-10-01
Based on the laboratory test results, regression equations relating standard cone measurements and concrete strength to the required amounts of cement, water and microsilica were obtained. The joint solution of these equations allowed the researchers to develop an algorithm for designing heavy concrete compositions with microsilica additives for road construction.
NASA Astrophysics Data System (ADS)
Melis, M. T.; Mundula, F.; Dessì, F.; Cioni, R.; Funedda, A.
2014-09-01
Unequivocal delimitation of landforms is an important issue for different purposes, from science-driven morphometric analysis to legal issues related to land conservation. This study is aimed at giving a new contribution to the morphometric approach for the delineation of the boundaries of volcanic edifices, applied to 13 monogenetic volcanoes (scoria cones) related to the Pliocene-Pleistocene volcanic cycle in Sardinia (Italy). External boundary delimitation of the edifices is discussed based on an integrated methodology using automatic elaboration of digital elevation models together with geomorphological and geological observations. Different elaborations of surface slope and profile curvature have been proposed and discussed; among them, two algorithms based on simple mathematical functions combining slope and profile curvature fit the requirements of this study well. One of these algorithms is a modification of a function introduced by Grosse et al. (2011), which performs better in recognizing and tracing the boundary between the volcanic scoria cone and its basement. Although the geological constraints still drive the final decision, the proposed method improves the existing tools for a semi-automatic tracing of the boundaries.
NASA Astrophysics Data System (ADS)
Melis, M. T.; Mundula, F.; Dessì, F.; Cioni, R.; Funedda, A.
2014-05-01
Unequivocal delimitation of landforms is an important issue for different purposes, from science-driven morphometric analysis to legal issues related to land conservation. This study is aimed at giving a new contribution to the morphometric approach for the delineation of the boundaries of volcanic edifices, applied to 13 monogenetic volcanoes (scoria cones) related to the Pliocene-Pleistocene volcanic cycle in Sardinia (Italy). External boundary delimitation of the edifices is discussed based on an integrated methodology using automatic elaboration of digital elevation models together with geomorphological and geological observations. Different elaborations of surface slope and profile curvature have been proposed and discussed; among them, two algorithms based on simple mathematical functions combining slope and profile curvature fit the requirements of this study well. One of these algorithms is a modification of a function already discussed by Grosse et al. (2011), which performs better in recognizing and tracing the boundary between the volcanic scoria cone and its basement. Although the geological constraints still drive the final decision, the proposed method improves the existing tools for a semi-automatic tracing of the boundaries.
pyRMSD: a Python package for efficient pairwise RMSD matrix calculation and handling.
Gil, Víctor A; Guallar, Víctor
2013-09-15
We introduce pyRMSD, an open source standalone Python package that aims at offering an integrative and efficient way of performing Root Mean Square Deviation (RMSD)-related calculations on large sets of structures. It is specially tuned to do fast collective RMSD calculations, such as pairwise RMSD matrices, implementing up to three well-known superposition algorithms. pyRMSD provides its own symmetric distance matrix class that, besides being usable as a regular matrix, helps to save memory and increases memory access speed. This last feature can dramatically improve the overall performance of any Python algorithm using it. In addition, its extensibility, testing suites and documentation make it a good choice for those in need of a workbench for developing or testing new algorithms. The source code (under MIT license), installer, test suites and benchmarks can be found at https://pele.bsc.es/ under the tools section. Contact: victor.guallar@bsc.es. Supplementary data are available at Bioinformatics online.
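pyRMSD's own API is not reproduced here, but the condensed-matrix idea the abstract highlights is easy to sketch (assuming NumPy; the function names are ours). A symmetric pairwise RMSD matrix of M structures needs only M*(M-1)/2 stored values:

    import numpy as np

    def rmsd(P, Q):
        """RMSD between two pre-superposed (N, 3) coordinate sets."""
        return np.sqrt(((P - Q) ** 2).sum() / len(P))

    def pairwise_rmsd_condensed(conformations):
        """Upper-triangular pairwise RMSD values of an (M, N, 3) array.
        pyRMSD additionally superposes each pair optimally; omitted here."""
        m = len(conformations)
        out = np.empty(m * (m - 1) // 2)
        k = 0
        for i in range(m - 1):
            for j in range(i + 1, m):
                out[k] = rmsd(conformations[i], conformations[j])
                k += 1
        return out

    def condensed_index(i, j, m):
        # Map (i, j) with i < j to its slot in the condensed array; this is
        # how a symmetric matrix class can expose matrix-style access while
        # storing only half the entries.
        return m * i - i * (i + 1) // 2 + (j - i - 1)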
Directed Field Ionization: A Genetic Algorithm for Evolving Electric Field Pulses
NASA Astrophysics Data System (ADS)
Kang, Xinyue; Rowley, Zoe A.; Carroll, Thomas J.; Noel, Michael W.
2017-04-01
When an ionizing electric field pulse is applied to a Rydberg atom, the electron's amplitude traverses many avoided crossings among the Stark levels as the field increases. The resulting superposition determines the shape of the time resolved field ionization spectrum at a detector. An engineered electric field pulse that sweeps back and forth through avoided crossings can control the phase evolution so as to determine the electron's path through the Stark map. In the region of n = 35 in rubidium there are hundreds of potential avoided crossings; this yields a large space of possible pulses. We use a genetic algorithm to search this space and evolve electric field pulses to direct the ionization of the Rydberg electron in rubidium. We present the algorithm along with a comparison of simulated and experimental results. This work was supported by the National Science Foundation under Grants No. 1607335 and No. 1607377 and used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant Number OCI-1053575.
Resource Theory of Superposition
NASA Astrophysics Data System (ADS)
Theurer, T.; Killoran, N.; Egloff, D.; Plenio, M. B.
2017-12-01
The superposition principle lies at the heart of many nonclassical properties of quantum mechanics. Motivated by this, we introduce a rigorous resource theory framework for the quantification of superposition of a finite number of linearly independent states. This theory is a generalization of resource theories of coherence. We determine the general structure of operations which do not create superposition, find a fundamental connection to unambiguous state discrimination, and propose several quantitative superposition measures. Using this theory, we show that trace-decreasing operations can be completed for free, which, when specialized to the theory of coherence, resolves an outstanding open question and is used to address the free probabilistic transformation between pure states. Finally, we prove that linearly independent superposition is a necessary and sufficient condition for the faithful creation of entanglement in discrete settings, establishing a strong structural connection between our theory of superposition and entanglement theory.
Overcoming Sequence Misalignments with Weighted Structural Superposition
Khazanov, Nickolay A.; Damm-Ganamet, Kelly L.; Quang, Daniel X.; Carlson, Heather A.
2012-01-01
An appropriate structural superposition identifies similarities and differences between homologous proteins that are not evident from sequence alignments alone. We have coupled our Gaussian-weighted RMSD (wRMSD) tool with a sequence aligner and seed extension (SE) algorithm to create a robust technique for overlaying structures and aligning sequences of homologous proteins (HwRMSD). HwRMSD overcomes errors in the initial sequence alignment that would normally propagate into a standard RMSD overlay. SE can generate a corrected sequence alignment from the improved structural superposition obtained by wRMSD. HwRMSD’s robust performance and its superiority over standard RMSD are demonstrated over a range of homologous proteins. Its better overlay results in corrected sequence alignments with good agreement to HOMSTRAD. Finally, HwRMSD is compared to established structural alignment methods: FATCAT, SSM, CE, and Dalilite. Most methods are comparable at placing residue pairs within 2 Å, but HwRMSD places many more residue pairs within 1 Å, providing a clear advantage. Such high accuracy is essential in drug design, where small distances can have a large impact on computational predictions. This level of accuracy is also needed to correct sequence alignments in an automated fashion, especially for omics-scale analysis. HwRMSD can align homologs with low sequence identity and large conformational differences, cases where both sequence-based and structural-based methods may fail. The HwRMSD pipeline overcomes the dependency of structural overlays on initial sequence pairing and removes the need to determine the best sequence-alignment method, substitution matrix, and gap parameters for each unique pair of homologs. PMID:22733542
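The core of a Gaussian-weighted superposition can be sketched as an iteratively reweighted Kabsch fit. This is our minimal illustration, assuming NumPy; the constant c and the iteration count are placeholders, and HwRMSD's coupling to the sequence aligner and SE algorithm is not shown:

    import numpy as np

    def weighted_kabsch(P, Q, w):
        """Rotation R and translation t superposing P onto Q with weights w."""
        w = w / w.sum()
        pc = (w[:, None] * P).sum(axis=0)        # weighted centroids
        qc = (w[:, None] * Q).sum(axis=0)
        Pc, Qc = P - pc, Q - qc
        H = (w[:, None] * Pc).T @ Qc
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against improper rotations
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, qc - R @ pc

    def gaussian_wrmsd(P, Q, c=2.0, n_iter=20):
        """Iteratively down-weight variable regions with a Gaussian of distance."""
        w = np.ones(len(P))
        for _ in range(n_iter):
            R, t = weighted_kabsch(P, Q, w)
            d2 = ((P @ R.T + t - Q) ** 2).sum(axis=1)
            w = np.exp(-d2 / (c * c))            # well-fitting pairs keep weight ~1
        return R, t, w

Down-weighting poorly matching residue pairs is what lets the overlay lock onto the conserved core rather than being dragged by flexible loops.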
Superposition and alignment of labeled point clouds.
Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke
2011-01-01
Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional Euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled point clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.
NASA Astrophysics Data System (ADS)
Anderson, J.; Johnson, J. B.; Arechiga, R. O.; Edens, H. E.; Thomas, R. J.
2011-12-01
We use radio frequency (VHF) pulse locations mapped with the New Mexico Tech Lightning Mapping Array (LMA) to study the distribution of thunder sources in lightning channels. A least squares inversion is used to fit channel acoustic energy radiation with broadband (0.01 to 500 Hz) acoustic recordings using microphones deployed local (< 10 km) to the lightning. We model the thunder (acoustic) source as a superposition of line segments connecting the LMA VHF pulses. An optimum branching algorithm is used to reconstruct conductive channels delineated by VHF sources, which we discretize as a superposition of finely-spaced (0.25 m) acoustic point sources. We consider total radiated thunder as a weighted superposition of acoustic waves from individual channels, each with a constant current along its length that is presumed to be proportional to acoustic energy density radiated per unit length. Merged channels are considered as a linear sum of current-carrying branches and radiate proportionally greater acoustic energy. Synthetic energy time series for a given microphone location are calculated for each independent channel. We then use a non-negative least squares inversion to solve for channel energy densities to match the energy time series determined from broadband acoustic recordings across a 4-station microphone network. Events analyzed by this method have so far included 300-1000 VHF sources, and correlations as high as 0.5 between synthetic and recorded thunder energy were obtained, despite the presence of wind noise and 10-30 m uncertainty in VHF source locations.
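The final inversion step maps directly onto a standard non-negative least-squares call. A minimal sketch follows, assuming NumPy/SciPy; the array layout is our assumption. It stacks the synthetic per-channel energy time series from the microphone network as columns of A and solves for the channel energy densities:

    import numpy as np
    from scipy.optimize import nnls

    def invert_channel_energies(synthetics, recorded):
        """Solve min ||A x - b|| subject to x >= 0.

        synthetics: (n_samples, n_channels) synthetic energy time series,
                    concatenated over the microphone network.
        recorded:   (n_samples,) measured thunder energy time series.
        Returns the non-negative channel weights (energy densities) and
        the residual norm.
        """
        A = np.asarray(synthetics, dtype=float)
        b = np.asarray(recorded, dtype=float)
        x, residual_norm = nnls(A, b)
        return x, residual_norm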
A novel imaging technique for measuring kinematics of light-weight flexible structures.
Zakaria, Mohamed Y; Eliethy, Ahmed S; Canfield, Robert A; Hajj, Muhammad R
2016-07-01
A new imaging algorithm is proposed to capture the kinematics of flexible, thin, light structures, including frequencies and motion amplitudes, for real-time analysis. The studied case is a thin flexible beam that is preset at different angles of attack in a wind tunnel. As the angle of attack is increased beyond a critical value, the beam was observed to undergo a static deflection that is followed by limit cycle oscillations. Imaging analysis of the beam vibrations shows that the motion consists of a superposition of the bending and torsion modes. The proposed algorithm was able to capture the oscillation amplitudes as well as the frequencies of both bending and torsion modes. The analysis results are validated through comparison with measurements from a piezoelectric sensor that is attached to the beam at its root.
Quantum Associative Neural Network with Nonlinear Search Algorithm
NASA Astrophysics Data System (ADS)
Zhou, Rigui; Wang, Huian; Wu, Qian; Shi, Yang
2012-03-01
Based on an analysis of the properties of quantum linear superposition, and to overcome the complexity of the existing quantum associative memory proposed by Ventura, a new storage method for multiple patterns is proposed in this paper by constructing a quantum array with binary decision diagrams. In addition, the adoption of a nonlinear search algorithm increases the pattern recall speed of this multi-pattern model to O(log2(2^(n-t))) = O(n-t) time complexity, where n is the number of quantum bits and t is the quantum information of the t quantum bits. Results of case analysis show that the associative neural network model proposed in this paper, based on quantum learning, is better optimized than other researchers' counterparts, both in avoiding additional qubits or extraordinary initial operators and in storing patterns and improving recall speed.
Single shot multi-wavelength phase retrieval with coherent modulation imaging.
Dong, Xue; Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang
2018-04-15
A single-shot multi-wavelength phase retrieval method is proposed by combining common coherent modulation imaging (CMI) with a low-rank mixed-state algorithm. A radiation beam consisting of multiple wavelengths illuminates the sample to be observed, and the exiting field is incident on a random phase plate to form speckle patterns, which are the incoherent superposition of the diffraction patterns of each wavelength. The exiting complex amplitude of the sample, including both the modulus and phase at each wavelength, can be reconstructed simultaneously from the recorded diffraction intensity using a low-rank mixed-state algorithm. The feasibility of this proposed method was verified experimentally with visible light. This proposed method not only makes CMI realizable with partially coherent illumination but also extends its application to various traditionally unrelated fields where several wavelengths must be considered simultaneously.
A novel imaging technique for measuring kinematics of light-weight flexible structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakaria, Mohamed Y., E-mail: zakaria@vt.edu; Eliethy, Ahmed S.; Canfield, Robert A.
2016-07-15
A new imaging algorithm is proposed to capture the kinematics of flexible, thin, light structures, including frequencies and motion amplitudes, for real-time analysis. The studied case is a thin flexible beam that is preset at different angles of attack in a wind tunnel. As the angle of attack is increased beyond a critical value, the beam was observed to undergo a static deflection that is followed by limit cycle oscillations. Imaging analysis of the beam vibrations shows that the motion consists of a superposition of the bending and torsion modes. The proposed algorithm was able to capture the oscillation amplitudes as well as the frequencies of both bending and torsion modes. The analysis results are validated through comparison with measurements from a piezoelectric sensor that is attached to the beam at its root.
Macroscopicity of quantum superpositions on a one-parameter unitary path in Hilbert space
NASA Astrophysics Data System (ADS)
Volkoff, T. J.; Whaley, K. B.
2014-12-01
We analyze quantum states formed as superpositions of an initial pure product state and its image under local unitary evolution, using two measurement-based measures of superposition size: one based on the optimal quantum binary distinguishability of the branches of the superposition and another based on the ratio of the maximal quantum Fisher information of the superposition to that of its branches, i.e., the relative metrological usefulness of the superposition. A general formula for the effective sizes of these states according to the branch-distinguishability measure is obtained and applied to superposition states of N quantum harmonic oscillators composed of Gaussian branches. Considering optimal distinguishability of pure states on a time-evolution path leads naturally to a notion of distinguishability time that generalizes the well-known orthogonalization times of Mandelstam and Tamm and Margolus and Levitin. We further show that the distinguishability time provides a compact operational expression for the superposition size measure based on the relative quantum Fisher information. By restricting the maximization procedure in the definition of this measure to an appropriate algebra of observables, we show that the superposition size of, e.g., NOON states and hierarchical cat states, can scale linearly with the number of elementary particles comprising the superposition state, implying precision scaling inversely with the total number of photons when these states are employed as probes in quantum parameter estimation of a 1-local Hamiltonian in this algebra.
An improved exact inversion formula for solenoidal fields in cone beam vector tomography
NASA Astrophysics Data System (ADS)
Katsevich, Alexander; Rothermel, Dimitri; Schuster, Thomas
2017-06-01
In this paper we present an improved inversion formula for the 3D cone beam transform of vector fields supported in the unit ball which is exact for solenoidal fields. It is well known that only the solenoidal part of a vector field can be determined from the longitudinal ray transform of a vector field in cone beam geometry. The inversion formula, as it was developed in Katsevich and Schuster (2013, An exact inversion formula for cone beam vector tomography, Inverse Problems 29 065013), consists of two parts. The first part is of the filtered backprojection type, whereas the second part is a costly 4D integration and very inefficient. In this article we tackle this second term and obtain an improved formula, which is easy to implement and saves one order of integration. We also show that the first part contains all information about the curl of the field, whereas the second part has information about the boundary values. More precisely, the second part vanishes if the solenoidal part of the original field is tangential at the boundary. A number of numerical tests presented in the paper confirm the theoretical results and the exactness of the formula. Also, we obtain an inversion algorithm that works for general convex domains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Ahmad, S; Alsbou, N
Purpose: To develop a 4D cone-beam CT (CBCT) algorithm that, by motion modeling, extracts the actual length, CT number level and motion amplitude of a mobile target retrospective to image reconstruction. Methods: The algorithm used three measurable parameters, the apparent length and the blurred CT number distribution of a mobile target obtained from CBCT images, to determine the actual length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm were tested with mobile targets of different well-known sizes made from tissue-equivalent gel, which was inserted into a thorax phantom. The phantom moved sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging 0-20 mm. Results: Using this 4D-CBCT algorithm, three unknown parameters were extracted retrospective to image reconstruction: the length of the target, the CT number level, and the speed or motion amplitude of the mobile targets. The motion algorithm solved for the three unknown parameters using the measurable apparent length, CT number level and gradient for a well-defined mobile target obtained from CBCT images. The motion model agreed with the measured apparent lengths, which were dependent on the actual target length and motion amplitude. The gradient of the CT number distribution of the mobile target is dependent on the stationary CT number level, the actual target length and the motion amplitude. Motion frequency and phase did not affect the elongation and CT number distribution of the mobile target and could not be determined. Conclusion: A 4D-CBCT motion algorithm was developed to extract three parameters, the actual length, CT number level and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without the requirement of motion tracking and sorting of the images into different breathing phases, which has potential applications in diagnostic CT imaging and radiotherapy.
Crookshank, Meghan C; Beek, Maarten; Singh, Devin; Schemitsch, Emil H; Whyne, Cari M
2013-07-01
Accurate alignment of femoral shaft fractures treated with intramedullary nailing remains a challenge for orthopaedic surgeons. The aim of this study is to develop and validate a cone-beam CT-based, semi-automated algorithm to quantify the malalignment in six degrees of freedom (6DOF) using a surface matching and principal axes-based approach. Complex comminuted diaphyseal fractures were created in nine cadaveric femora and cone-beam CT images were acquired (27 cases total). Scans were cropped and segmented using intensity-based thresholding, producing superior, inferior and comminution volumes. Cylinders were fit to estimate the long axes of the superior and inferior fragments. The angle and distance between the two cylindrical axes were calculated to determine flexion/extension and varus/valgus angulation and medial/lateral and anterior/posterior translations, respectively. Both surfaces were unwrapped about the cylindrical axes. Three methods of matching the unwrapped surface for determination of periaxial rotation were compared based on minimizing the distance between features. The calculated corrections were compared to the input malalignment conditions. All 6DOF were calculated to within current clinical tolerances for all but two cases. This algorithm yielded accurate quantification of malalignment of femoral shaft fractures for fracture gaps up to 60 mm, based on a single CBCT image of the fractured limb. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
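Once cylinders have been fitted to the two fragments, the angulation and translation components reduce to simple vector geometry. A hedged sketch follows, assuming NumPy; decomposing the results into anatomical flexion/extension, varus/valgus and medial/anterior components would additionally require an anatomical reference frame, which is not shown:

    import numpy as np

    def axis_angle_deg(d1, d2):
        """Angle between two cylinder-axis directions, in degrees."""
        d1 = d1 / np.linalg.norm(d1)
        d2 = d2 / np.linalg.norm(d2)
        return np.degrees(np.arccos(np.clip(abs(d1 @ d2), -1.0, 1.0)))

    def axis_offset(p1, d1, p2):
        """Offset of point p2 on axis 2 from axis 1 (point p1, direction d1),
        taken perpendicular to axis 1; its components give the translations."""
        d1 = d1 / np.linalg.norm(d1)
        v = p2 - p1
        return v - (v @ d1) * d1   # remove the along-axis component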
Metal artefact reduction with cone beam CT: an in vitro study
Bechara, BB; Moore, WS; McMahan, CA; Noujeim, M
2012-01-01
Background Metal in a patient's mouth has been shown to cause artefacts that can interfere with the diagnostic quality of cone beam CT. Recently, a manufacturer has made an algorithm and software available which reduces metal streak artefact (Picasso Master 3D® machine; Vatech, Hwaseong, Republic of Korea). Objectives The purpose of this investigation was to determine whether or not the metal artefact reduction algorithm was effective and enhanced the contrast-to-noise ratio. Methods A phantom was constructed incorporating three metallic beads and three epoxy resin-based bone substitutes to simulate bone next to metal. The phantom was placed in the centre of the field of view and at the periphery. 10 data sets were acquired at 50–90 kVp. The images obtained were analysed using a public domain software ImageJ (NIH Image, Bethesda, MD). Profile lines were used to evaluate grey level changes and area histograms were used to evaluate contrast. The contrast-to-noise ratio was calculated. Results The metal artefact reduction option reduced grey value variation and increased the contrast-to-noise ratio. The grey value varied least when the phantom was in the middle of the volume and the metal artefact reduction was activated. The image quality improved as the peak kilovoltage increased. Conclusion Better images of a phantom were obtained when the metal artefact reduction algorithm was used. PMID:22241878
NASA Astrophysics Data System (ADS)
Joshi, K. D.; Marchant, T. E.; Moore, C. J.
2017-03-01
A shading correction algorithm for the improvement of cone-beam CT (CBCT) images (Phys. Med. Biol. 53 5719-33) has been further developed, optimised and validated extensively using 135 clinical CBCT images of patients undergoing radiotherapy treatment of the pelvis, lungs and head and neck. An automated technique has been developed to efficiently analyse the large number of clinical images. Small regions of similar tissue (for example fat tissue) are automatically identified using CT images. The same regions on the corresponding CBCT image are analysed to ensure that they do not contain pixels representing multiple types of tissue. The mean value of all selected pixels and the non-uniformity, defined as the median absolute deviation of the mean values in each small region, are calculated. Comparisons between CT and raw and corrected CBCT images are then made. Analysis of fat regions in pelvis images shows an average difference in mean pixel value between CT and CBCT of 136.0 HU in raw CBCT images, which is reduced to 2.0 HU after the application of the shading correction algorithm. The average difference in non-uniformity of fat pixels is reduced from 33.7 in raw CBCT to 2.8 in shading-corrected CBCT images. Similar results are obtained in the analysis of lung and head and neck images.
Cho, Seungryong; Pearson, Erik; Pelizzari, Charles A.; Pan, Xiaochuan
2009-01-01
Imaging plays a vital role in radiation therapy and with recent advances in technology considerable emphasis has been placed on cone-beam CT (CBCT). Attaching a kV x-ray source and a flat panel detector directly to the linear accelerator gantry has enabled progress in target localization techniques, which can include daily CBCT setup scans for some treatments. However, with an increasing number of CT scans there is also an increasing concern for patient exposure. An intensity-weighted region-of-interest (IWROI) technique, which has the potential to greatly reduce CBCT dose, in conjunction with the chord-based backprojection-filtration (BPF) reconstruction algorithm, has been developed and its feasibility in clinical use is demonstrated in this article. A nonuniform filter is placed in the x-ray beam to create regions of two different beam intensities. In this manner, regions outside the target area can be given a reduced dose but still visualized with a lower contrast to noise ratio. Image artifacts due to transverse data truncation, which would have occurred in conventional reconstruction algorithms, are avoided and image noise levels of the low- and high-intensity regions are well controlled by use of the chord-based BPF reconstruction algorithm. The proposed IWROI technique can play an important role in image-guided radiation therapy. PMID:19472624
Strom, E.W.; Oakley, W.T.
1995-01-01
The cities of Mendenhall and D'Lo, located in Simpson County, rely on ground water for their public supply and industrial needs. Most of the ground water comes from an aquifer of Miocene age. A study began in 1991 to describe the hydrogeology, analyze effects of ground-water withdrawal by making a drawdown map, and estimate the effects increased ground-water withdrawal might have on water levels in the Miocene age aquifer in the Mendenhall-D'Lo area. The most significant withdrawals of ground water in the study area are from 10 wells screened in the lower sand of the Catahoula Formation of Miocene age. Analysis of the effect of withdrawals from the 10 wells was made using the Theis nonequilibrium equation and applying the principle of superposition. Analysis of 1994 conditions was based on the pumpage history and aquifer properties determined for each well. The drawdown surface resulting from the analysis indicates three general cones of depression. One cone is in the northwestern D'Lo area, one in the south-central Mendenhall area, and one about 1-1/2 miles east of Mendenhall. Calculated drawdown ranges from 21 to 47 feet. Potential drawdown-surface maps were made for 10 years and 20 years beyond 1994 using a constant pumpage. The map made for 10 years beyond 1994 indicates an average total increase in drawdown of about 5.3 feet. The map made for 20 years beyond 1994 indicates an average total increase in drawdown of about 7.3 feet.
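The Theis-plus-superposition analysis described here is straightforward to reproduce. A minimal sketch follows, assuming NumPy/SciPy and consistent units chosen by the user; the well coordinates, rates and aquifer properties are placeholders. It sums each well's cone of depression:

    import numpy as np
    from scipy.special import exp1   # Theis well function W(u) = E1(u)

    def theis_drawdown(r, t, Q, T, S):
        """Drawdown at radius r and time t for one well pumping at rate Q,
        with transmissivity T and storativity S (consistent units)."""
        u = r ** 2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)

    def total_drawdown(x, y, t, wells, T, S):
        """Superposition: total drawdown is the sum over all pumping wells.
        wells is a list of (xw, yw, Q) tuples."""
        s = 0.0
        for xw, yw, Q in wells:
            r = np.hypot(x - xw, y - yw)
            s = s + theis_drawdown(r, t, Q, T, S)
        return s

Because the governing equation is linear, drawdowns from the 10 wells simply add, which is exactly the principle of superposition invoked in the study.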
SU-E-J-135: Feasibility of Using Quantitative Cone Beam CT for Proton Adaptive Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jingqian, W; Wang, Q; Zhang, X
2015-06-15
Purpose: To investigate the feasibility of using scatter-corrected cone beam CT (CBCT) for proton adaptive planning. Methods: A phantom study was used to evaluate the CT number difference between the planning CT (pCT), quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units using the adaptive scatter kernel superposition (ASKS) technique, and raw CBCT (rCBCT). After confirming the CT number accuracy, prostate patients, each with a pCT and several sets of weekly CBCT, were investigated for this study. Spot scanning proton treatment plans were independently generated on pCT, qCBCT and rCBCT. The treatment plans were then recalculated on all images. Dose-volume-histogram (DVH) parameters and gamma analysis were used to compare between dose distributions. Results: The phantom study suggested that Hounsfield unit accuracies for different materials are within 20 HU for qCBCT and over 250 HU for rCBCT. For prostate patients, proton dose could be calculated accurately on qCBCT but not on rCBCT. When the original plan was recalculated on qCBCT, tumor coverage was maintained when the anatomy was consistent with pCT. However, large dose variance was observed when the patient anatomy changed. Adaptive planning using qCBCT was able to recover tumor coverage and reduce dose to normal tissue. Conclusion: It is feasible to use quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units for proton dose calculation and adaptive planning in proton therapy. Partly supported by Varian Medical Systems.
Automated patient setup and gating using cone beam computed tomography projections
NASA Astrophysics Data System (ADS)
Wan, Hanlin; Bertholet, Jenny; Ge, Jiajia; Poulsen, Per; Parikh, Parag
2016-03-01
In radiation therapy, fiducial markers are often implanted near tumors and used for patient positioning and respiratory gating purposes. These markers are then used to manually align the patients by matching the markers in the cone beam computed tomography (CBCT) reconstruction to those in the planning CT. This step is time-intensive and user-dependent, and often results in a suboptimal patient setup. We propose a fully automated, robust method based on dynamic programming (DP) for segmenting radiopaque fiducial markers in CBCT projection images, which are then used to automatically optimize the treatment couch position and/or gating window bounds. The mean of the absolute 2D segmentation error of our DP algorithm is 1.3 ± 1.0 mm for 87 markers on 39 patients. Intrafraction images were acquired every 3 s during treatment at two different institutions. For gated patients from Institution A (8 patients, 40 fractions), the DP algorithm increased the delivery accuracy (96 ± 6% versus 91 ± 11%, p < 0.01) compared to the manual setup using kV fluoroscopy. For non-gated patients from Institution B (6 patients, 16 fractions), the DP algorithm performed similarly (1.5 ± 0.8 mm versus 1.6 ± 0.9 mm, p = 0.48) compared to the manual setup matching the fiducial markers in the CBCT to the mean position. Our proposed automated patient setup algorithm only takes 1-2 s to run, requires no user intervention, and performs as well as or better than the current clinical setup.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, E; Lasio, G; Yi, B
2014-06-01
Purpose: The Iterative Subtraction Algorithm (ISA) method retrospectively generates a pre-selected motion-phase cone-beam CT image from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract the motion and no-motion components from the full-reconstruction CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion-deblurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) The projections are sorted into various phases, and from all phases a full reconstruction is performed to generate an image CTM. 2) Forward projections of CTM are generated at the desired phase projection angles; the subtraction of each projection and its forward projection reconstructs CTSub1, which diminishes the desired phase component. 3) By adding CTSub1 back to CTM, the no-motion CBCT, CTS1, can be computed. 4) CTS1 still contains a residual motion component. 5) This residual motion component can be further reduced by iteration. The ISA 4D-CBCT technique was implemented using the Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used. The reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to the reconstructions from unprocessed phase-sorted projections only. Conclusion: A CBCT motion-deblurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institutes of Health through R01CA133539.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cebe, M; Pacaci, P; Mabhouti, H
Purpose: In this study, the two available calculation algorithms of the Varian Eclipse treatment planning system (TPS), the electron Monte Carlo (eMC) and General Gaussian Pencil Beam (GGPB) algorithms, were used to compare measured and calculated peripheral dose distributions of electron beams. Methods: Peripheral dose measurements were carried out for 6, 9, 12, 15, 18 and 22 MeV electron beams of a Varian Trilogy machine using a parallel plate ionization chamber and EBT3 films in a slab phantom. Measurements were performed for 6×6, 10×10 and 25×25 cm² cone sizes at the dmax of each energy, up to 20 cm beyond the field edges. Using the same film batch, the net OD to dose calibration curve was obtained for each energy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. Dose distributions measured using the parallel plate ionization chamber and EBT3 film and calculated by the eMC and GGPB algorithms were compared. The measured and calculated data were then compared to find which algorithm calculates the peripheral dose distribution more accurately. Results: The agreement between measurement and eMC was better than for GGPB. The TPS underestimated the out-of-field doses. The difference between measured and calculated doses increases with the cone size. The largest deviation between calculated and parallel plate ionization chamber measured dose is less than 4.93% for eMC, but it can increase up to 7.51% for GGPB. For film measurement, the minimum gamma analysis passing rates between measured and calculated dose distributions were 98.2% and 92.7% for eMC and GGPB, respectively, for all field sizes and energies. Conclusion: Our results show that the Monte Carlo algorithm for electron planning in Eclipse is more accurate than previous algorithms for peripheral dose distributions. It must be emphasized that the use of GGPB for planning large-field treatments with 6 MeV could lead to inaccuracies of clinical significance.
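The gamma passing rates quoted throughout these dosimetry records all derive from the same test. Below is a simplified 1D sketch of the gamma index, assuming NumPy; this is our illustration, and clinical implementations work in 2D/3D with interpolation. It combines a dose-difference criterion with a distance-to-agreement criterion:

    import numpy as np

    def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
        """Global 1D gamma index; dd is the dose criterion as a fraction of
        the maximum reference dose (0.03 = 3%), dta is in mm."""
        d_norm = dd * d_ref.max()
        gammas = np.empty(len(d_ref))
        for i, (x0, d0) in enumerate(zip(x_ref, d_ref)):
            dist2 = ((x_eval - x0) / dta) ** 2
            dose2 = ((d_eval - d0) / d_norm) ** 2
            gammas[i] = np.sqrt(np.min(dist2 + dose2))   # best match anywhere
        return gammas

    # Passing rate for a 3%/3 mm criterion:
    # rate = 100.0 * np.mean(gamma_1d(x, d_meas, x, d_calc) <= 1.0)

A point passes when gamma <= 1, so the quoted percentages are simply the fraction of reference points with gamma at or below unity.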
Communication: Two measures of isochronal superposition
NASA Astrophysics Data System (ADS)
Roed, Lisa Anita; Gundermann, Ditte; Dyre, Jeppe C.; Niss, Kristine
2013-09-01
A liquid obeys isochronal superposition if its dynamics is invariant along the isochrones in the thermodynamic phase diagram (the curves of constant relaxation time). This paper introduces two quantitative measures of isochronal superposition. The measures are used to test the following six liquids for isochronal superposition: 1,2,6 hexanetriol, glycerol, polyphenyl ether, diethyl phthalate, tetramethyl tetraphenyl trisiloxane, and dibutyl phthalate. The latter four van der Waals liquids obey isochronal superposition to a higher degree than the two hydrogen-bonded liquids. This is a prediction of the isomorph theory, and it confirms findings by other groups.
Communication: Two measures of isochronal superposition.
Roed, Lisa Anita; Gundermann, Ditte; Dyre, Jeppe C; Niss, Kristine
2013-09-14
A liquid obeys isochronal superposition if its dynamics is invariant along the isochrones in the thermodynamic phase diagram (the curves of constant relaxation time). This paper introduces two quantitative measures of isochronal superposition. The measures are used to test the following six liquids for isochronal superposition: 1,2,6 hexanetriol, glycerol, polyphenyl ether, diethyl phthalate, tetramethyl tetraphenyl trisiloxane, and dibutyl phthalate. The latter four van der Waals liquids obey isochronal superposition to a higher degree than the two hydrogen-bonded liquids. This is a prediction of the isomorph theory, and it confirms findings by other groups.
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures
Theobald, Douglas L.; Wuttke, Deborah S.
2008-01-01
Summary THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907
Brain Wiring in the Fourth Dimension.
Wernet, Mathias F; Desplan, Claude
2015-07-02
In this issue of Cell, Langen et al. use time-lapse multiphoton microscopy to show how Drosophila photoreceptor growth cones find their targets. Based on the observed dynamics, they develop a simple developmental algorithm recapitulating the highly complex connectivity pattern of these neurons, suggesting a basic framework for establishing wiring specificity. Copyright © 2015 Elsevier Inc. All rights reserved.
Petitjean, Michel
2017-10-01
Some major protein families, such as carbonic anhydrases (CAs), have a conical cavity at the active site. No algorithm was available to compute conical cavities, so we needed to design one. The fast algorithm we designed allowed us to show, on a set of 717 CAs extracted from the PDB database, that γ-CAs are characterized by active site cavity cone angles significantly larger than those of α-CAs and β-CAs: the generatrix-axis angles are greater than 60° for the γ-CAs, while they are smaller than 50° for the other CAs. Free binaries of the CONICA software implementing the algorithm are available through a software repository at http://petitjeanmichel.free.fr/itoweb.petitjean.freeware.html. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Giovannetti, Vittorio; Lloyd, Seth; Maccone, Lorenzo
2008-04-25
A random access memory (RAM) uses n bits to randomly address N = 2^n distinct memory cells. A quantum random access memory (QRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(log N) switches need be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust QRAM algorithm, as it in general requires entanglement among exponentially fewer gates, and leads to an exponential decrease in the power needed for addressing. A quantum optical implementation is presented.
F-wave decomposition for time of arrival profile estimation.
Han, Zhixiu; Kong, Xuan
2007-01-01
F-waves are distally recorded muscle responses that result from "backfiring" of motor neurons following stimulation of peripheral nerves. Each F-wave response is a superposition of several motor unit responses (F-wavelets). The initial deflection of the earliest F-wavelet defines the traditional F-wave latency (FWL), and an earlier F-wavelet may mask F-wavelets traveling along slower (and possibly diseased) fibers. Unmasking the time of arrival (TOA) of late F-wavelets could improve the diagnostic value of the F-waves. An algorithm for F-wavelet decomposition is presented, followed by results of experimental data analysis.
Pure field theories and MACSYMA algorithms
NASA Technical Reports Server (NTRS)
Ament, W. S.
1977-01-01
A pure field theory attempts to describe physical phenomena through singularity-free solutions of field equations resulting from an action principle. The physics goes into forming the action principle and interpreting specific results. Algorithms for the intervening mathematical steps are sketched. Vacuum general relativity is a pure field theory, serving as a model and providing checks for generalizations. The fields of general relativity are the 10 components of a symmetric Riemannian metric tensor; those of the Einstein-Straus generalization are the 16 components of a nonsymmetric one. Algebraic properties are exploited in top-level MACSYMA commands toward performing some of the algorithms of that generalization. The light cone for the theory as left by Einstein and Straus is found and simplifications of that theory are discussed.
Resolving boosted jets with XCone
Thaler, Jesse; Wilkason, Thomas F.
2015-12-01
We show how the recently proposed XCone jet algorithm smoothly interpolates between resolved and boosted kinematics. When using standard jet algorithms to reconstruct the decays of hadronic resonances like top quarks and Higgs bosons, one typically needs separate analysis strategies to handle the resolved regime of well-separated jets and the boosted regime of fat jets with substructure. XCone, by contrast, is an exclusive cone jet algorithm that always returns a fixed number of jets, so jet regions remain resolved even when (sub)jets are overlapping in the boosted regime. In this paper, we perform three LHC case studies (dijet resonances, Higgs decays to bottom quarks, and all-hadronic top pairs) that demonstrate the physics applications of XCone over a wide kinematic range.
An Automated Algorithm for Identifying and Tracking Transverse Waves in Solar Images
NASA Astrophysics Data System (ADS)
Weberg, Micah J.; Morton, Richard J.; McLaughlin, James A.
2018-01-01
Recent instrumentation has demonstrated that the solar atmosphere supports omnipresent transverse waves, which could play a key role in energizing the solar corona. Large-scale studies are required in order to build up an understanding of the general properties of these transverse waves. To help facilitate this, we present an automated algorithm for identifying and tracking features in solar images and extracting the wave properties of any observed transverse oscillations. We test and calibrate our algorithm using a set of synthetic data, which includes noise and rotational effects. The results indicate an accuracy of 1%–2% for displacement amplitudes and 4%–10% for wave periods and velocity amplitudes. We also apply the algorithm to data from the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory and find good agreement with previous studies. Of note, we find that 35%–41% of the observed plumes exhibit multiple wave signatures, which indicates either the superposition of waves or multiple independent wave packets observed at different times within a single structure. The automated methods described in this paper represent a significant improvement on the speed and quality of direct measurements of transverse waves within the solar atmosphere. This algorithm unlocks a wide range of statistical studies that were previously impractical.
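Extracting wave properties from a tracked feature reduces to fitting a sinusoid to its transverse displacement time series. A hedged sketch follows, assuming NumPy/SciPy; this fits a single mode, whereas the study also handles multiple superposed wave signatures:

    import numpy as np
    from scipy.optimize import curve_fit

    def wave_model(t, amp, period, phase, offset):
        return amp * np.sin(2.0 * np.pi * t / period + phase) + offset

    def fit_transverse_wave(t, displacement, period_guess):
        """Fit one sinusoid; returns (amplitude, period, phase, offset).
        The velocity amplitude follows as 2*pi*amplitude/period."""
        p0 = [displacement.std() * np.sqrt(2.0), period_guess, 0.0,
              displacement.mean()]
        popt, _ = curve_fit(wave_model, t, displacement, p0=p0)
        return popt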
Almatani, Turki; Hugtenburg, Richard P; Lewis, Ryan D; Barley, Susan E; Edwards, Mark A
2016-10-01
Cone beam CT (CBCT) images contain more scatter than a conventional CT image and therefore provide inaccurate Hounsfield units (HUs). Consequently, CBCT images cannot be used directly for radiotherapy dose calculation. The aim of this study is to enable dose calculations to be performed with the use of CBCT images taken during radiotherapy and to evaluate the necessity of replanning. A patient with prostate cancer with bilateral metallic prosthetic hip replacements was imaged using both CT and CBCT. The multilevel threshold (MLT) algorithm was used to categorize pixel values in the CBCT images into segments of homogeneous HU. The variation in HU with position in the CBCT images was taken into consideration: this segmentation method relies on the operator dividing the CBCT data into a set of volumes in which the variation in the relationship between pixel values and HUs is small. An automated MLT algorithm was developed to reduce the operator time associated with the process. An intensity-modulated radiation therapy plan was generated from CT images of the patient. The plan was then copied to the segmented CBCT (sCBCT) data sets with identical settings, and the doses were recalculated and compared. Gamma evaluation showed that the percentages of points in the rectum with γ < 1 (3%/3 mm) were 98.7% and 97.7% in the sCBCT using the MLT and the automated MLT algorithms, respectively. Compared with the planning CT (pCT) plan, the MLT algorithm showed a -0.46% dose difference with 8 h of operator time, while the automated MLT algorithm showed -1.3%; both are considered clinically acceptable when using the collapsed cone algorithm. The segmentation of CBCT images using the method in this study can be used for dose calculation. For a patient with prostate cancer with bilateral hip prostheses and the associated issues with CT imaging, the MLT algorithms achieved dose calculation accuracy that is clinically acceptable. The automated MLT algorithm reduced the operator time needed to achieve clinically acceptable accuracy. This saved time makes the automated MLT algorithm superior and easier to implement in the clinical setting. The MLT algorithm has been extended to the complex example of a patient with bilateral hip prostheses and, with the introduction of automation, is feasible for use in adaptive radiotherapy as an alternative to obtaining a new pCT and reoutlining the structures.
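The multilevel-threshold idea can be sketched compactly. This is our illustration, assuming NumPy; the threshold and HU values below are hypothetical, and in practice the thresholds are chosen per operator-selected sub-volume because the pixel-value-to-HU relation varies with position in CBCT:

    import numpy as np

    def multilevel_threshold(cbct, thresholds, hu_values):
        """Map raw CBCT pixel values to bands of homogeneous HU.
        len(hu_values) must equal len(thresholds) + 1."""
        bands = np.digitize(cbct, thresholds)    # band index per voxel
        return np.asarray(hu_values)[bands]      # uniform HU per band

    # Hypothetical example: four thresholds define five tissue bands
    # (air, lung, soft tissue, bone, metal):
    # seg = multilevel_threshold(cbct, [200, 600, 1100, 2500],
    #                            [-1000, -700, 40, 700, 3000])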
Improved quantum backtracking algorithms using effective resistance estimates
NASA Astrophysics Data System (ADS)
Jarret, Michael; Wan, Kianna
2018-02-01
We investigate quantum backtracking algorithms of the type introduced by Montanaro (Montanaro, arXiv:1509.02374). These algorithms explore trees of unknown structure and in certain settings exponentially outperform their classical counterparts. Some of the previous work focused on obtaining a quantum advantage for trees in which a unique marked vertex is promised to exist. We remove this restriction by recharacterizing the problem in terms of the effective resistance of the search space. In this paper, we present a generalization of one of Montanaro's algorithms to trees containing k marked vertices, where k is not necessarily known a priori. Our approach involves using amplitude estimation to determine a near-optimal weighting of a diffusion operator, which can then be applied to prepare a superposition state with support only on marked vertices and ancestors thereof. By repeatedly sampling this state and updating the input vertex, a marked vertex is reached in a logarithmic number of steps. The algorithm thereby achieves the conjectured bound of \tilde{O}(\sqrt{T R_{\max}}) for finding a single marked vertex and \tilde{O}(k\sqrt{T R_{\max}}) for finding all k marked vertices, where T is an upper bound on the tree size and R_{\max} is the maximum effective resistance encountered by the algorithm. This constitutes a speedup over Montanaro's original procedure in both the case of finding one and the case of finding multiple marked vertices in an arbitrary tree.
Parsa, Azin; Ibrahim, Norliza; Hassan, Bassam; Motroni, Alessandro; van der Stelt, Paul; Wismeijer, Daniel
2012-01-01
To assess the reliability of cone beam computed tomography (CBCT) voxel gray value measurements using Hounsfield units (HU) derived from multislice computed tomography (MSCT) as a clinical reference (gold standard). Ten partially edentulous human mandibular cadavers were scanned by two types of computed tomography (CT) modalities: multislice CT and cone beam CT. On MSCT scans, eight regions of interest (ROI) designating the sites for preoperative implant placement were selected in each mandible. The datasets from both CT systems were matched using a three-dimensional (3D) registration algorithm. The mean voxel gray values of the regions around the implant sites were compared between MSCT and CBCT. Significant differences between the mean gray values obtained by CBCT and the HU obtained by MSCT were found. In all the selected ROIs, CBCT showed higher mean values than MSCT. A strong correlation (R=0.968) between the mean voxel gray values of CBCT and the mean HU of MSCT was determined. Voxel gray values from CBCT deviate from actual HU. However, a strong linear correlation exists, which may permit deriving actual HU from CBCT using linear regression models.
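Given the strong linear correlation reported, the proposed calibration amounts to a one-variable regression of the kind sketched below; the paired values are invented, and only the fitting approach reflects the abstract:

```python
import numpy as np

# Hypothetical ROI means: CBCT voxel gray values vs. MSCT HU (gold standard)
cbct_gray = np.array([420.0, 610.0, 830.0, 1150.0, 1400.0])
msct_hu = np.array([250.0, 410.0, 600.0, 890.0, 1120.0])

slope, intercept = np.polyfit(cbct_gray, msct_hu, 1)   # HU ~ a*gray + b
r = np.corrcoef(cbct_gray, msct_hu)[0, 1]              # study reported R = 0.968

def derived_hu(gray_value):
    """Estimate an MSCT-equivalent HU from a CBCT gray value."""
    return slope * gray_value + intercept
```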
CFD Extraction Tool for TecPlot From DPLR Solutions
NASA Technical Reports Server (NTRS)
Norman, David
2013-01-01
This invention is a macro, written in the TecPlot programming language, that processes data from DPLR solutions in TecPlot format. DPLR (Data-Parallel Line Relaxation) is a NASA computational fluid dynamics (CFD) code, and TecPlot is a commercial CFD post-processing tool. The TecPlot data is in SI units (same as DPLR output). The invention converts the SI units into British units. The macro modifies the TecPlot data with unit conversions and adds some extra calculations. After unit conversion, the macro cuts a slice and adds vectors to the current plot for output. The macro can also process surface solutions. Existing solutions use manual conversion and superposition. The conversion is complicated because it must be applied to a range of inter-related scalars and vectors to describe a 2D or 3D flow field. The macro processes the CFD solution to create superpositions/comparisons of scalars and vectors. The existing manual solution is cumbersome, open to errors, slow, and cannot be inserted into an automated process. This invention is quick and easy to use, and can be inserted into an automated data-processing algorithm.
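The invention itself is a TecPlot macro; purely to illustrate the conversion logic it automates, the following Python sketch applies the standard SI-to-British factors to a set of hypothetical field names:

```python
# SI-to-British conversion factors (length exact by definition)
FT_PER_M = 1.0 / 0.3048               # metres -> feet
PSF_PER_PA = 1.0 / 47.880258          # pascals -> pounds-force per sq. foot
R_PER_K = 1.8                         # kelvin -> degrees Rankine (absolute)
SLUGFT3_PER_KGM3 = 0.3048**3 / 14.5939029  # kg/m^3 -> slug/ft^3

def convert_si_to_british(fields):
    """Convert inter-related CFD scalars and vectors in one consistent pass.

    fields: dict of numpy arrays keyed by hypothetical variable names.
    """
    out = dict(fields)
    for key in ("x", "y", "z", "u", "v", "w"):   # coordinates and velocity
        out[key] = fields[key] * FT_PER_M
    out["pressure"] = fields["pressure"] * PSF_PER_PA
    out["temperature"] = fields["temperature"] * R_PER_K
    out["density"] = fields["density"] * SLUGFT3_PER_KGM3
    return out
```

Converting all related quantities in a single pass is what keeps the scalars and vectors mutually consistent, which is the error-prone part of the manual workflow.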
NASA Astrophysics Data System (ADS)
Asai, Kazuto
2009-02-01
We determine essentially all partial differential equations satisfied by superpositions of tree type and of a further special type. These equations represent necessary and sufficient conditions for an analytic function to be locally expressible as an analytic superposition of the type indicated. The representability of a real analytic function by a superposition of this type is independent of whether that superposition involves real-analytic functions or C^{\rho}-functions, where the constant \rho is determined by the structure of the superposition. We also prove that the function u defined by u^n = xu^a + yu^b + zu^c + 1 is generally non-representable in any real (resp. complex) domain as f\bigl(g(x,y), h(y,z)\bigr) with twice differentiable f and differentiable g, h (resp. analytic f, g, h).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Jaskowiak, J; Ahmad, S
Purpose: To investigate quantitatively the displacement-vector-fields (DVF) obtained from different deformable image registration (DIR) algorithms in helical (HCT), axial (ACT) and cone-beam CT (CBCT) used to register CT images of a mobile phantom, and their correlation with motion amplitudes and frequencies. Methods: HCT, ACT and CBCT are used to image a mobile phantom which includes three targets of different sizes that are manufactured from water-equivalent material and embedded in low-density foam. The phantom is moved with controlled motion patterns covering a range of motion amplitudes (0–40 mm) and frequencies (0.125–0.5 Hz). The CT images obtained from scanning of the mobile phantom are registered with the stationary CT images using four deformable image registration algorithms: demons, fast-demons, Horn-Schunck and Lucas-Kanade from the DIRART software. Results: The DVF calculated by the different algorithms correlate well with the motion amplitudes applied to the mobile phantom, where the maximal DVF increases linearly with the motion amplitude in CBCT. Similarly, in HCT the DVF increases linearly with motion amplitude; however, the correlation is weaker than in CBCT. In ACT, the DVFs do not correlate well with the motion amplitudes: motion induces strong image artifacts, and the DIR algorithms are not able to deform the ACT images of the mobile targets to the stationary targets. Three DIR algorithms produce comparable values and patterns of the DVF for a given CT imaging modality. However, the DVF from fast-demons deviated strongly from the other algorithms at large motion amplitudes. Conclusion: In CBCT and HCT, the DVF correlate well with the motion amplitude of the mobile phantom. However, in ACT, the DVF do not correlate with motion amplitudes. Correlations of DVF with motion amplitude in the CBCT and HCT imaging techniques can provide information about unknown motion parameters of mobile organs in real patients, as demonstrated in this phantom study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, T; UT Southwestern Medical Center, Dallas, TX; Yan, H
2014-06-15
Purpose: To develop a 3D dictionary learning based statistical reconstruction algorithm on graphics processing units (GPU), to improve the quality of low-dose cone beam CT (CBCT) imaging with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3x3x3 voxels was trained from a high-quality volume image. During reconstruction, we utilized a Cholesky decomposition based orthogonal matching pursuit algorithm to find a sparse representation on this dictionary basis of each patch in the reconstructed image, in order to regularize the image quality. To accelerate the time-consuming sparse coding in the 3D case, we implemented our algorithm in a parallel fashion by taking advantage of the tremendous computational power of the GPU. Evaluations are performed based on a head-neck patient case. FDK reconstruction with the full dataset of 364 projections is used as the reference. We compared the proposed 3D dictionary learning based method with a tight frame (TF) based one using a subset of 121 projections. The image qualities under different resolutions in the z-direction, with or without statistical weighting, are also studied. Results: Compared to the TF-based CBCT reconstruction, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, to remove more streaking artifacts, and is less susceptible to blocky artifacts. It is also observed that the statistical reconstruction approach is sensitive to inconsistency between the forward and backward projection operations in parallel computing. Using a high spatial resolution along the z direction helps improve the algorithm's robustness. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to sense structural information while suppressing noise, and hence to achieve high-quality reconstruction. The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential clinical application. A high z-resolution is preferred to stabilize statistical iterative reconstruction. This work was supported in part by NIH (1R01CA154747-01), NSFC (No. 61172163), the Research Fund for the Doctoral Program of Higher Education of China (No. 20110201110011), and the China Scholarship Council.
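For orientation, the sparse-coding step can be sketched as plain orthogonal matching pursuit; this toy version re-solves a least-squares problem at every iteration instead of using the paper's Cholesky-decomposition updates, and the dictionary size is only suggestive:

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Sparse-code a flattened patch x on dictionary D (unit-norm columns).

    For the 3D case above, D would be 27 x 256: each column a flattened
    3x3x3-voxel atom. Re-solving the least-squares problem each iteration
    keeps the residual orthogonal to all selected atoms.
    """
    support, coef = [], np.zeros(D.shape[1])
    residual = x.astype(float).copy()
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        atoms = D[:, support]
        sol, *_ = np.linalg.lstsq(atoms, x, rcond=None)
        residual = x - atoms @ sol
    coef[support] = sol
    return coef
```

During reconstruction, each patch's sparse approximation (D @ coef) would serve as the regularization target for that region of the image.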
A Primer on Vibrational Ball Bearing Feature Generation for Prognostics and Diagnostics Algorithms
2015-03-01
Report excerpts (table-of-contents and text fragments) reference the Zhao-Atlas-Marks (cone-shaped kernel) distribution and the Hilbert-Huang transform among the time-frequency feature-generation methods, and describe bearing failure modes: fatigue that begins below the bearing surface and eventually progresses to the surface where the material separates (also known as pitting, spalling, or flaking), and wear, the normal degradation caused by dirt and foreign particles abrading the contact surfaces over time, resulting in alterations in the raceway.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
Multimodel Kalman filtering for adaptive nonuniformity correction in infrared sensors.
Pezoa, Jorge E; Hayat, Majeed M; Torres, Sergio N; Rahman, Md Saifur
2006-06-01
We present an adaptive technique for the estimation of nonuniformity parameters of infrared focal-plane arrays that is robust with respect to changes and uncertainties in scene and sensor characteristics. The proposed algorithm is based on using a bank of Kalman filters in parallel. Each filter independently estimates state variables comprising the gain and the bias matrices of the sensor, according to its own dynamic-model parameters. The supervising component of the algorithm then generates the final estimates of the state variables by forming a weighted superposition of all the estimates rendered by each Kalman filter. The weights are computed and updated iteratively, according to the a posteriori likelihood principle. The performance of the estimator and its ability to compensate for fixed-pattern noise are tested using both simulated and real data obtained from two cameras operating in the mid- and long-wave infrared regimes.
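A scalar toy rendering of the bank-of-filters idea (the paper estimates full gain and bias matrices; here each filter tracks a single random-walk state with its own assumed noise levels, and all parameter names are ours):

```python
import numpy as np

def mmae_update(bank, weights, z):
    """One step of a bank of Kalman filters with a posteriori weighting.

    bank: list of dicts {"x": state, "P": variance, "q": process noise,
    "r": measurement noise}; weights: prior model probabilities; z: datum.
    Returns the fused (weighted-superposition) estimate and new weights.
    """
    likelihood = np.empty(len(bank))
    for i, f in enumerate(bank):
        P_pred = f["P"] + f["q"]                 # random-walk prediction
        S = P_pred + f["r"]                      # innovation variance
        nu = z - f["x"]                          # innovation
        likelihood[i] = np.exp(-0.5 * nu * nu / S) / np.sqrt(2 * np.pi * S)
        K = P_pred / S                           # Kalman gain
        f["x"] += K * nu
        f["P"] = (1.0 - K) * P_pred
    weights = weights * likelihood
    weights = weights / weights.sum()            # a posteriori probabilities
    fused = sum(w * f["x"] for w, f in zip(weights, bank))
    return fused, weights
```

Filters whose dynamic-model parameters match the data accumulate weight over iterations, which is what makes the supervising combination adaptive to scene and sensor changes.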
Library-based illumination synthesis for critical CMOS patterning.
Yu, Jue-Chin; Yu, Peichen; Chao, Hsueh-Yung
2013-07-01
In optical microlithography, the illumination source for critical complementary metal-oxide-semiconductor layers needs to be determined in the early stage of a technology node with very limited design information, leading to simple binary shapes. Recently, the availability of freeform sources permits us to increase pattern fidelity and relax mask complexities with minimal insertion risks to the current manufacturing flow. However, source optimization across many patterns is often treated as a design-of-experiments problem, which may not fully exploit the benefits of a freeform source. In this paper, a rigorous source-optimization algorithm is presented via linear superposition of optimal sources for pre-selected patterns. We show that analytical solutions are made possible by using Hopkins formulation and quadratic programming. The algorithm allows synthesized illumination to be linked with assorted pattern libraries, which has a direct impact on design rule studies for early planning and design automation for full wafer optimization.
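Schematically, the synthesized illumination is a weighted superposition of the per-pattern optimal sources; the sketch below stands in for the paper's Hopkins-formulation quadratic program with a simple nonnegative least-squares fit, and all names are ours:

```python
import numpy as np
from scipy.optimize import nnls

def fit_weights(metric_matrix, target):
    """Stand-in for the quadratic program: find w >= 0 so that
    metric_matrix.T @ w approximates a target imaging-metric vector.

    metric_matrix: (n_sources, n_metrics) precomputed response of each
    library source on the pattern set (hypothetical quantities).
    """
    w, _ = nnls(np.asarray(metric_matrix, dtype=float).T, target)
    return w

def synthesize_source(library_sources, w):
    """Linear superposition of per-pattern optimal freeform source maps."""
    w = np.asarray(w, dtype=float)
    return (w / w.sum()) @ np.asarray(library_sources, dtype=float)
```

The nonnegativity constraint reflects the physical requirement that source intensity cannot be negative.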
Kohno, Ryosuke; Hirano, Eriko; Kitou, Satoshi; Goka, Tomonori; Matsubara, Kana; Kameoka, Satoru; Matsuura, Taeko; Ariji, Takaki; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi
2010-07-01
In order to evaluate the usefulness of a metal oxide-silicon field-effect transistor (MOSFET) detector as an in vivo dosimeter, we performed in vivo dosimetry using the MOSFET detector in an anthropomorphic (RANDO) phantom. Dose measurements were carried out in the abdominal, thoracic, and head and neck regions for simple square field sizes of 10 x 10, 5 x 5, and 3 x 3 cm(2) with a 6-MV photon beam. The dose measured by the MOSFET detector was verified against the dose calculations of the superposition (SP) algorithm in the XiO radiotherapy treatment-planning system. In most cases, the measured doses agreed with the results of the SP algorithm within +/-3%. Our results demonstrated the utility of the MOSFET detector for in vivo dosimetry even in the presence of clinical tissue inhomogeneities.
Optimization-based scatter estimation using primary modulation for computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao
Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. The simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of the CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.
Technical note: RabbitCT--an open platform for benchmarking 3D cone-beam reconstruction algorithms.
Rohkohl, C; Keck, B; Hofmann, H G; Hornegger, J
2009-09-01
Fast 3D cone beam reconstruction is mandatory for many clinical workflows. For that reason, researchers and industry work hard on hardware-optimized 3D reconstruction. Backprojection is a major component of many reconstruction algorithms that require a projection of each voxel onto the projection data, including data interpolation, before updating the voxel value. This step is the bottleneck of most reconstruction algorithms and the focus of optimization in recent publications. A crucial limitation, however, of these publications is that the presented results are not comparable to each other. This is mainly due to variations in data acquisitions, preprocessing, and chosen geometries and the lack of a common publicly available test dataset. The authors provide such a standardized dataset that allows for substantial comparison of hardware accelerated backprojection methods. They developed an open platform RabbitCT (www.rabbitCT.com) for worldwide comparison in backprojection performance and ranking on different architectures using a specific high resolution C-arm CT dataset of a rabbit. This includes a sophisticated benchmark interface, a prototype implementation in C++, and image quality measures. At the time of writing, six backprojection implementations are already listed on the website. Optimizations include multithreading using Intel threading building blocks and OpenMP, vectorization using SSE, and computation on the GPU using CUDA 2.0. There is a need for objectively comparing backprojection implementations for reconstruction algorithms. RabbitCT aims to provide a solution to this problem by offering an open platform with fair chances for all participants. The authors are looking forward to a growing community and await feedback regarding future evaluations of novel software- and hardware-based acceleration schemes.
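The benchmarked bottleneck is, per view and voxel: project with the view's 3x4 geometry matrix, bilinearly interpolate the detector, and accumulate with distance weighting. A schematic Python rendering follows (RabbitCT's reference implementation is C++; detector bounds checking is omitted here):

```python
import numpy as np

def backproject_view(volume_coords, A, projection):
    """Voxel-driven cone-beam backprojection for one view.

    volume_coords: (N, 3) world coordinates of voxel centers
    A: 3x4 projection matrix for this view (RabbitCT ships per-view geometry)
    projection: 2D detector array for this view.
    Returns the per-voxel contribution of this view.
    """
    homo = np.hstack([volume_coords, np.ones((len(volume_coords), 1))])
    uvw = homo @ A.T                          # homogeneous detector coords
    u, v = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2]
    iu, iv = np.floor(u).astype(int), np.floor(v).astype(int)
    fu, fv = u - iu, v - iv                   # bilinear interpolation weights
    p = projection
    val = ((1 - fu) * (1 - fv) * p[iv, iu] + fu * (1 - fv) * p[iv, iu + 1]
           + (1 - fu) * fv * p[iv + 1, iu] + fu * fv * p[iv + 1, iu + 1])
    return val / uvw[:, 2] ** 2               # FDK-style distance weighting
```

The interpolation and the irregular memory access it implies are precisely what the SSE, OpenMP, and CUDA entries on the platform optimize.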
A post-reconstruction method to correct cupping artifacts in cone beam breast computed tomography
Altunbas, M. C.; Shaw, C. C.; Chen, L.; Lai, C.; Liu, X.; Han, T.; Wang, T.
2007-01-01
In cone beam breast computed tomography (CT), scattered radiation leads to nonuniform biasing of CT numbers known as a cupping artifact. Besides being visual distractions, cupping artifacts appear as background nonuniformities, which impair efficient gray scale windowing and pose a problem in threshold based volume visualization/segmentation. To overcome this problem, we have developed a background nonuniformity correction method specifically designed for cone beam breast CT. With this technique, the cupping artifact is modeled as an additive background signal profile in the reconstructed breast images. Due to the largely circularly symmetric shape of a typical breast, the additive background signal profile was also assumed to be circularly symmetric. The radial variation of the background signals was estimated by measuring the spatial variation of adipose tissue signals in front view breast images. To extract adipose tissue signals in an automated manner, a signal sampling scheme in polar coordinates and a background trend fitting algorithm were implemented. The background fits were compared with a targeted adipose tissue signal value (constant throughout the breast volume) to obtain an additive correction value for each tissue voxel. To test the accuracy, we applied the technique to cone beam CT images of mastectomy specimens. After correction, the images demonstrated significantly improved signal uniformity in both front and side view slices. The reduction of both intra-slice and inter-slice variations in adipose tissue CT numbers supported our observations. PMID:17822018
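A compact version of the correction idea, with a radial polynomial fit on a Cartesian grid standing in for the paper's polar-coordinate sampling scheme (the centered-breast assumption and polynomial order are ours):

```python
import numpy as np

def cupping_correction(slice_img, adipose_mask, target_hu, order=4):
    """Estimate a circularly symmetric additive background from adipose voxels.

    adipose_mask: boolean mask of adipose-tissue voxels in a front-view slice
    target_hu: desired constant adipose value throughout the breast volume.
    """
    ny, nx = slice_img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0    # assume breast roughly centered
    r = np.hypot(yy - cy, xx - cx)
    # Fit adipose signal vs. radius with a low-order polynomial (background trend)
    coeffs = np.polyfit(r[adipose_mask], slice_img[adipose_mask], order)
    background = np.polyval(coeffs, r)
    return slice_img + (target_hu - background)  # additive correction
```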
Cone Beam X-ray Luminescence Computed Tomography Based on Bayesian Method.
Zhang, Guanglei; Liu, Fei; Liu, Jie; Luo, Jianwen; Xie, Yaoqin; Bai, Jing; Xing, Lei
2017-01-01
X-ray luminescence computed tomography (XLCT), which aims to achieve molecular and functional imaging by X-rays, has recently been proposed as a new imaging modality. Combining the principles of X-ray excitation of luminescence-based probes and optical signal detection, XLCT naturally fuses functional and anatomical images and provides complementary information for a wide range of applications in biomedical research. In order to improve the data acquisition efficiency of previously developed narrow-beam XLCT, a cone beam XLCT (CB-XLCT) mode is adopted here to take advantage of the useful geometric features of cone beam excitation. Practically, a major hurdle in using cone beam X-rays for XLCT is that the inverse problem here is seriously ill-conditioned, hindering the achievement of good image quality. In this paper, we propose a novel Bayesian method to tackle the bottleneck in CB-XLCT reconstruction. The method utilizes a local regularization strategy based on a Gaussian Markov random field to mitigate the ill-conditioning of CB-XLCT. An alternating optimization scheme is then used to automatically calculate all the unknown hyperparameters while an iterative coordinate descent algorithm is adopted to reconstruct the image with a voxel-based closed-form solution. Results of numerical simulations and mouse experiments show that the self-adaptive Bayesian method significantly improves the CB-XLCT image quality as compared with conventional methods.
Cone Beam X-ray Luminescence Computed Tomography Based on Bayesian Method
Liu, Fei; Luo, Jianwen; Xie, Yaoqin; Bai, Jing
2017-01-01
X-ray luminescence computed tomography (XLCT), which aims to achieve molecular and functional imaging by X-rays, has recently been proposed as a new imaging modality. Combining the principles of X-ray excitation of luminescence-based probes and optical signal detection, XLCT naturally fuses functional and anatomical images and provides complementary information for a wide range of applications in biomedical research. In order to improve the data acquisition efficiency of previously developed narrow-beam XLCT, a cone beam XLCT (CB-XLCT) mode is adopted here to take advantage of the useful geometric features of cone beam excitation. Practically, a major hurdle in using cone beam X-rays for XLCT is that the inverse problem here is seriously ill-conditioned, hindering the achievement of good image quality. In this paper, we propose a novel Bayesian method to tackle the bottleneck in CB-XLCT reconstruction. The method utilizes a local regularization strategy based on a Gaussian Markov random field to mitigate the ill-conditioning of CB-XLCT. An alternating optimization scheme is then used to automatically calculate all the unknown hyperparameters while an iterative coordinate descent algorithm is adopted to reconstruct the image with a voxel-based closed-form solution. Results of numerical simulations and mouse experiments show that the self-adaptive Bayesian method significantly improves the CB-XLCT image quality as compared with conventional methods. PMID:27576245
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.
Theobald, Douglas L; Wuttke, Deborah S
2006-09-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
Liu, Xiang; Chandrasekhar, S; Winzer, P J; Chraplyvy, A R; Tkach, R W; Zhu, B; Taunay, T F; Fishteyn, M; DiGiovanni, D J
2012-08-13
Coherent superposition of light waves has long been used in various fields of science, and recent advances in digital coherent detection and space-division multiplexing have enabled the coherent superposition of information-carrying optical signals to achieve better communication fidelity on amplified-spontaneous-emission-noise limited communication links. However, fiber nonlinearity introduces highly correlated distortions on identical signals and diminishes the benefit of coherent superposition in the nonlinear transmission regime. Here we experimentally demonstrate that through coordinated scrambling of signal constellations at the transmitter, together with appropriate unscrambling at the receiver, the full benefit of coherent superposition is retained in the nonlinear transmission regime of a space-diversity fiber link based on an innovatively engineered multi-core fiber. This scrambled coherent superposition may provide the flexibility of trading communication capacity for performance in future optical fiber networks, and may open new possibilities in high-performance and secure optical communications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeda, Atsuya; Department of Radiology, Tokyo Metropolitan Hiroo General Hospital, Tokyo; Sanuki, Naoko
Purpose: To retrospectively analyze the clinical outcomes of stereotactic body radiotherapy (SBRT) for patients with Stage 1A and 1B non-small-cell lung cancer. Methods and Materials: We reviewed the records of patients with non-small-cell lung cancer treated with curative intent between Dec 2001 and May 2007. All patients had histopathologically or cytologically confirmed disease, increased levels of tumor markers, and/or positive findings on fluorodeoxyglucose positron emission tomography. Staging studies identified their disease as Stage 1A or 1B. Performance status was 2 or less according to World Health Organization guidelines in all cases. The prescribed dose of 50 Gy total in five fractions, calculated by using a superposition algorithm, was defined for the periphery of the planning target volume. Results: One hundred twenty-one patients underwent SBRT during the study period, and 63 were eligible for this analysis. Thirty-eight patients had Stage 1A (T1N0M0) and 25 had Stage 1B (T2N0M0). Forty-nine patients were not appropriate candidates for surgery because of chronic pulmonary disease. Median follow-up of these 49 patients was 31 months (range, 10-72 months). The 3-year local control, disease-free, and overall survival rates in patients with Stages 1A and 1B were 93% and 96% (p = 0.86), 76% and 77% (p = 0.83), and 90% and 63% (p = 0.09), respectively. No acute toxicity was observed. Grade 2 or higher radiation pneumonitis was experienced by 3 patients, and 1 of them had fatal bacterial pneumonia. Conclusions: SBRT at 50 Gy total in five fractions to the periphery of the planning target volume calculated by using a superposition algorithm is feasible. High local control rates were achieved for both T2 and T1 tumors.
NASA Astrophysics Data System (ADS)
Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan
2017-02-01
Sonic Infrared imaging (SIR) technology is a relatively new NDE technique that has received significant acceptance in the NDE community. SIR NDE is a super-fast, wide-range NDE method. The technology uses short pulses of ultrasonic excitation together with infrared imaging to detect defects in the structures under inspection. Defects become visible to the IR camera when the temperature in the crack vicinity increases due to various heating mechanisms in the specimen. Defect detection is highly affected by noise levels as well as mode patterns in the image. Mode patterns result from the superposition of sonic waves interfering within the specimen during the application of the sound pulse. Mode patterns can be a serious concern, especially in composite structures. Mode patterns can either mimic real defects in the specimen or, alternatively, hide defects if they overlap. At last year's QNDE, we presented algorithms to improve defect detectability in severe noise. In this paper, we present our development of defect-extraction algorithms that specifically target mode patterns in SIR images.
Implementation of a cone-beam backprojection algorithm on the cell broadband engine processor
NASA Astrophysics Data System (ADS)
Bockenbach, Olivier; Knaup, Michael; Kachelrieß, Marc
2007-03-01
Tomographic image reconstruction is computationally very demanding. In all cases the backprojection represents the performance bottleneck due to the high operation count and the high demand put on the memory subsystem. In the past, solving this problem has led to the implementation of specific architectures, connecting Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) to memory through dedicated high-speed busses. More recently, there have also been attempts to use Graphics Processing Units (GPUs) to perform the backprojection step. Originally aimed at the gaming market, the Cell Broadband Engine (CBE) processor, introduced by IBM, Toshiba and Sony, is often considered a multicomputer on a chip. Clocked at 3 GHz, the Cell allows for a theoretical performance of 192 GFlops and a peak data transfer rate over the internal bus of 200 GB/s. This performance indeed makes the Cell a very attractive architecture for implementing tomographic image reconstruction algorithms. In this study, we investigate the relative performance of a perspective backprojection algorithm when implemented on a standard PC and on the Cell processor. We compare these results to the performance achievable with FPGA-based boards and high-end GPUs. The cone-beam backprojection performance was assessed by backprojecting a full circle scan of 512 projections of 1024x1024 pixels into a volume of size 512x512x512 voxels. This took 3.2 minutes on the PC (single CPU) and as little as 13.6 seconds on the Cell.
Comparison of modal superposition methods for the analytical solution to moving load problems.
DOT National Transportation Integrated Search
1994-01-01
The response of bridge structures to moving loads is investigated using modal superposition methods. Two distinct modal superposition methods are available: the mode-displacement method and the mode-acceleration method. While the mode-displacement met...
The origin of non-classical effects in a one-dimensional superposition of coherent states
NASA Technical Reports Server (NTRS)
Buzek, V.; Knight, P. L.; Barranco, A. Vidiella
1992-01-01
We investigate the nature of the quantum fluctuations in a light field created by the superposition of coherent fields. We give a physical explanation (in terms of Wigner functions and phase-space interference) of why the 1-D superposition of coherent states in the direction of the x-quadrature leads to the squeezing of fluctuations in the y-direction, and show that such a superposition can generate the squeezed vacuum and squeezed coherent states.
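For concreteness, the 1-D superposition along the x-quadrature referred to here is usually written, for real α, as the even coherent-state superposition:

```latex
\[
  |\psi\rangle = N\bigl(|\alpha\rangle + |-\alpha\rangle\bigr), \qquad
  N = \bigl[\,2\bigl(1 + e^{-2\alpha^{2}}\bigr)\bigr]^{-1/2}, \qquad
  \alpha \in \mathbb{R},
\]
```

where the normalization N follows from the overlap ⟨α|−α⟩ = e^{-2α²}. For suitable α, the y-quadrature variance of this state falls below the vacuum level, which is the squeezing described above.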
WE-D-9A-02: Automated Landmark-Guided CT to Cone-Beam CT Deformable Image Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kearney, V; Gu, X; Chen, S
2014-06-15
Purpose: The anatomical changes that occur between the simulation CT and daily cone-beam CT (CBCT) are investigated using an automated landmark-guided deformable image registration (LDIR) algorithm with simultaneous intensity correction. LDIR was designed to be accurate in the presence of tissue intensity mismatch and heavy noise contamination. Method: An auto-landmark generation algorithm was used in conjunction with a local small volume (LSV) gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The LSV offsets were used to perform an initial deformation, generate landmarks, and correct local intensity mismatch. The landmarks act as stabilizing control points in the Demons objective function. The accuracy of the LDIR algorithm was evaluated on one synthetic case with ground truth and data from ten head and neck cancer patients. The deformation vector field (DVF) accuracy was assessed using the synthetic case. The root mean square error of the 3D Canny edge (RMSECE), mutual information (MI), and feature similarity index metric (FSIM) were used to assess the accuracy of LDIR on the patient data. The quality of the corresponding deformed contours was verified by an attending physician. Results: The resulting 90th percentile DVF error for the synthetic case was within 5.63 mm for the original Demons algorithm, 2.84 mm for intensity correction alone, 2.45 mm using control points without intensity correction, and 1.48 mm for the LDIR algorithm. For the patients, the mean RMSECE of the original CT, Demons deformed CT, intensity corrected Demons CT, control-point stabilized deformed CT, and LDIR CT was 0.24, 0.26, 0.20, 0.20, and 0.16, respectively. Conclusion: LDIR is accurate in the presence of multimodal intensity mismatch and CBCT noise contamination. Since LDIR is GPU based, it can be implemented with minimal additional strain on clinical resources. This project has been supported by a CPRIT individual investigator award RP11032.
Shading correction assisted iterative cone-beam CT reconstruction
NASA Astrophysics Data System (ADS)
Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye
2017-11-01
Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration of the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and improves the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for low-dose CBCT imaging applications in the clinic.
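In symbols (our schematic notation, not the authors'), with A the forward projector, b the measured data, W the statistical weights, and s the empirically generated shading-compensation image, the modified problem can be written as:

```latex
\[
  \hat{x} \;=\; \arg\min_{x}\;
  \tfrac{1}{2}\,\lVert A x - b \rVert_{W}^{2}
  \;+\; \lambda\, \mathrm{TV}(x + s),
\]
```

so the data-fidelity term is left intact while the piecewise-constant prior is enforced on the shading-compensated image rather than on the artifact-contaminated reconstruction itself.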
Motion and positional error correction for cone beam 3D-reconstruction with mobile C-arms.
Bodensteiner, C; Darolti, C; Schumacher, H; Matthäus, L; Schweikard, A
2007-01-01
CT images acquired by mobile C-arm devices can contain artefacts caused by positioning errors. We propose a data-driven method based on iterative 3D reconstruction and 2D/3D registration to correct projection data inconsistencies. With a 2D/3D registration algorithm, transformations are computed to align the acquired projection images to a previously reconstructed volume. In an iterative procedure, the reconstruction algorithm uses the results of the registration step. This algorithm also reduces small motion artefacts within 3D reconstructions. Experiments with simulated projections from real patient data show the feasibility of the proposed method. In addition, experiments with real projection data acquired with an experimental robotised C-arm device have been performed with promising results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrison, Hali; Menon, Geetha; Sloboda, Ron
Purpose: To investigate the accuracy of model-based dose calculations using a collapsed-cone algorithm for COMS eye plaques loaded with I-125 seeds. Methods: The Nucletron SelectSeed 130.002 I-125 seed and the 12 mm COMS eye plaque were incorporated into a research version of the Oncentra® Brachy v4.5 treatment planning system which uses the Advanced Collapsed-cone Engine (ACE) algorithm. Comparisons of TG-43 and high-accuracy ACE doses were performed for a single seed in a 30×30×30 cm³ water box, as well as with one seed in the central slot of the 12 mm COMS eye plaque. The doses along the plaque central axis (CAX) were used to calculate the carrier correction factor, T(r), and were compared to tabulated and MCNP6-simulated doses for both the SelectSeed and IsoAid IAI-125A seeds. Results: The ACE calculated dose for the single seed in water was on average within 0.62 ± 2.2% of the TG-43 dose, with the largest differences occurring near the end-welds. The ratio of ACE to TG-43 calculated doses along the CAX (T(r)) of the 12 mm COMS plaque for the SelectSeed was on average within 3.0% of previously tabulated data, and within 2.9% of the MCNP6 simulated values. The IsoAid and SelectSeed T(r) values agreed within 0.3%. Conclusions: Initial comparisons show good agreement between ACE and MC doses for a single seed in a 12 mm COMS eye plaque; more complicated scenarios are being investigated to determine the accuracy of this calculation method.
NASA Astrophysics Data System (ADS)
Hashemi, Sayed Masoud; Lee, Young; Eriksson, Markus; Nordström, Håkan; Mainprize, James; Grouza, Vladimir; Huynh, Christopher; Sahgal, Arjun; Song, William Y.; Ruschin, Mark
2017-03-01
A Contrast and Attenuation-map (CT-number) Linearity Improvement (CALI) framework is proposed for cone-beam CT (CBCT) images used for brain stereotactic radiosurgery (SRS). The proposed framework is used together with our high spatial resolution iterative reconstruction algorithm and is tailored for the Leksell Gamma Knife ICON (Elekta, Stockholm, Sweden). The incorporated CBCT system in the ICON facilitates frameless SRS planning and treatment delivery. The ICON employs a half-cone geometry to accommodate the existing treatment couch. This geometry increases the amount of artifacts and, together with other physical imperfections, causes image inhomogeneity and contrast reduction. Our proposed framework includes a preprocessing step, involving a shading and beam-hardening artifact correction, and a post-processing step to correct the dome/capping artifact caused by the spatial variations in x-ray energy introduced by the bowtie filter. Our shading correction algorithm relies solely on the acquired projection images (i.e. no prior information is required) and utilizes filtered-back-projection (FBP) reconstructed images to generate a segmented bone and soft-tissue map. Ideal projections are estimated from the segmented images, and a smoothed version of the difference between the ideal and measured projections is used in the correction. The proposed beam-hardening and dome artifact corrections are segmentation free. CALI was tested on CatPhan, as well as patient images acquired on the ICON system. The resulting clinical brain images show substantial improvements in soft-tissue contrast visibility, revealing structures such as ventricles and lesions that were otherwise undetectable in FBP-reconstructed images. The linearity of the reconstructed attenuation map was also improved, resulting in more accurate CT numbers.
Optimized Orthovoltage Stereotactic Radiosurgery
NASA Astrophysics Data System (ADS)
Fagerstrom, Jessica M.
Because of its ability to treat intracranial targets effectively and noninvasively, stereotactic radiosurgery (SRS) is a prevalent treatment modality in modern radiation therapy. This work focused on SRS delivering rectangular-function dose distributions, which are desirable for some targets such as those with functional tissue included within the target volume. In order to achieve such distributions, this work used fluence modulation and energies lower than those utilized in conventional SRS. In this work, the relationship between prescription isodose and dose gradients was examined for standard, unmodulated orthovoltage SRS dose distributions. Monte Carlo-generated energy deposition kernels were used to calculate 4π isocentric dose distributions for a polyenergetic orthovoltage spectrum, as well as monoenergetic orthovoltage beams. The relationship between dose gradients and prescription isodose was found to be field-size and energy dependent, and values were found for the prescription isodose that optimize dose gradients. Next, a pencil-beam model was used with a genetic algorithm search heuristic to optimize the spatial distribution of added tungsten filtration within the apertures of cone collimators in a moderately filtered 250 kVp beam. Four cone sizes at three depths were examined with a Monte Carlo model to determine the effects of the optimized modulation compared to open cones, and the simulations found that the optimized cones achieved both improved penumbra and flatness statistics at depth compared to the open cones. Prototypes of the filter designs calculated using mathematical optimization techniques and Monte Carlo simulations were then manufactured and inserted into custom-built orthovoltage SRS cone collimators. A positioning system built in-house was used to place the collimator and filter assemblies temporarily in the 250 kVp beam line. Measurements were performed in water using radiochromic film scanned with both a standard white-light flatbed scanner and a prototype laser densitometry system. Measured beam profiles showed that the modulated beams more closely approach rectangular-function dose profiles compared to the open cones. A methodology has been described and implemented to achieve optimized SRS delivery, including the development of working prototypes. Future work may include the construction of a full treatment platform.
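To make the search step concrete, here is a toy genetic algorithm over a radially binned tungsten-thickness profile; the population size, mutation scale, generation count, and fitness function are placeholders rather than the dissertation's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_search(fitness, n_genes, pop=40, gens=200, mut=0.1, t_max=2.0):
    """Toy GA over radially binned tungsten thicknesses (mm).

    fitness: callable scoring a thickness profile, e.g. penalizing the
    penumbra width and non-flatness of the modeled dose profile at depth.
    """
    population = rng.uniform(0.0, t_max, size=(pop, n_genes))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in population])
        order = np.argsort(scores)[::-1]          # higher fitness is better
        parents = population[order[: pop // 2]]   # truncation selection
        cut = rng.integers(1, n_genes, size=pop // 2)
        children = np.array([np.concatenate((parents[i][:c],
                                             parents[(i + 1) % len(parents)][c:]))
                             for i, c in enumerate(cut)])  # one-point crossover
        children += rng.normal(0.0, mut, children.shape)   # Gaussian mutation
        population = np.clip(np.vstack((parents, children)), 0.0, t_max)
    return population[np.argmax([fitness(ind) for ind in population])]
```

In the dissertation's setting, the fitness evaluation would call the pencil-beam dose model for the candidate filter, which is what makes a derivative-free heuristic like this attractive.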
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clemente, F; Perez, C
Purpose: Redundant treatment verifications in conformal and intensity-modulated radiation therapy techniques are traditionally performed with single-point calculations. New solutions can replace these checks with 3D treatment plan verifications. This work describes a software tool (Mobius3D, Mobius Medical Systems) that uses a GPU-accelerated collapsed cone algorithm to perform 3D independent verifications of TPS calculations. Methods: Mobius3D comes with reference beam models for common linear accelerators. The system uses an independently developed collapsed cone algorithm updated with recent enhancements. 144 isotropically-spaced cones are used for each voxel in the calculations. These complex calculations can be sped up by using GPUs. Mobius3D calculates dose using DICOM information coming from the TPS (CT, RT Struct, RT Plan, RT Dose). DVH metrics and 3D gamma tests can be used to compare the TPS and secondary calculations. 170 patients treated with all common techniques, including 3DCFRT (wedged fields included), static and dynamic IMRT, and VMAT, have been successfully verified with this solution. Results: Calculation times are between 3–5 minutes for 3DCFRT treatments and 15–20 minutes for the most complex dMLC and VMAT plans. For all PTVs, the mean dose and 90% coverage differences are (1.12±0.97)% and (0.68±1.19)%, respectively. The mean dose discrepancy for all OARs is (0.64±1.00)%. 3D gamma (global, 3%/3 mm) analysis shows a mean passing rate of (97.8±3.0)% for PTVs and (99.0±3.0)% for OARs. The 3D gamma passing rate for all voxels in the CT has a mean value of (98.5±1.6)%. Conclusion: Mobius3D is a powerful tool to verify all modalities of radiation therapy treatments. Dose discrepancies calculated by this system are in good agreement with the TPS. The use of reference beam data results in time savings and can be used to avoid the propagation of errors in original beam data into our QA system. GPU calculations permit enhanced collapsed cone calculations with reasonable calculation times.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C; Han, M; Baek, J
Purpose: To investigate the detectability of a small target for different slice directions of a volumetric cone beam CT image and the impact on dose reduction. Methods: Analytic projection data of a sphere object (1 mm diameter, 0.2/cm attenuation coefficient) were generated and reconstructed by the FDK algorithm. In this work, we compared the detectability of the small target for four different backprojection methods: Hanning-weighted ramp filter with linear interpolation (RECON 1), Hanning-weighted ramp filter with Fourier interpolation (RECON 2), ramp filter with linear interpolation (RECON 3), and ramp filter with Fourier interpolation (RECON 4). For noise simulation, 200 photons per measurement were used, and the noise-only data were reconstructed using the FDK algorithm. For each reconstructed volume, axial and coronal slices were extracted and the detection-SNR was calculated using a channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. Results: The detection-SNR of coronal images varies for different backprojection methods, while axial images have a similar detection-SNR. The detection-SNR² ratios of coronal and axial images in RECON 1 and RECON 2 are 1.33 and 1.15, implying that the coronal image has better detectability than the axial image. In other words, using coronal slices for small target detection can reduce the patient dose by about 33% and 15% compared to using axial slices in RECON 1 and RECON 2. Conclusion: In this work, we investigated the slice-direction-dependent detectability of a volumetric cone beam CT image. RECON 1 and RECON 2 produced the highest detection-SNR, with better detectability in coronal slices. These results indicate that it is more beneficial to use coronal slices to improve the detectability of a small target in a volumetric cone beam CT image. This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Program (NIPA-2014-H0201-14-1002) supervised by the NIPA (National IT Industry Promotion Agency). The authors declare no conflict of interest in relation to the work in this abstract.
Priori mask guided image reconstruction (p-MGIR) for ultra-low dose cone-beam computed tomography
NASA Astrophysics Data System (ADS)
Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Kahler, Darren L.; Liu, Chihray; Lu, Bo
2015-11-01
Recently, the compressed sensing (CS) based iterative reconstruction method has received attention because of its ability to reconstruct cone beam computed tomography (CBCT) images with good quality using sparsely sampled or noisy projections, thus enabling dose reduction. However, some challenges remain. In particular, there is always a tradeoff between image resolution and noise/streak artifact reduction based on the amount of regularization weighting that is applied uniformly across the CBCT volume. The purpose of this study is to develop a novel low-dose CBCT reconstruction algorithm framework called priori mask guided image reconstruction (p-MGIR) that allows reconstruction of high-quality low-dose CBCT images while preserving the image resolution. In p-MGIR, the unknown CBCT volume was mathematically modeled as a combination of two regions: (1) where anatomical structures are complex, and (2) where intensities are relatively uniform. The priori mask, which is the key concept of the p-MGIR algorithm, was defined as the matrix that distinguishes between the two separate CBCT regions: where the resolution needs to be preserved and where streaks or noise need to be suppressed. We then alternately updated each part of the image by solving two sub-minimization problems iteratively, where one minimization was focused on preserving the edge information of the first part while the other concentrated on the removal of noise/artifacts from the latter part. To evaluate the performance of the p-MGIR algorithm, a numerical head-and-neck phantom, a Catphan 600 physical phantom, and a clinical head-and-neck cancer case were used for analysis. The results were compared with the standard Feldkamp-Davis-Kress as well as conventional CS-based algorithms. Examination of the p-MGIR algorithm showed that high-quality low-dose CBCT images can be reconstructed without compromising the image resolution. For both the phantom and the patient cases, p-MGIR is able to achieve a clinically reasonable image with 60 projections. Therefore, a clinically viable, high-resolution head-and-neck CBCT image can be obtained while cutting the dose by 83%. Moreover, the image quality obtained using p-MGIR is better than that obtained using the other algorithms. In this work, we propose a novel low-dose CBCT reconstruction algorithm called p-MGIR. It can potentially be used as a CBCT reconstruction algorithm for low-dose scan protocols.
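One schematic way to write the two-region split (our notation, not the authors' exact formulation), with M the binary priori mask and ⊙ the elementwise product:

```latex
\[
  \min_{x}\; \tfrac{1}{2}\lVert Ax - b\rVert^{2}
  \;+\; \lambda_{1}\, R_{\mathrm{edge}}\!\bigl(M \odot x\bigr)
  \;+\; \lambda_{2}\, R_{\mathrm{smooth}}\!\bigl((\mathbf{1}-M) \odot x\bigr),
\]
```

where the two regularizers are handled by the two alternating sub-minimizations: an edge-preserving penalty on the structurally complex region and a stronger smoothing penalty on the uniform region.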
Stochastic Semidefinite Programming: Applications and Algorithms
2012-03-03
Report metadata excerpts list: Baha M. Alzalg and K. A. Ariyawansa, stochastic symmetric programming over integers, International Conference on Scientific Computing, Las Vegas, Nevada, July 18–21, 2011; Baha M. Alzalg, "On recent..." (title truncated in source); and a proceedings paper, Baha M. Alzalg and K. A. Ariyawansa, "Stochastic mixed integer second-order cone programming."
Measurement of Air Flow Characteristics Using Seven-Hole Cone Probes
NASA Technical Reports Server (NTRS)
Takahashi, Timothy T.
1997-01-01
The motivation for this work has been the development of a wake survey system. A seven-hole probe can measure the distribution of static pressure, total pressure, and flow angularity in a wind tunnel environment. The author describes the development of a simple, very efficient algorithm to compute flow properties from probe tip pressures. Its accuracy and applicability to unsteady, turbulent flow are discussed.
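The abstract gives no formulas, so the sketch below only illustrates the general shape of such probe algorithms: form nondimensional coefficients from the tip pressures, then evaluate wind-tunnel calibration fits. The port numbering and pairings here are assumptions, not the author's method:

```python
import numpy as np

def low_angle_coefficients(p):
    """Illustrative nondimensional coefficients for a seven-hole cone probe.

    p: seven tip pressures with p[0] the centre hole and p[1:7] the outer
    ring (port conventions are probe-specific assumptions).
    """
    ring = p[1:7]
    q = p[0] - ring.mean()                 # pseudo dynamic pressure
    c_pitch = (ring[0] - ring[3]) / q      # opposed vertical port pair
    c_yaw = (ring[1] - ring[4]) / q        # opposed horizontal port pair
    return c_pitch, c_yaw, q

def angles_from_calibration(c_pitch, c_yaw, cal):
    """Evaluate calibration polynomials (fitted to wind-tunnel data) to
    recover flow angles in degrees; `cal` holds the coefficient arrays."""
    return np.polyval(cal["alpha"], c_pitch), np.polyval(cal["beta"], c_yaw)
```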
Three-dimensional monochromatic x-ray CT
NASA Astrophysics Data System (ADS)
Saito, Tsuneo; Kudo, Hiroyuki; Takeda, Tohoru; Itai, Yuji; Tokumori, Kenji; Toyofuku, Fukai; Hyodo, Kazuyuki; Ando, Masami; Nishimura, Katsuyuki; Uyama, Chikao
1995-08-01
In this paper, we describe a 3D computed tomography (3D CT) system using monochromatic x-rays generated by synchrotron radiation, which performs a direct reconstruction of the 3D volume image of an object from its cone-beam projections. For the development of 3D CT, the scanning orbit of the x-ray source needed to obtain complete 3D information about an object and the corresponding 3D image reconstruction algorithm are considered. Computer simulation studies demonstrate the validity of the proposed scanning method and reconstruction algorithm. A prototype experimental system for 3D CT was constructed. Basic phantom examinations and a material-specific CT image obtained by energy subtraction in this experimental system are shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, A; Paysan, P; Brehm, M
2016-06-15
Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts like cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.
Efstratiadis, Stella; Baumrind, Sheldon; Shofer, Frances; Jacobsson-Hunt, Ulla; Laster, Larry; Ghafari, Joseph
2005-11-01
The aims of this study were (1) to evaluate cephalometric changes in subjects with Class II Division 1 malocclusion who were treated with headgear (HG) or Fränkel function regulator (FR) and (2) to compare findings from regional superpositions of cephalometric structures with those from conventional cephalometric measurements. Cephalographs were taken at baseline, after 1 year, and after 2 years of 65 children enrolled in a prospective randomized clinical trial. The spatial location of the landmarks derived from regional superpositions was evaluated in a coordinate system oriented on natural head position. The superpositions included the best anatomic fit of the anterior cranial base, maxillary base, and mandibular structures. Both the HG and the FR were effective in correcting the distoclusion, and they generated enhanced differential growth between the jaws. Differences between cranial and maxillary superpositions regarding mandibular displacement (Point B, pogonion, gnathion, menton) were noted: the HG had a more horizontal vector on maxillary superposition that was also greater (.0001 < P < .05) than the horizontal displacement observed with the FR. This discrepancy appeared to be related to (1) the clockwise (backward) rotation of the palatal and mandibular planes observed with the HG; the palatal plane's rotation, which was transferred through the occlusion to the mandibular plane, was factored out on maxillary superposition; and (2) the interaction between the inclination of the maxillary incisors and the forward movement of the mandible during growth. Findings from superpositions agreed with conventional angular and linear measurements regarding the basic conclusions for the primary effects of HG and FR. However, the results suggest that inferences of mandibular displacement are more reliable from maxillary than cranial superposition when evaluating occlusal changes during treatment.
TU-H-BRC-05: Stereotactic Radiosurgery Optimized with Orthovoltage Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagerstrom, J; Culberson, W; Bender, E
2016-06-15
Purpose: To achieve improved stereotactic radiosurgery (SRS) dose distributions using orthovoltage energy fluence modulation with inverse planning optimization techniques. Methods: A pencil beam model was used to calculate dose distributions from the institution’s orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods as well as measurements with radiochromic film. The orthovoltage photon spectra, modulated by varying thicknesses of attenuating material, were approximated using open-source software. A genetic algorithm search heuristic routine was used to optimize added tungsten filtration thicknesses to approach rectangular function dose distributions at depth. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 8, 10, and 12 mm. Results: Circularly-symmetric tungsten filters were designed based on the results of the optimization, to modulate the orthovoltage beam across the aperture of an SRS cone collimator. For each depth and cone size combination examined, the beam flatness and 80–20% and 90–10% penumbrae were calculated for both standard, open cone-collimated beams as well as for the optimized, filtered beams. For all configurations tested, the modulated beams were able to achieve improved penumbra widths and flatness statistics at depth, with flatness improving between 33 and 52%, and penumbrae improving between 18 and 25% for the modulated beams compared to the unmodulated beams. Conclusion: A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions at depth with improved flatness and penumbrae compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system.
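A minimal sketch of the kind of genetic-algorithm search described above, here optimizing a radial tungsten thickness profile against a flat (rectangular) target profile under an assumed monoenergetic attenuation model. The beam model, the attenuation coefficient, and the GA settings are illustrative stand-ins, not the authors' pencil beam model.

    import numpy as np

    rng = np.random.default_rng(0)
    MU_W = 3.0                                     # assumed effective attenuation coefficient of tungsten (1/cm)
    radii = np.linspace(0.0, 1.0, 32)              # normalized radial positions across the cone aperture
    target = np.ones_like(radii)                   # idealized rectangular (flat) profile at depth

    def dose_profile(thickness):
        open_beam = np.exp(-1.5 * radii ** 2)      # assumed unfiltered profile, hottest on the axis
        return open_beam * np.exp(-MU_W * thickness)

    def fitness(thickness):
        d = dose_profile(thickness)
        return -np.mean((d / d.max() - target) ** 2)

    def genetic_search(pop_size=60, n_gen=300, mut_sigma=0.02):
        pop = rng.uniform(0.0, 0.5, size=(pop_size, radii.size))      # candidate thickness profiles (cm)
        for _ in range(n_gen):
            scores = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
            kids = parents.copy()
            for k in range(len(kids)):                                # one-point crossover
                c = rng.integers(1, radii.size)
                kids[k, c:] = parents[(k + 1) % len(parents), c:]
            kids += rng.normal(0.0, mut_sigma, kids.shape)            # Gaussian mutation
            pop = np.vstack([parents, np.clip(kids, 0.0, None)])
        return pop[np.argmax([fitness(p) for p in pop])]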
A preliminary investigation of ROI-image reconstruction with the rebinned BPF algorithm
NASA Astrophysics Data System (ADS)
Bian, Junguo; Xia, Dan; Yu, Lifeng; Sidky, Emil Y.; Pan, Xiaochuan
2008-03-01
The back-projection filtration (BPF) algorithm is capable of reconstructing ROI images from truncated data acquired with a wide class of general trajectories. However, it has been observed that, similar to other algorithms for convergent beam geometries, the BPF algorithm involves a spatially varying weighting factor in the backprojection step. This weighting factor can not only increase the computation load, but also amplify the noise in reconstructed images. The weighting factor can be eliminated by appropriately rebinning the measured cone-beam data into fan-parallel-beam data. Such an appropriate data rebinning not only removes the weighting factor, but also retains other favorable properties of the BPF algorithm. In this work, we conduct a preliminary study of the rebinned BPF algorithm and its noise properties. Specifically, we consider an application in which the detector and source can move in several directions for achieving ROI data acquisition. The combined motion of the detector and source generally forms a complex trajectory. We investigate image reconstruction within an ROI from data acquired in such applications.
Nonintegrable Schrodinger discrete breathers.
Gómez-Gardeñes, J; Floría, L M; Peyrard, M; Bishop, A R
2004-12-01
In an extensive numerical investigation of nonintegrable translational motion of discrete breathers in nonlinear Schrödinger lattices, we have used a regularized Newton algorithm to continue these solutions from the limit of the integrable Ablowitz-Ladik lattice. These solutions are shown to be a superposition of a localized moving core and an excited extended state (background) to which the localized moving pulse is spatially asymptotic. The background is a linear combination of small amplitude nonlinear resonant plane waves and it plays an essential role in the energy balance governing the translational motion of the localized core. Perturbative collective variable theory predictions are critically analyzed in the light of the numerical results.
Investigation on the Accuracy of Superposition Predictions of Film Cooling Effectiveness
NASA Astrophysics Data System (ADS)
Meng, Tong; Zhu, Hui-ren; Liu, Cun-liang; Wei, Jian-sheng
2018-05-01
Film cooling effectiveness on flat plates with double rows of holes has been studied experimentally and numerically in this paper. This configuration is widely used to simulate multi-row film cooling on turbine vanes. The film cooling effectiveness of double rows of holes and of each single row was used to study the accuracy of superposition predictions. A stable infrared measurement technique was used to measure the surface temperature on the flat plate. This paper analyzes the factors that affect the film cooling effectiveness, including hole shape, hole arrangement, row-to-row spacing, and blowing ratio. Numerical simulations were performed to analyze the flow structure and film cooling mechanisms between the film cooling rows. Results show that the blowing ratio, within the range of 0.5 to 2, has a significant influence on the accuracy of superposition predictions. At low blowing ratios, results obtained by the superposition method agree well with the experimental data, while at high blowing ratios the accuracy of the superposition prediction decreases. Another significant factor is hole arrangement. Results obtained by superposition prediction are nearly the same as the experimental values for staggered arrangements. For in-line configurations, the superposition values of film cooling effectiveness are much higher than the experimental data. For different hole shapes, the accuracy of superposition predictions is better for converging-expanding holes than for cylindrical holes and compound angle holes. For the two hole-spacing structures considered in this paper, predictions show good agreement with the experimental results.
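The abstract does not spell out the superposition rule; the classical Sellers relation commonly used for multi-row film-cooling predictions, 1 - eta_total = prod_i (1 - eta_i), can be sketched as follows, assuming each row's single-row effectiveness is measured at the same streamwise station.

    def superpose_effectiveness(eta_rows):
        # Sellers rule: 1 - eta_total = product of (1 - eta_i) over the rows,
        # i.e. each downstream row acts on the film-adjusted driving temperature.
        eta_total = 0.0
        for eta in eta_rows:
            eta_total += eta * (1.0 - eta_total)
        return eta_total

    # Two rows measured individually at the same station (illustrative values):
    print(superpose_effectiveness([0.30, 0.25]))   # -> 0.475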
Quantum superposition at the half-metre scale.
Kovachy, T; Asenbaum, P; Overstreet, C; Donnelly, C A; Dickerson, S M; Sugarbaker, A; Hogan, J M; Kasevich, M A
2015-12-24
The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger's cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov-Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity.
A New Algorithm with Plane Waves and Wavelets for Random Velocity Fields with Many Spatial Scales
NASA Astrophysics Data System (ADS)
Elliott, Frank W.; Majda, Andrew J.
1995-03-01
A new Monte Carlo algorithm for constructing and sampling stationary isotropic Gaussian random fields with power-law energy spectrum, infrared divergence, and fractal self-similar scaling is developed here. The theoretical basis for this algorithm involves the fact that such a random field is well approximated by a superposition of random one-dimensional plane waves involving a fixed finite number of directions. In general each one-dimensional plane wave is the sum of a random shear layer and a random acoustical wave. These one-dimensional random plane waves are then simulated by a wavelet Monte Carlo method for a single space variable developed recently by the authors. The computational results reported in this paper demonstrate remarkably low variance and economical representation of such Gaussian random fields through this new algorithm. In particular, the velocity structure function for an incompressible isotropic Gaussian random field in two space dimensions with the Kolmogoroff spectrum can be simulated accurately over 12 decades with only 100 realizations of the algorithm, with the scaling exponent accurate to 1.1% and the constant prefactor accurate to 6%; in fact, the exponent of the velocity structure function can be computed over 12 decades to within 3.3% with only 10 realizations. Furthermore, only 46,592 active computational elements are utilized in each realization to achieve these results for 12 decades of scaling behavior.
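A toy NumPy version of the construction described above: a 2-D Gaussian field synthesized as a superposition of 1-D random plane waves over a fixed finite set of directions, with an assumed power-law spectrum. The authors' wavelet Monte Carlo sampling of each 1-D wave is replaced here by direct Fourier-mode sampling for brevity.

    import numpy as np

    rng = np.random.default_rng(1)

    def plane_wave_field(nx=128, n_dir=16, n_modes=64, alpha=5.0 / 3.0):
        # 2-D Gaussian field as a superposition of 1-D random plane waves over
        # n_dir fixed directions, with mode amplitudes following E(k) ~ k**(-alpha).
        x = np.linspace(0.0, 1.0, nx)
        X, Y = np.meshgrid(x, x)
        field = np.zeros((nx, nx))
        ks = np.arange(1, n_modes + 1)
        for th in np.pi * np.arange(n_dir) / n_dir:
            s = X * np.cos(th) + Y * np.sin(th)            # coordinate along this direction
            amps = ks ** (-alpha / 2.0) * rng.normal(size=n_modes)
            phases = rng.uniform(0.0, 2.0 * np.pi, n_modes)
            for k, a, p in zip(ks, amps, phases):          # accumulate the 1-D random wave
                field += a * np.cos(2.0 * np.pi * k * s + p)
        return field / np.sqrt(n_dir)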
A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging.
Yan, Hao; Zhen, Xin; Folkerts, Michael; Li, Yongbao; Pan, Tinsu; Cervino, Laura; Jiang, Steve B; Jia, Xun
2014-07-01
4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen. However, clinical application of 4D-CBCT is currently limited by the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is developed to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding the anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3-0.5 mm for patients 1-3 are observed. As for the image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by factors of 12.74 and 5.12 compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1-1.5 min per phase. High-quality 4D-CBCT imaging based on the clinically standard 1-min 3D CBCT scanning protocol is feasible via the proposed hybrid reconstruction algorithm.
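The splitting structure can be illustrated generically: forward-backward splitting alternates a gradient step on the smooth term with a proximal step on the non-smooth term. The sketch below applies it to an assumed toy LASSO problem rather than the paper's reconstruction/registration pair, purely to show the iteration pattern.

    import numpy as np

    def forward_backward(grad_f, prox_g, x0, step=0.5, n_iter=100):
        # Generic forward-backward splitting: x <- prox_g(x - step * grad_f(x)).
        x = x0.copy()
        for _ in range(n_iter):
            x = prox_g(x - step * grad_f(x), step)
        return x

    # Toy instance: minimize 0.5*||A x - b||^2 + lam*||x||_1 (assumed example problem).
    rng = np.random.default_rng(2)
    A = rng.normal(size=(40, 100))
    b = A @ (rng.normal(size=100) * (rng.random(100) < 0.1))   # sparse ground truth
    lam = 0.1
    grad_f = lambda x: A.T @ (A @ x - b)                        # gradient of the smooth data term
    prox_g = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft threshold
    x_hat = forward_backward(grad_f, prox_g, np.zeros(100),
                             step=1.0 / np.linalg.norm(A, 2) ** 2)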
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Ahmad, S; Alsbou, N
Purpose: A motion algorithm was developed to extract the actual length, CT numbers, and motion amplitude of a mobile target imaged with cone-beam CT (CBCT), retrospective to image reconstruction. Methods: The motion model considered a mobile target moving sinusoidally and employed three measurable parameters, the apparent length, CT-number level, and gradient of a mobile target obtained from CBCT images, to extract information about the actual length and CT-number value of the stationary target and the motion amplitude. The algorithm was verified experimentally with a mobile phantom setup that has three targets of different sizes manufactured from homogeneous tissue-equivalent gel material embedded in a thorax phantom. The phantom moved sinusoidally in one direction using eight amplitudes (0–20 mm) and a frequency of 15 cycles per minute. The model required imaging parameters such as slice thickness and imaging time. Results: The motion algorithm extracted three unknown parameters, the length of the target, the CT-number level, and the motion amplitude, for a mobile target retrospective to CBCT image reconstruction. The algorithm relates the three unknown parameters to the measurable apparent length, CT-number level, and gradient for well-defined mobile targets obtained from CBCT images. The motion model agreed with the measured apparent lengths, which were dependent on the actual length of the target and the motion amplitude. The cumulative CT number for a mobile target was dependent on the CT-number level of the stationary target and the motion amplitude. The gradient of the CT distribution of a mobile target is dependent on the stationary CT-number level, the actual target length along the direction of motion, and the motion amplitude. Motion frequency and phase did not affect the elongation and CT-number distributions of mobile targets when the imaging time included several motion cycles. Conclusion: The motion algorithm developed in this study has potential applications in diagnostic CT imaging and radiotherapy to extract the actual length, size, and CT numbers distorted by motion in CBCT imaging. The model provides further information about the motion of the target.
Comparison of four stable numerical methods for Abel's integral equation
NASA Technical Reports Server (NTRS)
Murio, Diego A.; Mejia, Carlos E.
1991-01-01
The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data, on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction) are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.
Numerical Aspects of Cone Beam Contour Reconstruction
NASA Astrophysics Data System (ADS)
Louis, Alfred K.
2017-12-01
We describe a method for directly calculating the contours of a function from cone beam data. The algorithm is based on a new inversion formula for the gradient of a function presented in Louis (Inverse Probl 32(11):115005, 2016. http://stacks.iop.org/0266-5611/32/i=11/a=115005). The Radon transform of the gradient is found by using a Grangeat type of formula, reducing the inversion problem to the inversion of the Radon transform. In that way the influence of the scanning curve, vital for all exact inversion formulas for complete data, is avoided. Numerical results are presented for the circular scanning geometry, which neither fulfills the Tuy-Kirillov condition nor the much weaker condition given by the author in Louis (Inverse Probl 32(11):115005, 2016. http://stacks.iop.org/0266-5611/32/i=11/a=115005).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, K; Leung, R; Law, G
Background: The commercial treatment planning system Pinnacle3 (Philips, Fitchburg, WI, USA) employs a convolution-superposition (CS) algorithm for volumetric-modulated arc radiotherapy (VMAT) optimization and dose calculation. Study of Monte Carlo (MC) dose recalculation of VMAT plans for advanced-stage nasopharyngeal cancers (NPC) is currently limited. Methods: Twenty-nine VMAT plans prescribing 70Gy, 60Gy, and 54Gy to the planning target volumes (PTVs) were included. These clinical plans, achieved with the CS dose engine of Pinnacle3 v9.0, were recalculated by the Monaco TPS v5.0 (Elekta, Maryland Heights, MO, USA) with an XVMC-based MC dose engine. The MC virtual source model was built using the same measurement beam dataset as for the Pinnacle beam model. All MC recalculations were based on absorbed dose to medium in medium (Dm,m). Differences in dose constraint parameters per our institution protocol (Supplementary Table 1) were analyzed. Results: Only the differences in maximum dose to the left brachial plexus, left temporal lobe, and PTV54Gy were found to be statistically insignificant (p > 0.05). Dosimetric differences for the other tumor targets and normal organs are given in Supplementary Table 1. Generally, doses outside the PTV in the normal organs are lower with MC than with CS. This is also true for the PTV54-70Gy doses, but a higher dose in the nasal cavity near bone interfaces is consistently predicted by MC, possibly due to increased backscattering of short-range scattered photons and secondary electrons that is not properly modeled by CS. The straight shoulders of the PTV dose-volume histograms (DVH) initially resulting from the CS optimization are barely preserved after MC recalculation. Conclusion: Significant dosimetric differences in VMAT NPC plans were observed between CS and MC calculations. Adjustments of the planning dose constraints to incorporate the physics differences from the conventional CS algorithm should be made when VMAT optimization is carried out directly with an MC dose engine.
Thermalization as an invisibility cloak for fragile quantum superpositions
NASA Astrophysics Data System (ADS)
Hahn, Walter; Fine, Boris V.
2017-07-01
We propose a method for protecting fragile quantum superpositions in many-particle systems from dephasing by external classical noise. We call superpositions "fragile" if dephasing occurs particularly fast, because the noise couples very differently to the superposed states. The method consists of letting a quantum superposition evolve under the internal thermalization dynamics of the system, followed by a time-reversal manipulation known as a Loschmidt echo. The thermalization dynamics makes the superposed states almost indistinguishable during most of the above procedure. We validate the method by applying it to a cluster of spin-1/2 particles.
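The echo itself is easy to demonstrate numerically: evolve a superposition under an internal Hamiltonian, then apply the time-reversed evolution and check that the initial state is recovered. The random Hamiltonian below is an illustrative stand-in for the paper's spin-cluster dynamics, not its specific model.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)

    dim = 2 ** 3                                  # toy 3-qubit cluster
    H = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    H = (H + H.conj().T) / 2                      # random Hermitian "thermalizing" Hamiltonian

    psi0 = np.zeros(dim, dtype=complex)
    psi0[0], psi0[-1] = 1 / np.sqrt(2), 1 / np.sqrt(2)   # a fragile |000> + |111> superposition

    T = 5.0
    psi_scrambled = expm(-1j * H * T) @ psi0      # forward thermalization scrambles the state
    psi_back = expm(+1j * H * T) @ psi_scrambled  # Loschmidt echo: time-reversed evolution
    print(abs(np.vdot(psi0, psi_back)) ** 2)      # -> 1.0 (perfect echo absent external noise)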
Zhang, Shuqing; Zhou, Luyang; Xue, Changxi; Wang, Lei
2017-09-10
Compound eyes offer a promising approach to miniaturized imaging systems. In a superposition compound eye, the system forms a composite image by superposing the images produced by different channels. The geometric configuration of superposition compound eye systems is achieved by three micro-lens arrays with different pitches and focal lengths. High resolution is indispensable for the practicality of superposition compound eye systems. In this paper, hybrid diffractive-refractive lenses are introduced into the design of a compound eye system for this purpose. With the help of ZEMAX, two superposition compound eye systems, with and without hybrid diffractive-refractive lenses, were designed separately. We then demonstrate the effectiveness of using a hybrid diffractive-refractive lens to improve the image quality.
Modeling a color-rendering operator for high dynamic range images using a cone-response function
NASA Astrophysics Data System (ADS)
Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju
2015-09-01
Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of psychophysical experiments based on the human visual system. A color-rendering model that combines tone-mapping and cone-response functions in an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color of HDR images when mapped onto relatively LDR devices. The tone-mapped image is obtained using chromatic and achromatic colors to avoid the well-known color distortions of conventional methods. The resulting image is then processed with a cone-response function, wherein emphasis is placed on human visual perception (HVP). The proposed method reduces the mismatch between the actual scene and the rendered image from the standpoint of HVP. The experimental results show that the proposed method yields improved color-rendering performance compared to conventional methods.
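As a rough sketch of this kind of operator (not the authors' fitted model), the snippet below compresses HDR luminance with a Naka-Rushton-type cone-response function in XYZ space and rescales X and Z to preserve chromaticity. The exponent and the semi-saturation choice are assumptions.

    import numpy as np

    def cone_response_tonemap(hdr_xyz, n=0.73, sigma=None):
        # hdr_xyz: (..., 3) array of XYZ tristimulus values; Y is luminance.
        X, Y, Z = hdr_xyz[..., 0], hdr_xyz[..., 1], hdr_xyz[..., 2]
        if sigma is None:
            sigma = np.exp(np.mean(np.log(Y + 1e-6)))   # semi-saturation from log-average luminance
        Yn = Y ** n
        Y_tm = Yn / (Yn + sigma ** n)                   # Naka-Rushton-type cone-response compression
        ratio = np.where(Y > 0, Y_tm / (Y + 1e-9), 0.0) # scale X and Z to keep chromaticity
        return np.stack([X * ratio, Y_tm, Z * ratio], axis=-1)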
NASA Astrophysics Data System (ADS)
Sambath, P.; Pullepu, Bapuji; Hussain, T.; Ali Shehzad, Sabir
2018-03-01
The effect of thermal radiation on laminar natural convective hydromagnetic flow of a viscous incompressible fluid past a vertical cone with mass transfer, under the influence of a chemical reaction with heat source/sink, is presented here. The surface of the cone is subjected to a variable wall temperature (VWT) and variable wall concentration (VWC). The fluid considered here is a gray, absorbing-emitting, but non-scattering medium. The dimensionless boundary-layer equations governing the flow are solved by an implicit Crank-Nicolson finite-difference scheme, which converges rapidly and is stable. This method converts the dimensionless equations into a system of tridiagonal equations, which are then solved using the well-known Thomas algorithm. Numerical solutions are obtained for momentum, temperature, concentration, local and average shear stress, and heat and mass transfer rates for various values of the parameters Pr, Sc, λ, Δ, and Rd, and are presented graphically. We observed that the fluid velocity decreases for higher values of the Prandtl and Schmidt numbers. The temperature increases for decreasing values of the Schmidt and Prandtl numbers. An increase in the radiation parameter imparts more heat to the fluid, significantly enhancing the temperature.
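For reference, the Thomas algorithm used in this solution step is the standard O(n) forward-elimination/back-substitution solver for tridiagonal systems. A minimal NumPy sketch follows; the interface and variable names are illustrative, not the authors' code.

    import numpy as np

    def thomas(a, b, c, d):
        # Solve a tridiagonal system: a is the sub-diagonal (length n-1), b the
        # diagonal (length n), c the super-diagonal (length n-1), d the RHS (length n).
        n = len(b)
        cp, dp = np.empty(n - 1), np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            m = b[i] - a[i - 1] * cp[i - 1]
            if i < n - 1:
                cp[i] = c[i] / m
            dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x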
Sensitivity of NTCP parameter values against a change of dose calculation algorithm.
Brink, Carsten; Berg, Martin; Nielsen, Morten
2007-09-01
Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations, the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
Sensitivity of NTCP parameter values against a change of dose calculation algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brink, Carsten; Berg, Martin; Nielsen, Morten
2007-09-15
Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations, the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
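One widely used example of such a model is the Lyman-Kutcher-Burman (LKB) NTCP, whose fitted parameters (TD50, m, n) are exactly the kind of quantities whose algorithm dependence is at issue here. A minimal sketch, assuming a differential DVH as input (illustrative, not the paper's fitting code):

    import numpy as np
    from scipy.stats import norm

    def ntcp_lkb(doses, volumes, td50, m, n):
        # doses: DVH bin doses (Gy); volumes: fractional volumes (summing to 1);
        # td50, m, n: the LKB parameters whose values depend on the dose algorithm.
        geud = (np.sum(volumes * doses ** (1.0 / n))) ** n   # generalized EUD of the organ
        t = (geud - td50) / (m * td50)
        return norm.cdf(t)                                    # probit dose-response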
NASA Astrophysics Data System (ADS)
Tumuklu, Ozgur; Levin, Deborah A.; Theofilis, Vassilis
2018-04-01
Shock-dominated hypersonic laminar flows over a double cone are investigated using time-accurate direct simulation Monte Carlo combined with the residuals algorithm for unit Reynolds numbers gradually increasing from 9.35 × 10^4 to 3.74 × 10^5 m^-1 at a Mach number of about 16. The main flow features, such as the strong bow shock, the location of the separation shock, the triple point, and the entire laminar separated region, show a time-dependent behavior. Although the separation shock angle is found to be similar for all Re numbers, the effects of Reynolds number on the structure and extent of the separation region are profound. As the Reynolds number is increased, larger pressure values in the under-expanded jet region due to strong shock interactions form more prominent λ-shocklets in the supersonic region between the two contact surfaces. Likewise, the surface parameters, especially on the second cone surface, show a strong dependence on the Reynolds number, with skin friction, pressure, and surface heating rates increasing and velocity slip and temperature jump values decreasing for increasing Re number. A Kelvin-Helmholtz instability arising at the shear layer results in an unsteady flow for the highest Reynolds number. These findings suggest that consideration of experimental measurement times is important when it comes to determining steady-state surface parameters, even for a relatively simple double cone geometry at moderately large Reynolds numbers.
Cone-beam volume CT mammographic imaging: feasibility study
NASA Astrophysics Data System (ADS)
Chen, Biao; Ning, Ruola
2001-06-01
X-ray projection mammography, using a film/screen combination or digital techniques, has proven to be the most effective imaging modality for early detection of breast cancer currently available. However, the inherent superimposition of structures makes small carcinomas (a few millimeters in size) difficult to detect in the occult case or in dense breasts, resulting in a high false-positive biopsy rate. Cone-beam x-ray projection based volume imaging using flat panel detectors (FPDs) makes it possible to obtain three-dimensional breast images. This may benefit diagnosis of the structure and pattern of the lesion while eliminating hard compression of the breast. This paper presents a novel cone-beam volume CT mammographic imaging protocol based on the above techniques. Through computer simulation, the key issues of the system and imaging techniques, including the x-ray imaging geometry and corresponding reconstruction algorithms, x-ray characteristics of breast tissues, x-ray technique settings, absorbed dose estimation, and the quantitative effect of x-ray scattering on image quality, are addressed. The preliminary simulation results support the proposed cone-beam volume CT mammographic imaging modality with respect to feasibility and practicability for mammography. The absorbed dose level is comparable to that of current two-view mammography and would not be a prominent problem for this imaging protocol. Compared to traditional mammography, the proposed imaging protocol with isotropic spatial resolution will potentially provide significantly better low-contrast detectability of breast tumors and more accurate localization of breast lesions.
Antecedent Synoptic Environments Conducive to North American Polar/Subtropical Jet Superpositions
NASA Astrophysics Data System (ADS)
Winters, A. C.; Keyser, D.; Bosart, L. F.
2017-12-01
The atmosphere often exhibits a three-step pole-to-equator tropopause structure, with each break in the tropopause associated with a jet stream. The polar jet stream (PJ) typically resides in the break between the polar and subtropical tropopause and is positioned atop the strongly baroclinic, tropospheric-deep polar front around 50°N. The subtropical jet stream (STJ) resides in the break between the subtropical and the tropical tropopause and is situated on the poleward edge of the Hadley cell around 30°N. On occasion, the latitudinal separation between the PJ and the STJ can vanish, resulting in a vertical jet superposition. Prior case study work indicates that jet superpositions are often attended by a vigorous transverse vertical circulation that can directly impact the production of extreme weather over North America. Furthermore, this work suggests that there is considerable variability among antecedent environments conducive to the production of jet superpositions. These considerations motivate a comprehensive study to examine the synoptic-dynamic mechanisms that operate within the double-jet environment to produce North American jet superpositions. This study focuses on the identification of North American jet superposition events in the CFSR dataset during November-March 1979-2010. Superposition events will be classified into three characteristic types: "Polar Dominant" events will consist of events during which only the PJ is characterized by a substantial excursion from its climatological latitude band; "Subtropical Dominant" events will consist of events during which only the STJ is characterized by a substantial excursion from its climatological latitude band; and "Hybrid" events will consist of those events characterized by an excursion of both the PJ and STJ from their climatological latitude bands. Following their classification, frequency distributions of jet superpositions will be constructed to highlight the geographical locations most often associated with jet superpositions for each event type. PV inversion and composite analysis will also be performed on each event type in an effort to illustrate the antecedent environments and the dominant synoptic-dynamic mechanisms that favor the production of North American jet superpositions for each event type.
Monteiro, Bruna Moraes; Nobrega Filho, Denys Silveira; Lopes, Patrícia de Medeiros Loureiro; de Sales, Marcelo Augusto Oliveira
2012-01-01
The aim of this study was to analyze the influence of image-enhancement filters (algorithms) on Cone Beam Computed Tomography (CBCT) images in the diagnosis of osteolytic lesions of the mandible, in order to establish the viewing protocols most suitable for CBCT diagnostics. Fifteen dry mandibles, in which perforations simulating lesions had been made, were submitted to CBCT examination. Two examiners analyzed the images on two occasions, using the image-enhancement filters Hard, Normal, and Very Sharp contained in the iCAT Vision software, and the following assessment protocols: axial; sagittal and coronal; and axial, sagittal, and coronal planes simultaneously (MPR). The sensitivity and specificity (validity) of CBCT were demonstrated, as the values achieved were above 75% for sensitivity and above 85% for specificity, reaching around 95.5% sensitivity and 99% specificity when the appropriate observation protocol was used. It was concluded that the use of image-enhancement filters (algorithms) influences CBCT diagnosis, since all measured values were correspondingly higher when the Very Sharp filter was used, which justifies its use in clinical practice, followed by the Hard and Normal filters, in order of decreasing values. PMID:22956955
CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction
Zhen, Xin; Gu, Xuejun; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.
2012-01-01
Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image, as well as ensuring intensity consistency between the two modalities. DISC is implemented on computer graphics processing units (GPUs) in the compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and six clinical head-and-neck cancer patient data sets. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons. PMID:23032638
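The moment-matching step can be sketched compactly with local mean and variance filters; the patch size and the uniform patch shape below are illustrative assumptions, not the authors' GPU implementation.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def moment_match(cbct, ct, patch=9, eps=1e-6):
        # Match the first and second moments of each CBCT patch to the
        # corresponding patch of the (currently deformed) CT image.
        mu_c = uniform_filter(cbct, patch)                     # local means
        mu_t = uniform_filter(ct, patch)
        var_c = uniform_filter(cbct ** 2, patch) - mu_c ** 2   # local variances
        var_t = uniform_filter(ct ** 2, patch) - mu_t ** 2
        scale = np.sqrt(np.maximum(var_t, 0) / (np.maximum(var_c, 0) + eps))
        return (cbct - mu_c) * scale + mu_t                    # corrected CBCT intensities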
NASA Astrophysics Data System (ADS)
Han, Xiao; Pearson, Erik; Pelizzari, Charles; Al-Hallaq, Hania; Sidky, Emil Y.; Bian, Junguo; Pan, Xiaochuan
2015-06-01
A kilovoltage (kV) cone-beam computed tomography (CBCT) unit mounted on a linear accelerator treatment system, often referred to as an on-board imager (OBI), plays an increasingly important role in image-guided radiation therapy. While the FDK algorithm is currently used for reconstructing images from clinical OBI data, optimization-based reconstruction has also been investigated for OBI CBCT. An optimization-based reconstruction involves numerous parameters, which can significantly impact reconstruction properties (or utility). The success of an optimization-based reconstruction for a particular class of practical applications thus relies strongly on the appropriate selection of parameter values. In this work, we focus on tailoring constrained-TV-minimization-based reconstruction, an optimization-based reconstruction previously shown to have potential for CBCT imaging conditions of practical interest, to OBI imaging through appropriate selection of parameter values. In particular, for real phantom and patient data collected with OBI CBCT, we first devise utility metrics specific to OBI quality-assurance tasks and then apply them to guide the selection of parameter values in constrained-TV-minimization-based reconstruction. The results show that the reconstructions improve upon clinical FDK reconstruction in both visual and quantitative assessments in terms of the devised utility metrics.
Calculating observables in inhomogeneous cosmologies. Part I: general framework
NASA Astrophysics Data System (ADS)
Hellaby, Charles; Walters, Anthony
2018-02-01
We lay out a general framework for calculating the variation of a set of cosmological observables, down the past null cone of an arbitrarily placed observer, in a given arbitrary inhomogeneous metric. The observables include redshift, proper motions, area distance and redshift-space density. Of particular interest are observables that are zero in the spherically symmetric case, such as proper motions. The algorithm is based on the null geodesic equation and the geodesic deviation equation, and it is tailored to creating a practical numerical implementation. The algorithm provides a method for tracking which light rays connect moving objects to the observer at successive times. Our algorithm is applied to the particular case of the Szekeres metric. A numerical implementation has been created and some results will be presented in a subsequent paper. Future work will explore the range of possibilities.
NASA Astrophysics Data System (ADS)
Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun
2018-02-01
Quantum computing provides computational capability superior to classical computing because of the superposition feature. Distinguishing several quantum states from quantum algorithm outputs is often a vital computational task. In most cases, the quantum states tend to be non-orthogonal due to superposition; quantum mechanics shows that such states cannot be perfectly distinguished by any measurement, forcing repeated measurements. Hence, it is important to determine the optimal measurement method, one which requires fewer repetitions and has a lower error rate. However, extending current measurement approaches, which mainly target quantum cryptography, to multi-qubit situations for quantum computing confronts challenges, such as the need for global operations, which carry considerable costs in the experimental realm. Therefore, in this study, we propose an optimal subsystem method to avoid these difficulties. We provide an analysis comparing the reduced subsystem method with the global minimum-error method for two-qubit problems; the conclusions have been verified experimentally. The results show that the subsystem method can effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states, and that the cost of the experimental process is significantly reduced, in most circumstances with an acceptable error rate. We believe the optimal subsystem method is a valuable and promising approach for multi-qubit quantum computing applications.
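The global benchmark mentioned above, the minimum-error (Helstrom) measurement for two pure states, has a closed-form error probability. A small sketch with an assumed pair of non-orthogonal two-qubit states:

    import numpy as np

    def helstrom_error(psi, phi, p=0.5):
        # Minimum achievable error probability for discriminating two pure
        # states with prior probability p for psi (Helstrom bound).
        overlap = abs(np.vdot(psi, phi)) ** 2
        return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * p * (1.0 - p) * overlap))

    # Two non-orthogonal two-qubit product states (illustrative example):
    plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
    zero = np.array([1.0, 0.0])
    psi = np.kron(zero, zero)       # |00>
    phi = np.kron(plus, plus)       # |++>, overlap with |00> is 1/4
    print(helstrom_error(psi, phi)) # -> about 0.067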
Kasaven, C P; McIntyre, G T; Mossey, P A
2017-01-01
Our objective was to assess the accuracy of virtual and printed 3-dimensional models derived from cone-beam computed tomographic (CT) scans for measuring the volume of alveolar clefts before bone grafting. Fifteen subjects with unilateral cleft lip and palate had i-CAT cone-beam CT scans recorded at a 0.2 mm voxel size and sectioned transversely into slices 0.2 mm thick using i-CAT Vision. Volumes of alveolar clefts were calculated using, first, a validated algorithm; secondly, commercially available virtual 3-dimensional model software; and finally, 3-dimensional printed models, which were scanned with microCT and analysed using 3-dimensional software. For inter-observer reliability, a two-way mixed-model intraclass correlation coefficient (ICC) was used to evaluate the reproducibility of identification of the cranial and caudal limits of the clefts among three observers. We used a Friedman test to assess the significance of differences among the methods, and probabilities of less than 0.05 were accepted as significant. Inter-observer reliability was almost perfect (ICC=0.987). There were no significant differences among the three methods. Virtual and printed 3-dimensional models were as precise as the validated computer algorithm in the calculation of volumes of the alveolar cleft before bone grafting, but virtual 3-dimensional models were the most accurate, with the smallest 95% CI, and, subject to further investigation, could be a useful adjunct in clinical practice. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Non-classical State via Superposition of Two Opposite Coherent States
NASA Astrophysics Data System (ADS)
Ren, Gang; Du, Jian-ming; Yu, Hai-jun
2018-04-01
We study the non-classical properties of the states generated by superpositions of two opposite coherent states with arbitrary relative phase factors. We show that the relative phase factor plays an important role in these superpositions. We demonstrate this result by discussing their squeezing properties, quantum statistical properties, and fidelity.
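The phase dependence can be explored numerically in a truncated Fock basis: build the superposition N(|alpha> + e^{i phi} |-alpha>) and compute a quadrature variance, with values below the vacuum level of 1/4 indicating squeezing. The truncation dimension and the value of alpha below are illustrative choices.

    import numpy as np
    from math import factorial

    DIM = 40                                             # Fock-space truncation (illustrative)

    def coherent(alpha, dim=DIM):
        n = np.arange(dim)
        fact = np.array([float(factorial(int(k))) for k in n])
        return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt(fact)

    def cat_state(alpha, phi, dim=DIM):
        psi = coherent(alpha, dim) + np.exp(1j * phi) * coherent(-alpha, dim)
        return psi / np.linalg.norm(psi)

    a = np.diag(np.sqrt(np.arange(1, DIM)), k=1)         # truncated annihilation operator
    X = (a + a.conj().T) / 2                             # quadrature; vacuum variance is 1/4

    for phi in (0.0, np.pi / 2.0, np.pi):
        psi = cat_state(2.0, phi)
        mean_x = (psi.conj() @ X @ psi).real
        var_x = (psi.conj() @ X @ X @ psi).real - mean_x ** 2
        print(f"phi={phi:.2f}  var(X)={var_x:.4f}")      # compare against the 0.25 vacuum level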
Ultrafast creation of large Schrödinger cat states of an atom.
Johnson, K G; Wong-Campos, J D; Neyenhuis, B; Mizrahi, J; Monroe, C
2017-09-26
Mesoscopic quantum superpositions, or Schrödinger cat states, are widely studied for fundamental investigations of quantum measurement and decoherence as well as applications in sensing and quantum information science. The generation and maintenance of such states relies upon a balance between efficient external coherent control of the system and sufficient isolation from the environment. Here we create a variety of cat states of a single trapped atom's motion in a harmonic oscillator using ultrafast laser pulses. These pulses produce high fidelity impulsive forces that separate the atom into widely separated positions, without restrictions that typically limit the speed of the interaction or the size and complexity of the resulting motional superposition. This allows us to quickly generate and measure cat states larger than previously achieved in a harmonic oscillator, and create complex multi-component superposition states in atoms.Generation of mesoscopic quantum superpositions requires both reliable coherent control and isolation from the environment. Here, the authors succeed in creating a variety of cat states of a single trapped atom, mapping spin superpositions into spatial superpositions using ultrafast laser pulses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, Shigenari; Takeoka, Masahiro
2006-04-15
We present a simple protocol to purify a coherent-state superposition that has undergone a linear lossy channel. The scheme requires only a single beam splitter and a homodyne detector, and is thus experimentally feasible. In practice, a superposition of coherent states is transformed into a classical mixture of coherent states by linear loss, which is usually the dominant decoherence mechanism in optical systems. We also address the possibility of producing a larger-amplitude superposition state from decohered states, and show that in most cases the decoherence of the states is amplified along with the amplitude.
The principle of superposition and its application in ground-water hydraulics
Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.
1987-01-01
The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that problem solutions can be added together to obtain composite solutions. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader.
The principle of superposition and its application in ground-water hydraulics
Reilly, T.E.; Franke, O.L.; Bennett, G.D.
1984-01-01
The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that solutions to individual problems can be added together to obtain solutions to complex problems. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to groundwater hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader. (USGS)
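A concrete instance of the principle described in these reports: the drawdown at an observation point due to several pumping wells in an ideal confined aquifer is the sum of the individual Theis solutions. The sketch below uses illustrative parameter values, not figures from the reports.

    import numpy as np
    from scipy.special import exp1

    def theis_drawdown(Q, T, S, r, t):
        # Theis solution for one well in a confined aquifer:
        # Q pumping rate (m^3/d), T transmissivity (m^2/d), S storativity,
        # r distance to the well (m), t time since pumping began (d).
        u = r ** 2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)     # exp1 is the Theis well function W(u)

    # Superposition: total drawdown is the sum over the pumping wells (illustrative numbers).
    wells = [dict(Q=500.0, r=100.0), dict(Q=300.0, r=250.0)]
    s_total = sum(theis_drawdown(w["Q"], T=200.0, S=1e-4, r=w["r"], t=10.0) for w in wells)
    print(s_total)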
Path planning algorithms for assembly sequence planning. [in robot kinematics
NASA Technical Reports Server (NTRS)
Krishnan, S. S.; Sanderson, Arthur C.
1991-01-01
Planning for manipulation in complex environments often requires reasoning about the geometric and mechanical constraints which are posed by the task. In planning assembly operations, the automatic generation of operations sequences depends on the geometric feasibility of paths which permit parts to be joined into subassemblies. Feasible locations and collision-free paths must be present for part motions, robot and grasping motions, and fixtures. This paper describes an approach to reasoning about the feasibility of straight-line paths among three-dimensional polyhedral parts using an algebra of polyhedral cones. A second method recasts the feasibility conditions as constraints in a nonlinear optimization framework. Both algorithms have been implemented and results are presented.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng
Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier-based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp, Davis, and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features of both real- and Fourier-space iterative schemes: using a fast and algebraically exact method to calculate the forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.
Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei
2013-03-01
A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier-based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp, Davis, and Kress (FDK) method and the conventional ASSR approach. A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features of both real- and Fourier-space iterative schemes: using a fast and algebraically exact method to calculate the forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction
Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J.; Osher, Stanley J.; McNitt-Gray, Michael F.; Miao, Jianwei
2013-01-01
Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier-based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp, Davis, and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features of both real- and Fourier-space iterative schemes: using a fast and algebraically exact method to calculate the forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method. PMID:23464329
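The iteration structure can be sketched in a few lines: enforce the measured data in Fourier space, then apply physical constraints in real space. In the sketch below, an ordinary Cartesian FFT stands in for EST's algebraically exact pseudopolar FFT, and nonnegativity stands in for the full set of constraints and regularization; both substitutions are for illustration only.

    import numpy as np

    def est_like_recon(measured_F, mask, n_iter=200):
        # measured_F: complex array of Fourier samples; mask: True where a
        # sample was actually measured (the equally sloped lines in EST).
        img = np.zeros(measured_F.shape)
        for _ in range(n_iter):
            F = np.fft.fft2(img)
            F[mask] = measured_F[mask]      # Fourier space: enforce the measured data
            img = np.fft.ifft2(F).real
            img = np.clip(img, 0, None)     # real space: nonnegativity constraint
        return img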
Hidden Statistics Approach to Quantum Simulations
NASA Technical Reports Server (NTRS)
Zak, Michail
2010-01-01
Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations with these data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large masses of highly correlated data. Unfortunately, these advantages of quantum mechanics came at a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of the quantum potential (which has been overlooked in previous treatments of the Madelung equation). The role of the transitional potential is to provide a jump from a deterministic state to a random state with prescribed probability density. This jump is triggered by blowup instability due to violation of the Lipschitz condition generated by the quantum potential. As a result, the dynamics attains quantum properties on a classical scale. The model can be implemented physically as an analog VLSI-based (very-large-scale integration-based) computer, or numerically on a digital computer. This work opens a way of developing fundamentally new algorithms for quantum simulations of exponentially complex problems that expand NASA capabilities in conducting space activities. It has been illustrated that the complexity of simulations of particle interaction can be reduced from exponential to polynomial.
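For reference, the Madelung transformation ψ = √ρ e^{iS/ħ} recasts the Schroedinger equation as a continuity equation plus a Hamilton-Jacobi equation carrying the quantum potential Q; this is the standard form, and the paper's modification targets a transitional component of Q:

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho \, \frac{\nabla S}{m} \right) = 0,
\qquad
\frac{\partial S}{\partial t} + \frac{|\nabla S|^{2}}{2m} + V + Q = 0,
\qquad
Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}} .
```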
Image reconstruction from few-view CT data by gradient-domain dictionary learning.
Hu, Zhanli; Liu, Qiegen; Zhang, Na; Zhang, Yunwan; Peng, Xi; Wu, Peter Z; Zheng, Hairong; Liang, Dong
2016-05-21
Decreasing the number of projections is an effective way to reduce the radiation dose exposed to patients in medical computed tomography (CT) imaging. However, incomplete projection data for CT reconstruction will result in artifacts and distortions. In this paper, a novel dictionary learning algorithm operating in the gradient-domain (Grad-DL) is proposed for few-view CT reconstruction. Specifically, the dictionaries are trained from the horizontal and vertical gradient images, respectively and the desired image is reconstructed subsequently from the sparse representations of both gradients by solving the least-square method. Since the gradient images are sparser than the image itself, the proposed approach could lead to sparser representations than conventional DL methods in the image-domain, and thus a better reconstruction quality is achieved. To evaluate the proposed Grad-DL algorithm, both qualitative and quantitative studies were employed through computer simulations as well as real data experiments on fan-beam and cone-beam geometry. The results show that the proposed algorithm can yield better images than the existing algorithms.
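As an illustration of the final reconstruction step described above (not the authors' implementation), recovering an image from its two denoised gradient fields is a linear least-squares problem with a closed-form FFT solution; the dictionary-learning stage that produces the sparse gradient estimates is omitted here:

```python
import numpy as np

def image_from_gradients(gx, gy):
    """Least-squares recovery of an image from (denoised) gradient fields.

    Solves min_u ||D_x u - gx||^2 + ||D_y u - gy||^2 in the Fourier domain,
    using periodic forward differences. The DC (mean) term is not determined
    by the gradients and is set to zero here.
    """
    ny, nx = gx.shape
    fx = np.exp(2j * np.pi * np.fft.fftfreq(nx))[None, :] - 1.0  # D_x eigenvalues
    fy = np.exp(2j * np.pi * np.fft.fftfreq(ny))[:, None] - 1.0  # D_y eigenvalues
    denom = np.abs(fx) ** 2 + np.abs(fy) ** 2
    denom[0, 0] = 1.0  # avoid division by zero at the DC frequency
    numer = np.conj(fx) * np.fft.fft2(gx) + np.conj(fy) * np.fft.fft2(gy)
    u_hat = numer / denom
    u_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(u_hat))
```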
Teleportation of Unknown Superpositions of Collective Atomic Coherent States
NASA Astrophysics Data System (ADS)
Zheng, Shi-Biao
2001-06-01
We propose a scheme to teleport an unknown superposition of two atomic coherent states with different phases. Our scheme is based on resonant and dispersive atom-field interaction. Our scheme provides a possibility of teleporting macroscopic superposition states of many atoms first time. The project supported by National Natural Science Foundation of China under Grant No. 60008003
Student Ability to Distinguish between Superposition States and Mixed States in Quantum Mechanics
ERIC Educational Resources Information Center
Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.
2015-01-01
Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the…
Nonclassical Properties of Q-Deformed Superposition Light Field State
NASA Technical Reports Server (NTRS)
Ren, Min; Shenggui, Wang; Ma, Aiqun; Jiang, Zhuohong
1996-01-01
In this paper, the squeezing effect, the bunching effect and the anti-bunching effect of the superposition light field state which involving q-deformation vacuum state and q-Glauber coherent state are studied, the controllable q-parameter of the squeezing effect, the bunching effect and the anti-bunching effect of q-deformed superposition light field state are obtained.
Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong
2017-03-01
Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
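A minimal sketch of the winning configuration reported here (constriction-coefficient PSO with a ring topology) applied to range-based node localization; the objective, parameter values, and helper names are illustrative rather than taken from the paper:

```python
import numpy as np

def pso_localize(anchors, dists, n_particles=30, n_iter=200, seed=0):
    """PSO node localization sketch: constriction coefficient + ring topology.

    anchors: (m, 2) known anchor positions; dists: (m,) measured ranges
    from the anchors to the unknown node.
    """
    rng = np.random.default_rng(seed)
    chi, c1, c2 = 0.7298, 2.05, 2.05            # Clerc constriction parameters
    lo, hi = anchors.min(0) - 10, anchors.max(0) + 10
    x = rng.uniform(lo, hi, (n_particles, 2))   # particle positions
    v = np.zeros_like(x)

    def cost(p):                                 # squared range-error objective
        return np.sum((np.linalg.norm(p - anchors, axis=1) - dists) ** 2)

    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    for _ in range(n_iter):
        # Ring (lbest) topology: each particle learns from its two neighbours.
        lbest = np.empty_like(x)
        for i in range(n_particles):
            nbrs = [(i - 1) % n_particles, i, (i + 1) % n_particles]
            lbest[i] = pbest[nbrs[np.argmin(pcost[nbrs])]]
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (lbest - x))
        x = x + v
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
    return pbest[np.argmin(pcost)]
```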
Dosimetry audit simulation of treatment planning system in multicenters radiotherapy
NASA Astrophysics Data System (ADS)
Kasmuri, S.; Pawiro, S. A.
2017-07-01
Treatment Planning System (TPS) is an important modality that determines radiotherapy outcome. A TPS requires input data obtained through commissioning, and errors can potentially occur at this stage; errors in commissioning may result in systematic errors. The aim of this study was to verify TPS dosimetry and determine the deviation range between calculated and measured dose. This study used the CIRS phantom 002LFC representing the human thorax and simulated all external beam radiotherapy stages. The phantom was scanned using a CT scanner, and 8 test cases similar to clinical practice situations were planned and tested in four radiotherapy centers. Dose measurements were performed using a 0.6 cc ionization chamber. The results of this study showed that, generally, the deviation of all test cases in the four centers was within the agreement criteria, with average deviations of about -0.17±1.59%, -1.64±1.92%, 0.34±1.34% and 0.13±1.81%. The conclusion of this study was that all TPSs involved showed good performance. The superposition algorithm showed poorer performance than either the analytic anisotropic algorithm (AAA) or the convolution algorithm, with average deviations of about -1.64±1.92%, -0.17±1.59% and -0.27±1.51%, respectively.
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Then larger values of this function will have a higher probability of appearing. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the traveling salesman problem (TSP).
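The following toy sketch illustrates the stated idea with purely classical means: treat the positive objective as an unnormalized probability density, sample from it, and keep the best visited point. A Metropolis random walk stands in for the quantum-classical hybrid dynamics, so this captures only the probabilistic principle, not the polynomial-time claim:

```python
import numpy as np

def qim_like_maximize(f, x0, n_samples=20000, step=0.5, seed=0):
    """Sample from a density proportional to the positive objective f, so
    high-f regions are visited most often; the best visited point estimates
    the global maximum. f must be positive at x0 (and ideally everywhere)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = f(x)
    best, fbest = x.copy(), fx
    for _ in range(n_samples):
        cand = x + step * rng.standard_normal(x.shape)
        fc = f(cand)
        # Metropolis acceptance for target density proportional to f.
        if fc > 0 and rng.random() < min(1.0, fc / fx):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = x.copy(), fx
    return best, fbest
```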
Trackside acoustic diagnosis of axle box bearing based on kurtosis-optimization wavelet denoising
NASA Astrophysics Data System (ADS)
Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai
2018-04-01
As one of the key components of railway vehicles, the operating condition of the axle box bearing has a significant effect on traffic safety. Acoustic diagnosis is more suitable than vibration diagnosis for trackside monitoring. The acoustic signal generated by the train axle box bearing is an amplitude-modulated and frequency-modulated signal mixed with complex train running noise. Although empirical mode decomposition (EMD) and some improved time-frequency algorithms have proved useful in bearing vibration signal processing, it is hard to extract the bearing fault signal from serious trackside acoustic background noise using those algorithms. Therefore, a kurtosis-optimization-based wavelet packet (KWP) denoising algorithm is proposed, as kurtosis is the key indicator of the bearing fault signal in the time domain. Firstly, geometry-based Doppler correction is applied to the signals of each sensor, and with the signal superposition of multiple sensors, random noise and impulse noise, which interfere with the kurtosis indicator, are suppressed. Then, the KWP denoising is conducted. Finally, EMD and the Hilbert transform are applied to extract the fault feature. Experimental results indicate that the proposed method consisting of KWP and EMD is superior to EMD alone.
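A minimal sketch of the kurtosis-optimized wavelet-packet selection using PyWavelets; the wavelet, decomposition level, and number of retained nodes are illustrative assumptions rather than the paper's settings:

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def kwp_denoise(signal, wavelet="db4", level=4, keep=3):
    """Keep only the `keep` terminal wavelet-packet nodes with the highest
    kurtosis (impulsive, fault-like content), zero the rest, and reconstruct."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    scores = [kurtosis(n.data) for n in nodes]
    selected = set(np.argsort(scores)[-keep:])
    out = pywt.WaveletPacket(data=None, wavelet=wavelet, maxlevel=level)
    for i, n in enumerate(nodes):
        out[n.path] = n.data if i in selected else np.zeros_like(n.data)
    return out.reconstruct(update=False)
```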
Liu, Wenbin; Liu, Aimin
2018-01-01
With the exploitation of offshore oil and gas gradually moving to deep water, higher temperature differences and pressure differences are applied to the pipeline system, making global buckling of the pipeline more serious. For unburied deep-water pipelines, lateral buckling is the major buckling form. Initial imperfections widely exist in pipeline systems due to manufacturing defects or the influence of an uneven seabed, and the distribution and geometric features of initial imperfections are random. They can be divided into two kinds based on shape: single-arch imperfections and double-arch imperfections. This paper analyzed the global buckling process of a pipeline with 2 initial imperfections using a numerical simulation method and revealed how the ratio of the initial imperfections' spacing to the imperfection wavelength and the combination of imperfections affect the buckling process. The results show that a pipeline with 2 initial imperfections may suffer superposition of global buckling. The growth ratios of buckling displacement, axial force and bending moment in the superposition zone are several times larger than those of a pipeline without buckling superposition. The ratio of the initial imperfections' spacing to the imperfection wavelength decides whether a pipeline suffers buckling superposition. The potential failure point of a pipeline exhibiting buckling superposition is the same as that of a pipeline without buckling superposition, but the failure risk of a pipeline exhibiting buckling superposition is much higher. The shape and direction of two nearby imperfections also affect the failure risk of a pipeline exhibiting global buckling superposition. The failure risk of a pipeline with two double-arch imperfections is higher than that of a pipeline with two single-arch imperfections. PMID:29554123
Zhang, Qiuxiang; Lu, Rongwen; Wang, Benquan; Messinger, Jeffrey D.; Curcio, Christine A.; Yao, Xincheng
2015-01-01
Transient intrinsic optical signal (IOS) changes have been observed in retinal photoreceptors, suggesting a unique biomarker for eye disease detection. However, clinical deployment of IOS imaging is challenging due to unclear IOS sources and limited signal-to-noise ratios (SNRs). Here, by developing high spatiotemporal resolution optical coherence tomography (OCT) and applying an adaptive algorithm for IOS processing, we were able to record robust IOSs from single-pass measurements. Transient IOSs, which might reflect an early stage of light phototransduction, are consistently observed in the photoreceptor outer segment almost immediately (<4 ms) after retinal stimulation. Comparative studies of dark- and light-adapted retinas have demonstrated the feasibility of functional OCT mapping of rod and cone photoreceptors, promising a new method for early disease detection and improved treatment of diseases such as age-related macular degeneration (AMD) and other eye diseases that can cause photoreceptor damage. PMID:25901915
Static terrestrial laser scanning of juvenile understory trees for field phenotyping
NASA Astrophysics Data System (ADS)
Wang, Huanhuan; Lin, Yi
2014-11-01
This study attempted to apply the cutting-edge 3D remote sensing technique of static terrestrial laser scanning (TLS) to parametric 3D reconstruction of juvenile understory trees. The test data were collected with a Leica HDS6100 TLS system in single-scan mode. The geometrical structures of the juvenile understory trees are extracted by model fitting. Cones are used to model trunks and branches. Principal component analysis (PCA) is adopted to calculate their major axes. Coordinate transformation and orthogonal projection are used to estimate the parameters of the cones. Then, AutoCAD is utilized to simulate the morphological characteristics of the understory trees, and to add secondary branches and leaves in a random way. Comparison of the reference values and the estimated values gives the regression equation and shows that the proposed parameter-extraction algorithm is credible. The results have basically verified the applicability of TLS for field phenotyping of juvenile understory trees.
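The PCA-based axis estimation and cone-parameter fit described above can be sketched as follows; the radius-versus-height linear fit is an illustrative way to obtain the opening angle, under the assumption of a roughly conical point cluster:

```python
import numpy as np

def fit_cone_axis(points):
    """Estimate a cone axis and half-angle for a trunk/branch point cluster.

    points: (n, 3) array of laser returns belonging to one trunk or branch.
    The major axis is taken as the first principal component; projecting
    points onto it gives heights, and radial distances give the taper."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # First principal component = direction of largest variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    # Height along the axis and radial distance from it, per point.
    h = centered @ axis
    radial = np.linalg.norm(centered - np.outer(h, axis), axis=1)
    slope, intercept = np.polyfit(h, radial, 1)   # radius(h) ~ slope*h + b
    half_angle = np.arctan(abs(slope))            # cone half-angle estimate
    return centroid, axis, half_angle
```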
Mehdizadeh, Mojdeh; Ahmadi, Navid; Jamshidi, Mahsa
2014-11-01
The exact location of the inferior alveolar nerve (IAN) bundle is very important. The aim of this study is to evaluate the relationship between the mandibular third molar and the mandibular canal by cone-beam computed tomography (CBCT). This was a cross-sectional study with convenience sampling. 94 mandibular CBCT scans performed with a CSANEX 3D machine (Soredex, Finland) were chosen. The vertical and horizontal relationships between the mandibular canal and the third molar were depicted in the 3D view, the panoramic reformat view of CBCT, and the cross-sectional view. The cross-sectional view was our gold standard, and the other views were evaluated against it. There were significant differences between the vertical and horizontal relations of nerve and tooth in all views (p < 0.001). The results showed differences in the position of the inferior alveolar nerve among different CBCT views, so CBCT images are not entirely reliable and carry a possibility of error.
Endohedral Metallofullerene as Molecular High Spin Qubit: Diverse Rabi Cycles in Gd2@C79N.
Hu, Ziqi; Dong, Bo-Wei; Liu, Zheng; Liu, Jun-Jie; Su, Jie; Yu, Changcheng; Xiong, Jin; Shi, Di-Er; Wang, Yuanyuan; Wang, Bing-Wu; Ardavan, Arzhang; Shi, Zujin; Jiang, Shang-Da; Gao, Song
2018-01-24
An anisotropic high-spin qubit with a long coherence time could scale the quantum system up. It has been proposed that Grover's algorithm can be implemented in such systems. Dimetallic aza[80]fullerenes M2@C79N (M = Y or Gd) possess an unpaired electron located between two metal ions, offering an opportunity to manipulate spin(s) protected in the cage for quantum information processing. Herein, we report the crystallographic determination of Gd2@C79N for the first time. This molecular magnet with a collective high-spin ground state (S = 15/2) generated by strong magnetic coupling (J(Gd-Rad) = 350 ± 20 cm^-1) has been unambiguously validated by magnetic susceptibility experiments. Gd2@C79N has quantum coherence and diverse Rabi cycles, allowing arbitrary superposition-state manipulation between each pair of adjacent levels. The phase memory time reaches 5 μs at 5 K by dynamic decoupling. This molecule fulfills the requirements of Grover's searching algorithm proposed by Leuenberger and Loss.
On the Use of Material-Dependent Damping in ANSYS for Mode Superposition Transient Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nie, J.; Wei, X.
The mode superposition method is often used for dynamic analysis of complex structures, such as the seismic Category I structures in nuclear power plants, in place of the less efficient full method, which uses the full system matrices for calculation of the transient responses. In such applications, specification of material-dependent damping is usually desirable because complex structures can consist of multiple types of materials that may have different energy dissipation capabilities. A recent review of the ANSYS manual for several releases found that the use of material-dependent damping is not clearly explained for performing a mode superposition transient dynamic analysis. This paper includes several mode superposition transient dynamic analyses using different ways to specify damping in ANSYS, in order to determine how material-dependent damping can be specified conveniently in a mode superposition transient dynamic analysis.
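For context, one common way mode-superposition codes fold material-dependent damping into per-mode damping ratios is strain-energy weighting over the materials; this is a generic composite-damping formula, not necessarily the exact rule ANSYS applies:

```latex
\xi_j \;=\; \frac{\sum_m \xi_m \,\phi_j^{T} K_m \,\phi_j}{\phi_j^{T} K \,\phi_j}
```

where φ_j is the j-th mode shape, K_m the stiffness matrix contribution of material m, K the total stiffness matrix, and ξ_m the damping ratio assigned to material m.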
Testing the quantum superposition principle: matter waves and beyond
NASA Astrophysics Data System (ADS)
Ulbricht, Hendrik
2015-05-01
New technological developments allow us to explore the quantum properties of very complex systems, bringing the question of whether macroscopic systems also share such features within experimental reach. Interest in this question is increased by the fact that, on the theory side, many suggest that the quantum superposition principle is not exact, with departures from it growing larger the more macroscopic the system. Testing the superposition principle intrinsically also means testing suggested extensions of quantum theory, so-called collapse models. We will report on three new proposals to experimentally test the superposition principle with nanoparticle interferometry, optomechanical devices, and spectroscopic experiments in the frequency domain. We will also report on the status of optical levitation and cooling experiments with nanoparticles in our labs, towards an Earth-bound matter-wave interferometer to test the superposition principle for a particle mass of one million amu (atomic mass units).
The Use of High Performance Computing (HPC) to Strengthen the Development of Army Systems
2011-11-01
accurately predicting the supersonic Magnus effect about spinning cones, ogive-cylinders, and boat-tailed afterbodies. This work led to the successful...successful computer model of the proposed product or system, one can then build prototypes on the computer and study the effects on the performance of...needed. The NRC report discusses the requirements for effective use of such computing power. One needs "models, algorithms, software, hardware
Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography
NASA Astrophysics Data System (ADS)
Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G.; Liu, Chihray; Lu, Bo
2015-12-01
Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS based algorithms with limited projections. We named this algorithm 'the common mask guided image reconstruction' (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes were then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and the 'well'-solved static volume respectively, the algorithm was able to reduce the noise and under-sampling artifact (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The qualities of the images reconstructed with c-MGIR were compared with (1) the standard FDK algorithm, (2) a conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm, respectively. To improve the efficiency of the algorithm, the code was implemented with a graphics processing unit for parallel processing purposes. Root mean square errors (RMSE) between the ground truth and reconstructed volumes of the numerical phantom were in descending order of FDK, CTV, PICCS, MCIR, and c-MGIR for all phases. Specifically, the means and standard deviations of the RMSE of FDK, CTV, PICCS, MCIR and c-MGIR for all phases were 42.64 ± 6.5%, 3.63 ± 0.83%, 1.31% ± 0.09%, 0.86% ± 0.11% and 0.52% ± 0.02%, respectively. The image quality of the patient case also indicated the superiority of c-MGIR compared to other algorithms. The results indicated that clinically viable 4D CBCT images can be reconstructed while requiring no more projection data than a typical clinical 3D CBCT scan. This makes c-MGIR a potential online reconstruction algorithm for 4D CBCT, which can provide much better image quality than other available algorithms, while requiring less dose and potentially less scanning time.
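The alternating update structure described in the abstract can be sketched as follows; the projection operators, step sizes, and stopping rule are illustrative stand-ins, and the regularization of the actual algorithm is omitted:

```python
import numpy as np

def cmgir_like(phases_init, forward, back, projections, mask, n_iter=20):
    """Structural sketch of the c-MGIR decomposition: each phase volume is
    modelled as a shared static volume plus a phase-specific moving part,
    updated alternately. `forward`/`back` stand in for the phase-binned
    projection and backprojection operators; `mask` is the common mask
    (1 = possibly moving voxel, 0 = static voxel)."""
    static = np.mean(phases_init, axis=0) * (1 - mask)
    moving = [p * mask for p in phases_init]
    for _ in range(n_iter):
        # Static update: every phase's projections see the static part,
        # so all projection data contribute to this sub-problem.
        residual = sum(back(projections[k] - forward(static + moving[k]))
                       for k in range(len(moving)))
        static += 0.1 * (1 - mask) * residual / len(moving)
        # Moving update: each phase uses only its own (sparse) projections.
        for k in range(len(moving)):
            r_k = back(projections[k] - forward(static + moving[k]))
            moving[k] += 0.1 * mask * r_k
    return [static + m for m in moving]
```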
Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron
2008-01-01
In this paper, we present enhancements to the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum-fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second-order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. In particular, this paper discusses the algorithmic improvements obtained by: (i) using an efficient approach to choose the optimal time-of-flight; (ii) using a computationally inexpensive way to detect the feasibility/infeasibility of the problem due to the thrust-to-weight constraint; (iii) incorporating the rotation rate of the planet into the problem formulation; (iv) developing additional constraints on the position and velocity to guarantee no subsurface flight between the time samples of the temporal discretization; (v) developing a fuel-limited targeting algorithm; (vi) initial results on developing an onboard table lookup method to obtain almost fuel-optimal solutions in real time.
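A minimal sketch of the convexified powered-descent problem in CVXPY, using a constant-mass double integrator with a fixed time of flight; the boundary conditions and thrust bounds are illustrative numbers, and mass depletion, pointing constraints, and the time-of-flight search discussed above are omitted:

```python
import cvxpy as cp
import numpy as np

# Fuel-optimal powered descent as an SOCP sketch. The thrust magnitude is
# bounded via the slack variable `gamma`, following the convexification
# idea of replacing the nonconvex lower thrust bound with a relaxed cone.
N, dt = 50, 1.0
g = np.array([0.0, 0.0, -3.71])                    # Mars gravity, m/s^2
m = 1000.0                                         # constant mass (simplification)
r = cp.Variable((3, N + 1))                        # position
v = cp.Variable((3, N + 1))                        # velocity
T = cp.Variable((3, N))                            # thrust vector
gamma = cp.Variable(N)                             # thrust-magnitude slack
cons = [r[:, 0] == np.array([1500.0, 500.0, 2000.0]),
        v[:, 0] == np.array([-30.0, 10.0, -75.0]),
        r[:, N] == 0, v[:, N] == 0,
        r[2, :] >= 0]                              # no subsurface flight
for k in range(N):
    cons += [v[:, k + 1] == v[:, k] + dt * (T[:, k] / m + g),
             r[:, k + 1] == r[:, k] + dt * v[:, k],
             cp.norm(T[:, k]) <= gamma[k],         # second-order cone constraint
             gamma[k] >= 4000, gamma[k] <= 12000]  # illustrative thrust bounds
prob = cp.Problem(cp.Minimize(cp.sum(gamma) * dt), cons)
prob.solve()
```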
Quantum state engineering by a coherent superposition of photon subtraction and addition
NASA Astrophysics Data System (ADS)
Lee, Su-Yong; Nha, Hyunchul
2011-10-01
We study a coherent superposition tâ + râ† of the field annihilation and creation operators acting on continuous-variable systems and propose its application for quantum state engineering. We propose an experimental scheme to implement this elementary coherent operation and discuss its usefulness for producing an arbitrary superposition of number states involving up to two photons.
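In a truncated Fock basis the elementary operation is a small matrix, which makes the resulting number-state superposition easy to verify numerically; this sketch is purely illustrative and is not the proposed experimental scheme:

```python
import numpy as np

def coherent_ta_radag(t, r, dim=10):
    """Build t*a + r*a_dagger in a truncated Fock basis of dimension `dim`."""
    n = np.arange(1, dim)
    a = np.diag(np.sqrt(n), k=1)        # annihilation operator matrix
    adag = a.conj().T                   # creation operator matrix
    return t * a + r * adag

# Example: acting on |1> yields t|0> + r*sqrt(2)|2>, a superposition of
# number states with up to two photons, as described in the abstract.
op = coherent_ta_radag(t=0.6, r=0.8)
ket1 = np.zeros(10)
ket1[1] = 1.0
out = op @ ket1
out /= np.linalg.norm(out)              # renormalize (the operation is non-unitary)
```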
NASA Astrophysics Data System (ADS)
Xiao, Xiazi; Yu, Long
2018-05-01
Linear and square superposition hardening models are compared for the surface nanoindentation of ion-irradiated materials. Hardening mechanisms of both dislocations and defects within the plasticity-affected region (PAR) are considered. Four sets of experimental data for ion-irradiated materials are adopted for comparison with the theoretical results of the two hardening models. The results indicate that both models describe the experimental data equally well when the PAR is within the irradiated layer, whereas, when the PAR extends beyond the irradiated region, the square superposition hardening model performs better. Therefore, the square superposition model is recommended to characterize the hardening behavior of ion-irradiated materials.
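In hedged notation of my own (ΔH_dis for the dislocation contribution and ΔH_irr for the irradiation-defect contribution to the hardness increase), the two superposition rules being compared take the forms:

```latex
\Delta H_{\mathrm{linear}} \;=\; \Delta H_{\mathrm{dis}} + \Delta H_{\mathrm{irr}},
\qquad
\Delta H_{\mathrm{square}} \;=\; \sqrt{\Delta H_{\mathrm{dis}}^{2} + \Delta H_{\mathrm{irr}}^{2}} .
```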
NASA Astrophysics Data System (ADS)
Handlos, Zachary J.
Though considerable research attention has been devoted to examination of the Northern Hemispheric polar and subtropical jet streams, relatively little has been directed toward understanding the circumstances that conspire to produce the relatively rare vertical superposition of these usually separate features. This dissertation investigates the structure and evolution of large-scale environments associated with jet superposition events in the northwest Pacific. An objective identification scheme, using NCEP/NCAR Reanalysis 1 data, is employed to identify all jet superpositions in the west Pacific (30-40°N, 135-175°E) for boreal winters (DJF) between 1979/80 - 2009/10. The analysis reveals that environments conducive to west Pacific jet superposition share several large-scale features usually associated with East Asian Winter Monsoon (EAWM) northerly cold surges, including the presence of an enhanced Hadley Cell-like circulation within the jet entrance region. It is further demonstrated that several EAWM indices are statistically significantly correlated with jet superposition frequency in the west Pacific. The life cycle of EAWM cold surges promotes interaction between tropical convection and internal jet dynamics. Low potential vorticity (PV), high theta e tropical boundary layer air, exhausted by anomalous convection in the west Pacific lower latitudes, is advected poleward towards the equatorward side of the jet in upper tropospheric isentropic layers resulting in anomalous anticyclonic wind shear that accelerates the jet. This, along with geostrophic cold air advection in the left jet entrance region that drives the polar tropopause downward through the jet core, promotes the development of the deep, vertical PV wall characteristic of superposed jets. West Pacific jet superpositions preferentially form within an environment favoring the aforementioned characteristics regardless of EAWM seasonal strength. Post-superposition, it is shown that the west Pacific jet extends eastward and is associated with an upper tropospheric cyclonic (anticyclonic) anomaly in its left (right) exit region. A downstream ridge is present over northwest Canada, and within the strong EAWM environment, a wavier flow over North America is observed relative to the neutral EAWM environment. Preliminary investigation of the two weak EAWM season superpositions reveals a Kona Low type feature post-superposition. This is associated with anomalous convection reminiscent of an atmospheric river southwest of Mexico.
Data compressive paradigm for multispectral sensing using tunable DWELL mid-infrared detectors.
Jang, Woo-Yong; Hayat, Majeed M; Godoy, Sebastián E; Bender, Steven C; Zarkesh-Ha, Payman; Krishna, Sanjay
2011-09-26
While quantum dots-in-a-well (DWELL) infrared photodetectors have the feature that their spectral responses can be shifted continuously by varying the applied bias, the width of the spectral response at any applied bias is not sufficiently narrow for use in multispectral sensing without the aid of spectral filters. To achieve higher spectral resolutions without using physical spectral filters, algorithms have been developed for post-processing the DWELL's bias-dependent photocurrents resulting from probing an object of interest repeatedly over a wide range of applied biases. At the heart of these algorithms is the ability to approximate an arbitrary spectral filter, which we desire the DWELL-algorithm combination to mimic, by forming a weighted superposition of the DWELL's non-orthogonal spectral responses over a range of applied biases. However, these algorithms assume availability of abundant DWELL data over a large number of applied biases (>30), leading to large overall acquisition times in proportion with the number of biases. This paper reports a new multispectral sensing algorithm to substantially compress the number of necessary bias values subject to a prescribed performance level across multiple sensing applications. The algorithm identifies a minimal set of biases to be used in sensing only the relevant spectral information for remote-sensing applications of interest. Experimental results on target spectrometry and classification demonstrate a reduction in the number of required biases by a factor of 7 (e.g., from 30 to 4). The tradeoff between performance and bias compression is thoroughly investigated. © 2011 Optical Society of America
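The filter-approximation step described above amounts to a linear least-squares fit, and bias compression can then be posed as choosing a small sub-collection of response columns; the greedy pruning below is an illustrative selection strategy, not necessarily the paper's algorithm:

```python
import numpy as np

def approximate_filter(responses, target):
    """Least-squares weights making a superposition of bias-dependent
    spectral responses mimic a desired filter. responses: array of shape
    (n_wavelengths, n_biases); target: (n_wavelengths,)."""
    weights, *_ = np.linalg.lstsq(responses, target, rcond=None)
    return weights

def prune_biases(responses, target, n_keep):
    """Illustrative bias compression: greedily drop the bias whose removal
    least degrades the filter approximation, until n_keep biases remain."""
    keep = list(range(responses.shape[1]))
    while len(keep) > n_keep:
        errs = []
        for j in keep:
            sub = [i for i in keep if i != j]
            w = approximate_filter(responses[:, sub], target)
            errs.append(np.linalg.norm(responses[:, sub] @ w - target))
        keep.remove(keep[int(np.argmin(errs))])  # drop the least harmful bias
    return keep
```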
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim
Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.
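The underlying projection-domain operation can be sketched in a few lines; the detection of metal-shadowed pixels, handled upstream in practice, is assumed given here:

```python
import numpy as np

def interpolate_metal_trace(projection_row, metal_mask):
    """Interpolation-substitution sketch: detector pixels flagged as shadowed
    by metal are replaced by linear interpolation from their unshadowed
    neighbours, one detector row at a time."""
    x = np.arange(projection_row.size)
    good = ~metal_mask
    corrected = projection_row.copy()
    corrected[metal_mask] = np.interp(x[metal_mask], x[good],
                                      projection_row[good])
    return corrected
```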
SU-F-T-74: Experimental Validation of Monaco Electron Monte Carlo Dose Calculation for Small Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varadhan; Way, S; Arentsen, L
2016-06-15
Purpose: To verify experimentally the accuracy of the Monaco (Elekta) electron Monte Carlo (eMC) algorithm in calculating small-field-size depth doses, monitor units and isodose distributions. Methods: Beam modeling of the eMC algorithm was performed for electron energies of 6, 9, 12, 15 and 18 MeV for an Elekta Infinity linac and all available (6, 10, 14, 20 and 25) cone applicator sizes. Electron cutouts of incrementally smaller field sizes (20, 40, 60 and 80% blocked from the open cone) were fabricated. Dose calculation was performed using a grid size smaller than one-tenth of the R80-20 electron distal falloff distance, and the number of particle histories was set at 500,000 per cm^2. Percent depth dose scans and beam profiles at dmax, d90 and d80 depths were measured for each cutout and energy with a Wellhoffer (IBA) Blue Phantom^2 scanning system and compared against eMC-calculated doses. Results: The measured doses and output factors of incrementally reduced cutout sizes (down to 3 cm diameter) agreed with eMC-calculated doses within ±2.5%. The profile comparisons at dmax, d90 and d80 depths and percent depth doses at reduced field sizes agreed within 2.5% or 2 mm. Conclusion: Our results indicate that the Monaco eMC algorithm can accurately predict depth doses, isodose distributions, and monitor units in a homogeneous water phantom for field sizes as small as 3.0 cm diameter for energies in the 6 to 18 MeV range at 100 cm SSD. Consequently, the old rule of thumb that approximates the limiting cutout size for an electron field by the lateral scatter equilibrium condition (E (MeV)/2.5 in centimeters of water) does not apply to the Monaco eMC algorithm.
Ultra-high resolution computed tomography imaging
Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.
2002-01-01
A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
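As a sketch of the projection-correction step, a Wiener filter is used below as a stand-in for whatever regularized deconvolution the method employs with its experimentally determined transfer function:

```python
import numpy as np

def wiener_deconvolve(projection, psf, noise_ratio=1e-2):
    """Deconvolve one 2D projection with a measured point-spread function.

    Assumes the PSF array is origin-centred at index (0, 0); the Wiener
    regularization constant `noise_ratio` is an illustrative choice."""
    h = np.fft.fft2(psf, s=projection.shape)   # transfer function spectrum
    g = np.fft.fft2(projection)
    wiener = np.conj(h) / (np.abs(h) ** 2 + noise_ratio)
    return np.real(np.fft.ifft2(wiener * g))
```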
A simple extension of Roe's scheme for real gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arabi, Sina, E-mail: sina.arabi@polymtl.ca; Trépanier, Jean-Yves; Camarero, Ricardo
The purpose of this paper is to develop a highly accurate numerical algorithm to model real gas flows in local thermodynamic equilibrium (LTE). The Euler equations are solved using a finite volume method based on Roe's flux-difference splitting scheme including real gas effects. A novel algorithm is proposed to calculate the Jacobian matrix which satisfies the flux-difference splitting exactly in the average state for a general equation of state. This algorithm increases the robustness and accuracy of the method, especially around contact discontinuities and shock waves where the gas properties jump appreciably. The results are compared with an exact solution of the Riemann problem for the shock tube which considers the real gas effects. In addition, the method is applied to a blunt cone to illustrate the capability of the proposed extension in solving two-dimensional flows.
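For context, Roe's flux-difference splitting evaluates the interface flux from the left and right states as below; the paper's contribution is an algorithm for constructing the Roe-averaged Jacobian à that satisfies the flux-difference property exactly for a general equation of state:

```latex
\mathbf{F}_{i+1/2} \;=\; \tfrac{1}{2}\left(\mathbf{F}_L + \mathbf{F}_R\right)
\;-\; \tfrac{1}{2}\,\bigl|\tilde{A}\bigr|\left(\mathbf{U}_R - \mathbf{U}_L\right),
\qquad
\tilde{A}\left(\mathbf{U}_R - \mathbf{U}_L\right) \;=\; \mathbf{F}_R - \mathbf{F}_L .
```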
NASA Astrophysics Data System (ADS)
Sohn, Y.
2011-12-01
Recent studies show that the architecture of hydromagmatic volcanoes is far more complex than formerly expected. A number of external factors, such as paleohydrology and tectonics, in addition to magmatic processes are thought to play a role in controlling the overall characteristics and architecture of these volcanoes. One of the main consequences of these controls is the migration of the active vent during eruption. Case studies of hydromagmatic volcanoes in Korea show that those volcanoes that have undergone vent migration are characterized by superposition or juxtaposition of multiple rim deposits of partial tuff rings and/or tuff cones that have contrasting lithofacies characteristics, bed attitudes, and paleoflow directions. Various causes of vent migration are inferred from these volcanoes. Large-scale collapse of fragile substrate is interpreted to have caused vent migration in the Early Pleistocene volcanoes of Jeju Island, which were built upon still unconsolidated continental shelf sediments. Late Pleistocene to Holocene volcanoes, which were built upon a stack of rigid, shield-forming lava flows, lack features due to large-scale substrate collapse and have generally simple and circular morphologies either of a tuff ring or of a tuff cone. However, ~600 m shift of the eruptive center is inferred from one of these volcanoes (Ilchulbong tuff cone). The vent migration in this volcano is interpreted to have occurred because the eruption was sourced by multiple magma batches with significant eruptive pauses in between. The Yangpori diatreme in a Miocene terrestrial half-graben basin in SE Korea is interpreted to be a subsurface equivalent of a hydromagmatic volcano that has undergone vent migration. The vent migration here is inferred to have had both vertical and lateral components and have been caused by an abrupt tectonic activity near the basin margin. In all these cases, rimbeds or diatreme fills derived from different source vents are bounded by either prominent or subtle, commonly laterally extensive truncation surfaces or stratigraphic discontinuities. Careful documentation of these surfaces and discontinuities thus appears vital to proper interpretation of eruption history, morphologic evolution, and even deep-seated magmatic processes of a hydromagmatic volcano. In this respect, the technique known as 'allostratigraphy' appears useful in mapping, correlation, and interpretation of many hydrovolcanic edifices and sequences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hackett, S; Asselen, B van; Wolthaus, J
2016-06-15
Purpose: Treatment plans for the MR-linac, calculated in Monaco v5.19, include direct simulation of the effects of the 1.5 T B0-field. We tested the feasibility of using a collapsed-cone (CC) algorithm in Oncentra, which does not account for effects of the B0-field, as a fast online, independent 3D check of dose calculations. Methods: Treatment plans for six patients were generated in Monaco with a 6 MV FFF beam and the B0-field. All plans were recalculated with a CC model of the same beam. Plans for the same patients were also generated in Monaco without the B0-field. The mean dose (Dmean) and doses to 10% (D10%) and 90% (D90%) of the volume were determined, as percentages of the prescribed dose, for target volumes and OARs in each calculated dose distribution. Student's t-tests between paired parameters from Monaco plans and corresponding CC calculations were performed. Results: Figure 1 shows an example of the difference between dose distributions calculated in Monaco, with the B0-field, and with the CC algorithm. Figure 2 shows distributions of (absolute) differences between parameters for Monaco plans, with the B0-field, and CC calculations. The Dmean and D90% values for the CTVs and PTVs were significantly different, but differences in dose distributions arose predominantly at the edges of the target volumes. Inclusion of the B0-field had little effect on agreement of the Dmean values, as illustrated by Figure 3, nor on agreement of the D10% and D90% values. Conclusion: Dose distributions recalculated with a CC algorithm show good agreement with those calculated with Monaco, for plans both with and without the B0-field, indicating that the CC algorithm could be used to check online treatment planning for the MR-linac. Agreement for a wider range of treatment sites, and the feasibility of using the γ-test as a simple pass/fail criterion, will be investigated.
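The plan parameters compared here are simple dose-volume statistics; a sketch of extracting them from a dose grid and a structure mask follows (an illustrative helper, not part of either planning system):

```python
import numpy as np

def dvh_parameters(dose, mask, prescription):
    """Return Dmean, D10%, and D90% as percentages of the prescription.

    dose: 3D dose grid; mask: boolean array selecting the structure's voxels.
    D10% (D90%) is the dose received by at least 10% (90%) of the volume."""
    d = np.sort(dose[mask])[::-1]          # voxel doses, descending
    d10 = d[int(0.10 * d.size)]
    d90 = d[int(0.90 * d.size)]
    return 100.0 * np.array([d.mean(), d10, d90]) / prescription
```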
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaskowiak, J; Ahmad, S; Ali, I
Purpose: To investigate the correlation of displacement vector fields (DVF) calculated by deformable image registration algorithms with motion parameters in helical, axial and cone-beam CT images with motion artifacts. Methods: A mobile thorax phantom with well-known targets of different sizes, made from water-equivalent material and inserted in foam, was used to simulate lung lesions. The thorax phantom was imaged with helical, axial and cone-beam CT. The phantom was moved with a cyclic motion with different motion amplitudes and frequencies along the superior-inferior direction. Different deformable image registration algorithms, including demons, fast demons, Horn-Schunck and iterative optical flow from the DIRART software, were used to deform the CT images of the mobile phantom to the CT images of the stationary phantom. Results: The values of the displacement vectors calculated by the deformable image registration algorithms correlated strongly with motion amplitude, where large displacement vectors were calculated for CT images with large motion amplitudes. For example, the maximal displacement vectors were nearly equal to the motion amplitudes (5 mm, 10 mm or 20 mm) at interfaces between the mobile targets and lung tissue, while the minimal displacement vectors were nearly equal to the negative of the motion amplitudes. The maximal and minimal displacement vectors matched the edges of the blurred targets along the Z-axis (motion direction), while DVFs were small in the other directions. This indicates that the edges blurred by phantom motion were shifted largely to match the actual target edge; these shifts were nearly equal to the motion amplitude. Conclusions: The DVFs from deformable image registration algorithms correlated well with the motion amplitude of well-defined mobile targets. This can be used to extract motion parameters such as amplitude. However, as motion amplitudes increased, image artifacts increased significantly, which limited image quality, and a poor correlation between the motion amplitude and DVF was obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shieh, C; Kipritidis, J; OBrien, R
2014-06-15
Purpose: The Feldkamp-Davis-Kress (FDK) algorithm currently used for clinical thoracic 4-dimensional (4D) cone-beam CT (CBCT) reconstruction suffers from noise and streaking artifacts due to projection under-sampling. Compressed sensing theory enables reconstruction of under-sampled datasets via total-variation (TV) minimization, but TV-minimization algorithms such as adaptive-steepest-descent-projection-onto-convex-sets (ASD-POCS) often converge slowly and are prone to over-smoothing anatomical details. These disadvantages can be overcome by incorporating general anatomical knowledge via anatomy segmentation. Based on this concept, we have developed an anatomical-adaptive compressed sensing (AACS) algorithm for thoracic 4D-CBCT reconstruction. Methods: AACS is based on the ASD-POCS framework, where each iteration consists of a TV-minimization step and a data fidelity constraint step. Prior to every AACS iteration, four major thoracic anatomical structures - soft tissue, lungs, bony anatomy, and pulmonary details - were segmented from the updated solution image. Based on the segmentation, an anatomical-adaptive weighting was applied to the TV-minimization step, so that TV-minimization was enhanced at noisy/streaky regions and suppressed at anatomical structures of interest. The image quality and convergence speed of AACS were compared to conventional ASD-POCS using an XCAT digital phantom and a patient scan. Results: For the XCAT phantom, the AACS image represented the ground truth better than the ASD-POCS image, giving a higher structural similarity index (0.93 vs. 0.84) and lower absolute difference (1.1×10^4 vs. 1.4×10^4). For the patient case, while both algorithms resulted in much less noise and streaking than FDK, the AACS image showed considerably better contrast and sharpness of the vessels, tumor, and fiducial marker than the ASD-POCS image. In addition, AACS converged over 50% faster than ASD-POCS in both cases. Conclusions: The proposed AACS algorithm was shown to reconstruct thoracic 4D-CBCT images more accurately and with faster convergence compared to ASD-POCS. The superior image quality and rapid convergence make AACS promising for future clinical use.
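One anatomy-weighted TV descent step of the kind described can be sketched as follows; construction of the weighting map from the four segmented structures is omitted, and the step size is illustrative:

```python
import numpy as np

def weighted_tv_step(image, weights, step=0.2, eps=1e-8):
    """One anatomy-weighted TV-minimization descent step.

    weights: array the same shape as `image`; values > 1 enhance smoothing
    in noisy/streaky regions, values < 1 suppress it over anatomy to keep.
    The TV gradient is the standard steepest-descent direction of ASD-POCS."""
    dx = np.diff(image, axis=1, append=image[:, -1:])
    dy = np.diff(image, axis=0, append=image[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    # Divergence of the normalized gradient field = -d(TV)/d(image).
    div = (np.diff(dx / mag, axis=1, prepend=(dx / mag)[:, :1]) +
           np.diff(dy / mag, axis=0, prepend=(dy / mag)[:, :1]))
    return image + step * weights * div
```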
Computation of asymmetric supersonic flows around cones at large incidence
NASA Technical Reports Server (NTRS)
Degani, David
1987-01-01
The Schiff-Steger parabolized Navier-Stokes (PNS) code has been modified to allow computation of conical flowfields around cones at high incidence. The improved algorithm of Degani and Schiff has been incorporated into the PNS code. This algorithm adds the cross-derivative and circumferential viscous terms to the original PNS code and modifies the algebraic eddy viscosity turbulence model to take into account regions of so-called cross-flow separation. Assuming the flowfield is conical (but not necessarily symmetric), a marching-stepback procedure is used: the solution is marched one step downstream using the improved PNS code, and the flow variables are then scaled to place the solution back at the original station. The process is repeated until no change in the flow variables is observed with further marching. The flow variables are then constant along rays of the flowfield. The experiments of Bannik and Nebbeling were chosen as a test case. In these experiments a cone of 7.5 deg. half angle at Mach number 2.94 and Reynolds number 1.372 x 10^7 was tested up to 34 deg. angle of attack. At high angle of attack, nonconical asymmetric leeward-side vortex patterns were observed. In the first set of computations, using an earlier solution of the above cone at angle of attack of 22.6 deg. and at station x=0.5 as a starting solution, the angle of attack was gradually increased up to 34 deg. During this procedure the grid was carefully adjusted to capture the bow shock. A stable, converged symmetric solution was obtained. Since the numerical code converged to a symmetric solution, which is not the physical one, the stability was tested by a random perturbation at each point. The possible effect of surface roughness or imperfect body shape was also investigated. It was concluded that although the assumption of conical viscous flow can be very useful for certain cases, it cannot be used for the present case. Thus the second part of the investigation attempted to obtain a marching (in space) solution with the PNS method using the conical solution as initial data. Finally, the solution of the full Navier-Stokes equations was carried out.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao Daliang; Earl, Matthew A.; Luan, Shuang
2006-04-15
A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
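The CIMO search itself is a standard simulated-annealing loop over aperture shapes and weights; in the structural sketch below, the rendering and perturbation operators are abstract placeholders for the machine-specific pieces, and the cooling schedule is an illustrative choice:

```python
import numpy as np

def anneal_segments(target, render, init_shapes, init_weights,
                    perturb, t0=1.0, cooling=0.995, n_iter=5000, seed=0):
    """Simulated-annealing sketch of CIMO-style sequencing: simultaneously
    perturb aperture shapes and weights to minimize the difference between
    the optimized map `target` and the map rendered from the segments.
    `render(shapes, weights)` and `perturb(shapes, weights, rng)` are
    user-supplied (machine- and collimator-specific)."""
    rng = np.random.default_rng(seed)
    shapes, weights = init_shapes, np.asarray(init_weights, float)
    cost = np.sum((render(shapes, weights) - target) ** 2)
    t = t0
    for _ in range(n_iter):
        new_shapes, new_weights = perturb(shapes, weights, rng)
        new_cost = np.sum((render(new_shapes, new_weights) - target) ** 2)
        # Metropolis criterion: always accept improvements, occasionally
        # accept worse solutions to escape local minima.
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / t):
            shapes, weights, cost = new_shapes, new_weights, new_cost
        t *= cooling                      # geometric cooling schedule
    return shapes, weights
```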
Multichannel Polarization-Controllable Superpositions of Orbital Angular Momentum States.
Yue, Fuyong; Wen, Dandan; Zhang, Chunmei; Gerardot, Brian D; Wang, Wei; Zhang, Shuang; Chen, Xianzhong
2017-04-01
A facile metasurface approach is shown to realize polarization-controllable multichannel superpositions of orbital angular momentum (OAM) states with various topological charges. By manipulating the polarization state of the incident light, four kinds of superpositions of OAM states are realized using a single metasurface consisting of space-variant arrays of gold nanoantennas.
Splash-cup plants accelerate raindrops to disperse seeds.
Amador, Guillermo J; Yamada, Yasukuni; McCurley, Matthew; Hu, David L
2013-02-01
The conical flowers of splash-cup plants Chrysosplenium and Mazus catch raindrops opportunistically, exploiting the subsequent splash to disperse their seeds. In this combined experimental and theoretical study, we elucidate their mechanism for maximizing dispersal distance. We fabricate conical plant mimics using three-dimensional printing, and use high-speed video to visualize splash profiles and seed travel distance. Drop impacts that strike the cup off-centre achieve the largest dispersal distances of up to 1 m. Such distances are achieved because splash speeds are three to five times faster than incoming drop speeds, and so faster than the traditionally studied splashes occurring upon horizontal surfaces. This anomalous splash speed arises from the superposition of two components of momentum, one associated with a component of the drop's motion parallel to the splash-cup surface, and the other associated with film spreading induced by impact with the splash-cup. Our model incorporating these effects predicts the observed dispersal distance within 6-18% error. According to our experiments, the optimal cone angle for the splash-cup is 40°, a value consistent with the average of five species of splash-cup plants. This optimal angle arises from the competing effects of velocity amplification and projectile launching angle.
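The competition the authors describe can be seen in a toy ballistic model. The sketch below assumes the seed leaves parallel to the cup wall at a fixed multiple of the drop speed; because it omits the angle dependence of the amplification, it recovers the classic near-45° ballistic optimum rather than the measured 40°:

```python
import numpy as np

g = 9.81        # m/s^2
v_drop = 2.0    # incoming drop speed, m/s (illustrative)
h = 0.05        # launch height of the cup above the ground, m (assumed)

def dispersal_distance(cone_angle_deg, amplification=4.0):
    """Ballistic travel of a seed ejected along the splash-cup wall, with the
    splash leaving at `amplification` times the drop speed (3-5x reported)."""
    theta = np.radians(cone_angle_deg)          # launch angle above horizontal
    v = amplification * v_drop
    vx, vy = v * np.cos(theta), v * np.sin(theta)
    t = (vy + np.sqrt(vy ** 2 + 2 * g * h)) / g # flight time back to ground
    return vx * t

angles = np.arange(10, 81, 5)
best = max(angles, key=dispersal_distance)
print({int(a): round(dispersal_distance(a), 2) for a in angles}, "best:", best)
```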
Quantitative Laser-Saturated Fluorescence Measurements of Nitric Oxide in a Heptane Spray Flame
NASA Technical Reports Server (NTRS)
Cooper, Clayton S.; Laurendeau, Normand M.; Lee, Chi (Technical Monitor)
1997-01-01
We report spatially resolved laser-saturated fluorescence measurements of NO concentration in a pre-heated, lean-direct injection (LDI) spray flame at atmospheric pressure. The spray is produced by a hollow-cone, pressure-atomized nozzle supplied with liquid heptane. NO is excited via the Q2(26.5) transition of the gamma(0,0) band. Detection is performed in a 2-nm region centered on the gamma(0,1) band. Because of the relatively close spectral spacing between the excitation (226 nm) and detection wavelengths (236 nm), the gamma(0,1) band of NO cannot be isolated from the spectral wings of the Mie scattering signal produced by the spray. To account for the resulting superposition of the fluorescence and scattering signals, a background subtraction method has been developed that utilizes a nearby non-resonant wavelength. Excitation scans have been performed to locate the optimum off-line wavelength. Detection scans have been performed at problematic locations in the flame to determine possible fluorescence interferences from UHCs and PAHs at both the on-line and off-line excitation wavelengths. Quantitative radial NO profiles are presented and analyzed so as to better understand the operation of lean-direct injectors for gas turbine combustors.
Analytical model of a corona discharge from a conical electrode under saturation
NASA Astrophysics Data System (ADS)
Boltachev, G. Sh.; Zubarev, N. M.
2012-11-01
Exact partial solutions are found for the electric field distribution in the outer region of a stationary unipolar corona discharge from an ideal conical needle in the space-charge-limited current mode with allowance for the electric field dependence of the ion mobility. It is assumed that only the very tip of the cone is responsible for the discharge, i.e., that the ionization zone is a point. The solutions are obtained by joining the spherically symmetric potential distribution in the drift space and the self-similar potential distribution in the space-charge-free region. Such solutions are outside the framework of the conventional Deutsch approximation, according to which the space charge insignificantly influences the shape of equipotential surfaces and electric lines of force. The dependence of the corona-discharge saturation current on the apex angle of the conical electrode and the applied potential difference is derived. A simple analytical model is suggested that describes drift in the point-plane electrode geometry under saturation as a superposition of two exact solutions for the field potential. In terms of this model, the angular distribution of the current density over the massive plane electrode is derived, which agrees well with Warburg's empirical law.
Complexity of Quantum Impurity Problems
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Gosset, David
2017-12-01
We give a quasi-polynomial time classical algorithm for estimating the ground state energy and for computing low energy states of quantum impurity models. Such models describe a bath of free fermions coupled to a small interacting subsystem called an impurity. The full system consists of n fermionic modes and has a Hamiltonian H = H_0 + H_imp, where H_0 is quadratic in creation-annihilation operators and H_imp is an arbitrary Hamiltonian acting on a subset of O(1) modes. We show that the ground energy of H can be approximated with an additive error 2^(-b) in time n^3 exp[O(b^3)]. Our algorithm also finds a low energy state that achieves this approximation. The low energy state is represented as a superposition of exp[O(b^3)] fermionic Gaussian states. To arrive at this result we prove several theorems concerning exact ground states of impurity models. In particular, we show that eigenvalues of the ground state covariance matrix decay exponentially with the exponent depending very mildly on the spectral gap of H_0. A key ingredient of our proof is Zolotarev's rational approximation to the √x function. We anticipate that our algorithms may be used in hybrid quantum-classical simulations of strongly correlated materials based on dynamical mean field theory. We implemented a simplified practical version of our algorithm and benchmarked it using the single impurity Anderson model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkoff, T. J., E-mail: adidasty@gmail.com
We motivate and introduce a class of “hierarchical” quantum superposition states of N coupled quantum oscillators. Unlike other well-known multimode photonic Schrödinger-cat states such as entangled coherent states, the hierarchical superposition states are characterized as two-branch superpositions of tensor products of single-mode Schrödinger-cat states. In addition to analyzing the photon statistics and quasiprobability distributions of prominent examples of these nonclassical states, we consider their usefulness for high-precision quantum metrology of nonlinear optical Hamiltonians and quantify their mode entanglement. We propose two methods for generating hierarchical superpositions in N = 2 coupled microwave cavities, exploiting currently existing quantum optical technology for generating entanglement between spatially separated electromagnetic field modes.
Space-variant polarization patterns of non-collinear Poincaré superpositions
NASA Astrophysics Data System (ADS)
Galvez, E. J.; Beach, K.; Zeosky, J. J.; Khajavi, B.
2015-03-01
We present analysis and measurements of the polarization patterns produced by non-collinear superpositions of Laguerre-Gauss spatial modes in orthogonal polarization states, which are known as Poincaré modes. Our findings agree with predictions (I. Freund Opt. Lett. 35, 148-150 (2010)), that superpositions containing a C-point lead to a rotation of the polarization ellipse in 3-dimensions. Here we do imaging polarimetry of superpositions of first- and zero-order spatial modes at relative beam angles of 0-4 arcmin. We find Poincaré-type polarization patterns showing fringes in polarization orientation, but which preserve the polarization-singularity index for all three cases of C-points: lemons, stars and monstars.
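A minimal numerical sketch of such a superposition, assuming a simplified p = 0 Laguerre-Gauss profile and a linear tilt phase to mimic the small relative beam angle; the Stokes-parameter conventions (right-circular positive S3) are one common choice:

```python
import numpy as np

def lg_mode(r, phi, l, w0=1.0):
    """Simplified Laguerre-Gauss amplitude (radial index p = 0), unnormalized."""
    return (np.sqrt(2) * r / w0) ** abs(l) * np.exp(-(r / w0) ** 2) \
        * np.exp(1j * l * phi)

n = 256
x = np.linspace(-2, 2, n)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)

kt = 20.0  # transverse phase gradient mimicking the small relative beam angle
E_R = lg_mode(r, phi, 1)                        # first-order mode, right circular
E_L = lg_mode(r, phi, 0) * np.exp(1j * kt * X)  # zero-order mode, left circular

# Stokes parameters in the circular basis
S0 = np.abs(E_R) ** 2 + np.abs(E_L) ** 2
S1 = 2 * np.real(E_R * np.conj(E_L))
S2 = 2 * np.imag(E_R * np.conj(E_L))
S3 = np.abs(E_R) ** 2 - np.abs(E_L) ** 2

orientation = 0.5 * np.arctan2(S2, S1)  # ellipse orientation; fringes show here
ellipticity = 0.5 * np.arcsin(np.clip(S3 / np.maximum(S0, 1e-12), -1.0, 1.0))
```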
Non-coaxial superposition of vector vortex beams.
Aadhi, A; Vaity, Pravin; Chithrabhanu, P; Reddy, Salla Gangi; Prabakar, Shashi; Singh, R P
2016-02-10
Vector vortex beams are classified into four types depending upon the spatial variation of their polarization vector. We have generated all four types of vector vortex beams by using a modified polarization Sagnac interferometer with a vortex lens. Further, we have studied the non-coaxial superposition of two vector vortex beams. It is observed that the superposition of two vector vortex beams with the same polarization singularity leads to a beam with another kind of polarization singularity in their interaction region. The results may be of importance for the ultrahigh security of polarization-encrypted data that utilizes vector vortex beams and for multiple optical trapping with the non-coaxial superposition of vector vortex beams. We verified our experimental results with theory.
Sze, Michelle Wynne C; Sugon, Quirino M; McNamara, Daniel J
2010-11-01
In this paper, we use Clifford (geometric) algebra Cl(3,0) to verify whether electromagnetic energy-momentum density is still conserved for the oblique superposition of two elliptically polarized plane waves with the same frequency. We show that energy-momentum conservation is valid at any time only for the superposition of two counter-propagating elliptically polarized plane waves. We show that the time-average energy-momentum of the superposition of two circularly polarized waves with opposite handedness is conserved regardless of the propagation directions of the waves. Finally, we show that the resulting momentum density of the superposed waves generally has a vector component perpendicular to the momentum densities of the individual waves.
Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections
NASA Astrophysics Data System (ADS)
Bertholet, J.; Wan, H.; Toftegaard, J.; Schmidt, M. L.; Chotard, F.; Parikh, P. J.; Poulsen, P. R.
2017-02-01
Radio-opaque fiducial markers of different shapes are often implanted in or near abdominal or thoracic tumors to act as surrogates for the tumor position during radiotherapy. They can be used for real-time treatment adaptation, but this requires a robust, automatic segmentation method able to handle arbitrarily shaped markers in a rotational imaging geometry such as cone-beam computed tomography (CBCT) projection images and intra-treatment images. In this study, we propose a fully automatic dynamic programming (DP) assisted template-based (TB) segmentation method. Based on an initial DP segmentation, the DPTB algorithm generates and uses a 3D marker model to create 2D templates at any projection angle. The 2D templates are used to segment the marker position as the position with highest normalized cross-correlation in a search area centered at the DP segmented position. The accuracy of the DP algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated. The mean 2D segmentation error of DP was reduced from 4.1 pixels to 3.0 pixels by DPTB, while the fraction of wrong segmentations was reduced from 17.4% to 6.8%. DPTB allowed rejection of uncertain segmentations as deemed by a low normalized cross-correlation coefficient and contrast-to-noise ratio. For a rejection rate of 9.97%, the sensitivity in detecting wrong segmentations was 67% and the specificity was 94%. The accepted segmentations had a mean segmentation error of 1.8 pixels and 2.5% wrong segmentations.
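The NCC search step can be sketched compactly. The template (rendered from the 3D marker model) and the DP-seeded center are assumed inputs here; a dense spatial search is used for clarity, whereas practical code would use FFT-based correlation:

```python
import numpy as np

def ncc_match(image, template, center, search_radius):
    """Return the (row, col) position with the highest normalized
    cross-correlation with `template`, searching a window around `center`."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, center
    cy, cx = center
    for y in range(cy - search_radius, cy + search_radius + 1):
        for x in range(cx - search_radius, cx + search_radius + 1):
            y0, x0 = y - th // 2, x - tw // 2
            if (y0 < 0 or x0 < 0 or
                    y0 + th > image.shape[0] or x0 + tw > image.shape[1]):
                continue                       # window falls off the image
            patch = image[y0:y0 + th, x0:x0 + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:
                continue
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score  # a low best_score can gate rejection
```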
A prototype table-top inverse-geometry volumetric CT system.
Schmidt, Taly Gilat; Star-Lack, Josh; Bennett, N Robert; Mazin, Samuel R; Solomon, Edward G; Fahrig, Rebecca; Pelc, Norbert J
2006-06-01
A table-top volumetric CT system has been implemented that is able to image a 5-cm-thick volume in one circular scan with no cone-beam artifacts. The prototype inverse-geometry CT (IGCT) scanner consists of a large-area, scanned x-ray source and a detector array that is smaller in the transverse direction. The IGCT geometry provides sufficient volumetric sampling because the source and detector have the same axial, or slice direction, extent. This paper describes the implementation of the table-top IGCT scanner, which is based on the NexRay Scanning-Beam Digital X-ray system (NexRay, Inc., Los Gatos, CA), and an investigation of the system performance. The alignment and flat-field calibration procedures are described, along with a summary of the reconstruction algorithm. The resolution and noise performance of the prototype IGCT system are studied through experiments and further supported by analytical predictions and simulations. To study the presence of cone-beam artifacts, a "Defrise" phantom was scanned on both the prototype IGCT scanner and a micro CT system with a +/-5° cone angle for a 4.5-cm volume thickness. Images of inner ear specimens are presented and compared to those from clinical CT systems. Results showed that the prototype IGCT system has a 0.25-mm isotropic resolution and that noise comparable to that from a clinical scanner with equivalent spatial resolution is achievable. The measured MTF and noise values agreed reasonably well with theoretical predictions and computer simulations. The IGCT system was able to faithfully reconstruct the laminated pattern of the Defrise phantom while the micro CT system suffered severe cone-beam artifacts for the same object. The inner ear acquisition verified that the IGCT system can image a complex anatomical object, and the resulting images exhibited more high-resolution details than the clinical CT acquisition. Overall, the successful implementation of the prototype system supports the IGCT concept for single-rotation volumetric scanning free from cone-beam artifacts.
A Dynamic Attitude Measurement System Based on LINS
Li, Hanzhou; Pan, Quan; Wang, Xiaoxu; Zhang, Juanni; Li, Jiang; Jiang, Xiangjun
2014-01-01
A dynamic attitude measurement system (DAMS) is developed based on a laser inertial navigation system (LINS). Three factors of the dynamic attitude measurement error using LINS are analyzed: dynamic error, time synchronization and phase lag. An optimal coning-error compensation algorithm is used to reduce coning errors, and two-axis wobbling verification experiments are presented in the paper. The tests indicate that the attitude accuracy is improved 2-fold by the algorithm. In order to decrease coning errors further, the attitude updating frequency is increased from 200 Hz to 2000 Hz. At the same time, a novel finite impulse response (FIR) filter with three notches is designed to filter the dither frequency of the ring laser gyro (RLG). The comparison tests suggest that the new filter is five times more effective than the old one. The paper indicates that the phase-frequency characteristics of the FIR filter and the first-order holder of the navigation computer constitute the main sources of phase lag in LINS. A formula to calculate the LINS attitude phase lag is introduced in the paper. The expressions of dynamic attitude errors induced by phase lag are derived. The paper proposes a novel synchronization mechanism that is able to simultaneously solve the problems of dynamic test synchronization and phase compensation. A single-axis turntable and a laser interferometer are applied to verify the synchronization mechanism. The experimental results show that the theoretically calculated values of phase lag and of the attitude error induced by phase lag both match the test data perfectly. The block diagram of the DAMS and physical photos are presented in the paper. The final experiments demonstrate that the real-time attitude measurement accuracy of the DAMS can reach up to 20″ (1σ) and the synchronization error is less than 0.2 ms under three-axis wobbling for 10 min. PMID:25177802
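As an illustration of the filtering step, the sketch below designs a linear-phase FIR filter with three notches using SciPy; the notch frequencies and widths are made-up placeholders, since the paper's RLG dither lines are not listed here. Note that a linear-phase FIR of N taps delays the signal by (N-1)/2 samples, which is exactly the kind of phase lag the paper compensates:

```python
import numpy as np
from scipy.signal import firwin2, freqz

fs = 2000.0                       # attitude update rate from the paper, Hz
notches = [350.0, 450.0, 550.0]   # RLG dither lines, Hz (illustrative values)
width = 30.0                      # notch half-width, Hz (assumed)

# Piecewise-linear desired magnitude: unity everywhere, zero at the notches.
freq, gain = [0.0], [1.0]
for f0 in notches:
    freq += [f0 - width, f0, f0 + width]
    gain += [1.0, 0.0, 1.0]
freq += [fs / 2]
gain += [1.0]

taps = firwin2(255, freq, gain, fs=fs)  # linear-phase FIR with three notches
delay_samples = (len(taps) - 1) / 2     # constant group delay to compensate

w, h = freqz(taps, worN=4096, fs=fs)    # inspect the achieved response
```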
A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Hao; Folkerts, Michael; Jiang, Steve B., E-mail: xun.jia@utsouthwestern.edu, E-mail: steve.jiang@UTSouthwestern.edu
2014-07-15
Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is invented to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding the anatomical structure location accuracy, 0.204 mm average differences and 0.484 mm maximum difference are found for the phantom case, and maximum differences of 0.3–0.5 mm for patients 1–3 are observed. As for the image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1–1.5 min per phase. Conclusions: High-quality 4D-CBCT imaging based on the clinically standard 1-min 3D CBCT scanning protocol is feasible via the proposed hybrid reconstruction algorithm.
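Generic forward-backward splitting has a compact form. The toy instance below uses a least-squares forward step and a soft-thresholding proximal step; it illustrates the splitting template only, not the paper's specific reconstruction and registration operators:

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, step, iters=100):
    """Minimize f(x) + g(x) by alternating a gradient (forward) step on the
    smooth term f with a proximal (backward) step on the regularizer g.
    In the 4D-CBCT setting the two steps play the roles of the image-
    reconstruction and deformable-registration subproblems; here both
    operators are user-supplied callables."""
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Toy instance: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1 (ISTA).
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))
b = A @ (rng.normal(size=100) * (rng.random(100) < 0.1))
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the data-fidelity gradient
x = forward_backward(
    lambda x: A.T @ (A @ x - b),
    lambda z, t: np.sign(z) * np.maximum(np.abs(z) - lam * t, 0.0),
    np.zeros(100), step, iters=500)
```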
Approaches to reducing photon dose calculation errors near metal implants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Jessie Y.; Followill, David S.; Howell, Reb
Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact reduction methods investigated, the authors found that O-MAR was the most consistent method, resulting in either improved dose calculation accuracy (dental case) or little impact on calculation accuracy (spine case). GSI was unsuccessful at reducing the severe artifacts caused by dental fillings and had very little impact on calculation accuracy. GSI with MARS on the other hand gave mixed results, sometimes introducing metal distortion and increasing calculation errors (titanium rectangular implant and titanium spinal hardware) but other times very successfully reducing artifacts (Cerrobend rectangular implant and dental fillings). Conclusions: Though successful at improving dose calculation accuracy upstream of metal implants, metal kernels were not found to substantially improve accuracy for clinical cases. Of the commercial artifact reduction methods investigated, O-MAR was found to be the most consistent candidate for all-purpose CT simulation imaging. The MARS algorithm for GSI should be used with caution for titanium implants, larger implants, and implants located near heterogeneities as it can distort the size and shape of implants and increase calculation errors.
Tomographic Neutron Imaging using SIRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregor, Jens; FINNEY, Charles E A; Toops, Todd J
2013-01-01
Neutron imaging is complementary to x-ray imaging in that materials such as water and plastic are highly attenuating while material such as metal is nearly transparent. We showcase tomographic imaging of a diesel particulate filter. Reconstruction is done using a modified version of SIRT called PSIRT. We expand on previous work and introduce Tikhonov regularization. We show that near-optimal relaxation can still be achieved. The algorithmic ideas apply to cone beam x-ray CT and other inverse problems.
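A minimal dense-matrix sketch of SIRT with a simple Tikhonov pull, assuming a nonnegative system matrix of intersection lengths; the regularized update shown is one straightforward way to add the penalty, not necessarily the PSIRT variant used by the authors:

```python
import numpy as np

def sirt(A, b, iters=200, lam=0.0, relax=1.0):
    """SIRT for Ax = b with an optional Tikhonov term lam*||x||^2.
    R and C hold the usual inverse row/column sums of A used by SIRT;
    A is assumed to have nonnegative entries (ray intersection lengths)."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # row normalization
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # column normalization
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        update = C * (A.T @ (R * (b - A @ x)))
        x = x + relax * (update - lam * x)       # gradient-style Tikhonov pull
        np.clip(x, 0, None, out=x)               # nonnegative attenuation
    return x
```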
2018-05-14
We will introduce and discuss in some detail the two main classes of jets: cone type and sequential-recombination type. We will discuss their basic properties, as well as more advanced concepts such as jet substructure, jet filtering, ways of optimizing the jet radius, ways of defining the areas of jets, and ways of establishing the quality measure of a jet algorithm in terms of discriminating power in specific searches. Finally we will discuss applications for Higgs searches involving boosted particles.
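As a concrete reference point for the sequential-recombination class, here is a toy generalized-kt clustering in Python (p = -1 gives anti-kt, 0 Cambridge/Aachen, 1 kt). It uses pt-weighted averaging in place of full four-vector recombination and an O(N^3) search, so it is for illustration only (FastJet is the production tool):

```python
import numpy as np

def sequential_jets(particles, R=0.4, p=-1):
    """Toy generalized-kt clustering over (pt, rapidity, phi) triples."""
    parts = [tuple(map(float, pp)) for pp in particles]
    jets = []
    while parts:
        best_d, merge, beam = np.inf, None, None
        for i, a in enumerate(parts):
            if a[0] ** (2 * p) < best_d:         # beam distance d_iB
                best_d, merge, beam = a[0] ** (2 * p), None, i
            for j in range(i + 1, len(parts)):
                b = parts[j]
                dphi = abs(a[2] - b[2])
                dphi = min(dphi, 2 * np.pi - dphi)
                d = min(a[0] ** (2 * p), b[0] ** (2 * p)) \
                    * ((a[1] - b[1]) ** 2 + dphi ** 2) / R ** 2
                if d < best_d:                   # pairwise distance d_ij
                    best_d, merge, beam = d, (i, j), None
        if merge is None:
            jets.append(parts.pop(beam))         # promote to final jet
        else:
            i, j = merge
            a, b = parts[i], parts[j]
            pt = a[0] + b[0]                     # pt-weighted recombination
            parts[i] = (pt, (a[0] * a[1] + b[0] * b[1]) / pt,
                        (a[0] * a[2] + b[0] * b[2]) / pt)
            parts.pop(j)
    return jets

print(sequential_jets([(50, 0.0, 0.0), (30, 0.1, 0.05), (20, 2.0, 3.0)]))
```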
Three-dimensional monochromatic x-ray computed tomography using synchrotron radiation
NASA Astrophysics Data System (ADS)
Saito, Tsuneo; Kudo, Hiroyuki; Takeda, Tohoru; Itai, Yuji; Tokumori, Kenji; Toyofuku, Fukai; Hyodo, Kazuyuki; Ando, Masami; Nishimura, Katsuyuki; Uyama, Chikao
1998-08-01
We describe a technique of 3D computed tomography (3D CT) using monochromatic x rays generated by synchrotron radiation, which performs a direct reconstruction of a 3D volume image of an object from its cone-beam projections. For the development, we propose a practical scanning orbit of the x-ray source to obtain complete 3D information on an object, and its corresponding 3D image reconstruction algorithm. The validity and usefulness of the proposed scanning orbit and reconstruction algorithm were confirmed by computer simulation studies. Based on these investigations, we have developed a prototype 3D monochromatic x-ray CT using synchrotron radiation, which provides exact 3D reconstruction and material-selective imaging by using the K-edge energy subtraction technique.
4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling
NASA Astrophysics Data System (ADS)
Zhong, Zichun; Gu, Xuejun; Mao, Weihua; Wang, Jing
2016-02-01
A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computation efficiency and motion estimation accuracy of SMEIR for 4D-CBCT through developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This will help with the accurate visualization and motion estimation of the tumor on the organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh based strategy outperformed the conventional Feldkamp-Davis-Kress, iterative total variation minimization, original SMEIR and single meshing method based on both qualitative and quantitative evaluations.
Confocal non-line-of-sight imaging based on the light-cone transform
NASA Astrophysics Data System (ADS)
O’Toole, Matthew; Lindell, David B.; Wetzstein, Gordon
2018-03-01
How to image objects that are hidden from a camera’s view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, P; Ma, L
Purpose: To study the feasibility of treating multiple brain tumors with a large number of noncoplanar IMRT beams. Methods: Thirty beams are selected from 390 deliverable beams separated by six degrees in 4pi space. Beam selection optimization is based on a column generation algorithm. MLC leaf size is 2 mm. Dose matrices are calculated with the collapsed cone convolution and superposition method on a 2 mm by 2 mm by 2 mm grid. Twelve brain tumors of various shapes, sizes and locations are used to generate four plans treating 3, 6, 9 and 12 tumors. The radiation dose was 20 Gy prescribed to the 100% isodose line. Dose volume histograms for tumor and brain were compared. Results: All results are based on a 2 mm by 2 mm by 2 mm CT grid. For the 3, 6, 9 and 12 tumor plans, minimum tumor doses are all 20 Gy. Mean tumor doses are 20.0, 20.1, 20.1 and 20.1 Gy. Maximum tumor doses are 23.3, 23.6, 25.4 and 25.4 Gy. Mean ventricle doses are 0.7, 1.7, 2.4 and 3.1 Gy. Mean subventricular zone doses are 0.8, 1.3, 2.2 and 3.2 Gy. Average generalized equivalent uniform dose (gEUD) values for tumor are 20.1, 20.1, 20.2 and 20.2 Gy. The conformity index (CI) values are close to 1 for all 4 plans. The gradient index (GI) values are 2.50, 2.05, 2.09 and 2.19. Conclusion: Compared with published Gamma Knife treatment studies, the noncoplanar IMRT treatment plan is superior in terms of dose conformity. Due to the maximum limit of beams per plan, Gamma Knife has to treat multiple tumors separately in different plans. Noncoplanar IMRT plans can theoretically be delivered in a single plan on any modern linac with an automated couch and image guidance. This warrants further study of noncoplanar IMRT as a viable treatment solution for multiple brain tumors.
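For reference, the gEUD figure of merit quoted above reduces to a one-liner; the choice of the parameter a is organ-specific and is not stated in the abstract, so the value below is only a placeholder:

```python
import numpy as np

def gEUD(doses, a):
    """Generalized equivalent uniform dose over voxel doses (Gy): a = 1 gives
    the mean, large positive a approaches the maximum (serial organs), and
    negative a approaches the minimum (commonly used for targets)."""
    d = np.asarray(doses, dtype=float)
    return float(np.mean(d ** a) ** (1.0 / a))

tumor_voxels = np.random.default_rng(0).uniform(20.0, 23.3, 5000)  # toy doses
print(round(gEUD(tumor_voxels, a=-10), 2))  # target-type gEUD near the cold spot
```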
Evaluation of a scattering correction method for high energy tomography
NASA Astrophysics Data System (ADS)
Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel
2018-01-01
One of the main drawbacks of cone beam computed tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity, corresponding to an underestimation of absorption, and produces artifacts such as cupping, shading, and streaks in the reconstructed images. Moreover, the scattered radiation biases quantitative tomographic reconstruction (for example, atomic number and density measurement with the dual-energy technique). The effect can be significant and difficult to correct in the MeV energy range for large objects, owing to the higher scatter-to-primary ratio (SPR). Additionally, incident high-energy photons scattered by the Compton effect are more forward-directed and hence more likely to reach the detector. Moreover, in the MeV energy range, the contribution of photons produced by pair production and bremsstrahlung also becomes important. We propose an evaluation of a scattering correction technique based on the Scatter Kernel Superposition (SKS) method. The algorithm uses continuously thickness-adapted kernels. Analytical parameterizations of the scatter kernels are derived in terms of material thickness, forming continuously thickness-adapted kernel maps used to correct the projections. This approach has proved efficient in producing better sampling of the kernels with respect to object thickness. The technique is applicable over a wide range of imaging conditions and, since no extra hardware is required, it is particularly advantageous in cases where experimental complexity must be avoided. The approach has previously been tested successfully in the energy range 100 keV - 6 MeV. In this paper, the kernels are simulated using MCNP in order to take into account both photon and electron processes in the scattered-radiation contribution. We present scatter correction results on a large object scanned with a 9 MeV linear accelerator.
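A schematic of the SKS correction loop, with a single Gaussian standing in for the thickness-adapted kernel family; the kernel model, scatter amplitude, and thickness binning are illustrative assumptions, and the projections are assumed to be normalized transmission values:

```python
import numpy as np
from scipy.signal import fftconvolve

def sks_correct(measured, mu=0.05, iters=5, kernel_lookup=None):
    """Iterative scatter kernel superposition sketch: estimate the
    water-equivalent thickness map from the current primary estimate,
    convolve with a thickness-adapted kernel to predict scatter, and
    subtract. `kernel_lookup(t)` returning a 2D kernel per thickness
    is an assumed interface; a Gaussian stands in here."""
    def default_kernel(t):
        s = 5.0 + 0.5 * t                   # broader scatter for thicker paths
        ax = np.arange(-25, 26)
        K = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * s ** 2))
        return 0.1 * K / K.sum()            # scatter amplitude is illustrative
    kernel_lookup = kernel_lookup or default_kernel
    primary = measured.copy()
    for _ in range(iters):
        thickness = -np.log(np.clip(primary, 1e-6, None)) / mu
        scatter = np.zeros_like(measured)
        for t_lo in range(0, int(thickness.max()) + 1, 5):  # thickness bins
            mask = (thickness >= t_lo) & (thickness < t_lo + 5)
            scatter += fftconvolve(primary * mask,
                                   kernel_lookup(t_lo + 2.5), mode="same")
        primary = np.clip(measured - scatter, 1e-6, None)
    return primary
```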
Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani
2018-02-01
The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is an estimation of the effects of the tandem and ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. In this study, the superposition method is used for obtaining the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared with the results of superposition. The exact dwell positions and times of the source and the positions of the dosimetry points were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, which adds the dose of each source obtained by MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy.
Alagar, Ananda Giri Babu; Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu
2016-01-08
Small fields (smaller than 4 × 4 cm2) are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation in both small fields and heterogeneous media is prone to larger discrepancies, algorithms used by treatment planning systems (TPS) should be evaluated for achieving better treatment results. This report evaluates the accuracy of four model-based algorithms against measurement: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-Xio, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse. Measurements are done using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities like air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons of square field sizes ranging from 1 × 1 to 4 × 4 cm2. Each heterogeneity is introduced individually at two different depths from the depth of dose maximum (Dmax), one setup being nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup is measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the deviation of the whole CADD curve from the measurement, is calculated. It is found that for air and lung heterogeneities, for both 6 and 15 MV, all algorithms show maximum deviation for the 1 × 1 cm2 field size, and the deviation gradually reduces as field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller at 15 MV irrespective of setup. In all heterogeneity setups, the 1 × 1 cm2 field showed maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity is nearer to Dmax than when the same heterogeneity is farther from Dmax. Also, all algorithms show maximum deviation in lower-density materials compared with high-density materials.
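The %NRMSD figure of merit admits a short definition in code; normalization by the measured maximum is an assumption, as the report does not spell out its exact normalization:

```python
import numpy as np

def pct_nrmsd(measured, calculated):
    """Percentage normalized root-mean-squared deviation between a measured
    central-axis depth-dose curve and a TPS-calculated one sampled at the
    same depths."""
    m = np.asarray(measured, dtype=float)
    c = np.asarray(calculated, dtype=float)
    return 100.0 * np.sqrt(np.mean((c - m) ** 2)) / m.max()
```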
Entanglement and quantum superposition induced by a single photon
NASA Astrophysics Data System (ADS)
Lü, Xin-You; Zhu, Gui-Lei; Zheng, Li-Li; Wu, Ying
2018-03-01
We predict the occurrence of single-photon-induced entanglement and quantum superposition in a hybrid quantum model, introducing an optomechanical coupling into the Rabi model. It originates from the photon-dependent quantum property of the ground state featured by the proposed hybrid model. It is associated with a single-photon-induced quantum phase transition, and is immune to the A² term of the spin-field interaction. Moreover, the obtained quantum superposition state is actually a squeezed cat state, which can significantly enhance precision in quantum metrology. This work offers an approach to manipulate entanglement and quantum superposition with a single photon, which might have potential applications in the engineering of new single-photon quantum devices, and also fundamentally broadens the regime of cavity QED.
Ahmad, Moiz; Balter, Peter; Pan, Tinsu
2011-10-01
Data sufficiency is a major problem in four-dimensional cone-beam computed tomography (4D-CBCT) on linear accelerator-integrated scanners for image-guided radiotherapy. Scan times must be in the range of 4-6 min to avoid undersampling artifacts. Various image reconstruction algorithms have been proposed to accommodate undersampled data acquisitions, but these algorithms are computationally expensive, may require long reconstruction times, and may require algorithm parameters to be optimized. The authors present a novel reconstruction method, 4D volume-of-interest (4D-VOI) reconstruction, which suppresses undersampling artifacts and resolves lung tumor motion for undersampled 1-min scans. The 4D-VOI reconstruction is much less computationally expensive than other 4D-CBCT algorithms. The 4D-VOI method uses respiration-correlated projection data to reconstruct a four-dimensional (4D) image inside a VOI containing the moving tumor, and uncorrelated projection data to reconstruct a three-dimensional (3D) image outside the VOI. Anatomical motion is resolved inside the VOI and blurred outside the VOI. The authors acquired a 1-min scan of an anthropomorphic chest phantom containing a moving water-filled sphere. The authors also used previously acquired 1-min scans for two lung cancer patients who had received CBCT-guided radiation therapy. The same raw data were used to test and compare the 4D-VOI reconstruction with the standard 4D reconstruction and the McKinnon-Bates (MB) reconstruction algorithms. Both the 4D-VOI and the MB reconstructions suppress nearly all the streak artifacts compared with the standard 4D reconstruction, but the 4D-VOI has 3-8 times greater contrast-to-noise ratio than the MB reconstruction. In the dynamic chest phantom study, the 4D-VOI and the standard 4D reconstructions both resolved a moving sphere with an 18 mm displacement. The 4D-VOI reconstruction shows a motion blur of only 3 mm, whereas the MB reconstruction shows a motion blur of 13 mm. With graphics processing unit hardware used to accelerate computations, the 4D-VOI reconstruction required a 40-s reconstruction time. 4D-VOI reconstruction effectively reduces undersampling artifacts and resolves lung tumor motion in 4D-CBCT. The 4D-VOI reconstruction is computationally inexpensive compared with more sophisticated iterative algorithms. Compared with these algorithms, our 4D-VOI reconstruction is an attractive alternative in 4D-CBCT for reconstructing target motion without generating numerous streak artifacts.
Safe Maritime Navigation with COLREGS Using Velocity Obstacles
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki; Wolf, Michael T.; Zarzhitsky, Dimitri; Huntsberger, Terrance L.
2011-01-01
This paper presents a motion planning algorithm for Unmanned Surface Vehicles (USVs) to navigate safely in dynamic, cluttered environments. The proposed algorithm not only addresses Hazard Avoidance (HA) for stationary and moving hazards but also applies the International Regulations for Preventing Collisions at Sea (known as COLREGs). The COLREG rules specify, for example, which vessel is responsible for giving way to the other and to which side of the "stand-on" vessel to maneuver. The three primary COLREG rules were considered in this paper: crossing, overtaking, and head-on situations. For USVs to be safely deployed in environments with other traffic boats, it is imperative that the USV's navigation algorithm obey COLREGs. Note also that if other boats disregard their responsibility under COLREGs, the USV will still apply its HA algorithms to avoid a collision. The proposed approach is based on Velocity Obstacles, which generates a cone-shaped obstacle in the velocity space. Because Velocity Obstacles also specify which side of the obstacle the vehicle will pass during the avoidance maneuver, COLREGs are encoded in the velocity space in a natural way. The algorithm is demonstrated via both simulation and on-water tests.
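The collision-cone test at the heart of Velocity Obstacles is compact enough to sketch; the geometry below checks whether the relative velocity falls inside the cone subtended by a combined safety radius, while the COLREG rules would additionally constrain which side of the cone the avoidance velocity may pass:

```python
import numpy as np

def in_velocity_obstacle(p_own, v_own, p_obs, v_obs, radius):
    """Check whether the ownship velocity lies inside the velocity obstacle
    (collision cone) cast by a moving hazard of combined safety `radius`."""
    rel_p = np.asarray(p_obs, float) - np.asarray(p_own, float)
    rel_v = np.asarray(v_own, float) - np.asarray(v_obs, float)
    dist = np.linalg.norm(rel_p)
    if dist <= radius:
        return True                        # already in conflict
    half_angle = np.arcsin(radius / dist)  # cone half-angle at the ownship
    speed = np.linalg.norm(rel_v)
    if speed == 0:
        return False                       # no relative motion, no collision
    cos_between = rel_p @ rel_v / (dist * speed)
    return np.arccos(np.clip(cos_between, -1.0, 1.0)) < half_angle

# Head-on example: two vessels closing along the same line -> inside the cone.
print(in_velocity_obstacle([0, 0], [5, 0], [100, 0], [-5, 0], radius=10.0))
```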
Optimization-based reconstruction for reduction of CBCT artifact in IGRT
NASA Astrophysics Data System (ADS)
Xia, Dan; Zhang, Zheng; Paysan, Pascal; Seghers, Dieter; Brehm, Marcus; Munro, Peter; Sidky, Emil Y.; Pelizzari, Charles; Pan, Xiaochuan
2016-04-01
Kilo-voltage cone-beam computed tomography (CBCT) plays an important role in image guided radiation therapy (IGRT) by providing 3D spatial information of the tumor that is potentially useful for optimizing treatment planning. In current IGRT CBCT systems, reconstructed images obtained with analytic algorithms, such as the FDK algorithm and its variants, may contain artifacts. In an attempt to compensate for the artifacts, we investigate optimization-based reconstruction algorithms such as the ASD-POCS algorithm for potentially reducing artifacts in IGRT CBCT images. In this study, using data acquired with a physical phantom and a patient subject, we demonstrate that the ASD-POCS reconstruction can significantly reduce artifacts observed in clinical reconstructions. Moreover, patient images reconstructed by use of the ASD-POCS algorithm indicate a soft-tissue contrast level improved over that of the clinical reconstruction. We have also performed reconstructions from sparse-view data, and observe that, for current clinical imaging conditions, ASD-POCS reconstructions from data collected at one half of the current clinical projection views appear to show image quality, in terms of spatial and soft-tissue-contrast resolution, higher than that of the corresponding clinical reconstructions.
Granato, Gregory E.; Smith, Kirk P.
1999-01-01
Discrete or composite samples of highway runoff may not adequately represent in-storm water-quality fluctuations because continuous records of water stage, specific conductance, pH, and temperature of the runoff indicate that these properties fluctuate substantially during a storm. Continuous records of water-quality properties can be used to maximize the information obtained about the stormwater runoff system being studied and can provide the context needed to interpret analyses of water samples. Concentrations of the road-salt constituents calcium, sodium, and chloride in highway runoff were estimated from theoretical and empirical relations between specific conductance and the concentrations of these ions. These relations were examined using the analysis of 233 highway-runoff samples collected from August 1988 through March 1995 at four highway-drainage monitoring stations along State Route 25 in southeastern Massachusetts. Theoretically, the specific conductance of a water sample is the sum of the individual conductances attributed to each ionic species in solution (the product of the concentration of each ion in milliequivalents per liter (meq/L) multiplied by the equivalent ionic conductance at infinite dilution), thereby establishing the principle of superposition. Superposition provides an estimate of actual specific conductance that is within measurement error throughout the conductance range of many natural waters, with errors of less than ±5 percent below 1,000 microsiemens per centimeter (µS/cm) and ±10 percent between 1,000 and 4,000 µS/cm if all major ionic constituents are accounted for. A semi-empirical method (adjusted superposition) was used to adjust for concentration effects (superposition-method prediction errors at high and low concentrations) and to relate measured specific conductance to that calculated using superposition. The adjusted superposition method, which was developed to interpret the State Route 25 highway-runoff records, accounts for contributions of constituents other than calcium, sodium, and chloride in dilute waters. The adjusted superposition method also accounts for the attenuation of each constituent's contribution to conductance as ionic strength increases. Use of the adjusted superposition method generally reduced predictive error to within measurement error throughout the range of specific conductance (from 37 to 51,500 µS/cm) in the highway-runoff samples. The effects of pH, temperature, and organic constituents on the relation between concentrations of dissolved constituents and measured specific conductance were examined, but these properties did not substantially affect interpretation of the Route 25 data set. Predictive abilities of the adjusted superposition method were similar to results obtained by standard regression techniques, but the adjusted superposition method has several advantages. Adjusted superposition can be applied using available published data about the constituents in precipitation, highway runoff, and the deicing chemicals applied to a highway. This semi-empirical method can be used as a predictive and diagnostic tool before a substantial number of samples are collected, whereas the power of the regression method rests on a large number of water-quality analyses that may be affected by bias in the data.
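The superposition principle described above amounts to a weighted sum. A minimal sketch, using handbook equivalent ionic conductances at infinite dilution (values approximate):

```python
# Equivalent ionic conductances at infinite dilution, S*cm^2 per equivalent
# (approximate handbook values, used here only to illustrate the sum).
LAMBDA0 = {"Na": 50.1, "Cl": 76.3, "Ca": 59.5}  # Ca entry is for (1/2)Ca2+

def superposition_sc(conc_meq_per_L):
    """Specific conductance (uS/cm) predicted by superposition: the sum over
    ions of concentration (meq/L) times equivalent ionic conductance."""
    return sum(LAMBDA0[ion] * meq for ion, meq in conc_meq_per_L.items())

# Runoff dominated by road salt: 2 meq/L Na+, 2 meq/L Cl-, 0.5 meq/L Ca2+
print(round(superposition_sc({"Na": 2.0, "Cl": 2.0, "Ca": 0.5}), 1))  # ~282.6
```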
A genetic-algorithm approach for assessing the liquefaction potential of sandy soils
NASA Astrophysics Data System (ADS)
Sen, G.; Akyol, E.
2010-04-01
The determination of liquefaction potential must take into account a large number of parameters, which gives the liquefaction phenomenon a complex nonlinear structure. Conventional methods rely on simple statistical and empirical relations or charts, but they cannot characterize these complexities. Genetic algorithms are suited to solving these types of problems. A genetic-algorithm-based model has been developed to determine liquefaction potential, using Cone Penetration Test datasets derived from case studies of sandy soils. Software has been developed that uses genetic algorithms for parameter selection and assessment of liquefaction potential. Several estimation functions for the assessment of a Liquefaction Index were then generated from the dataset. The generated Liquefaction Index estimation functions were evaluated by assessing the training and test data. The suggested formulation estimates liquefaction occurrence with significant accuracy. Moreover, a parametric study of the Liquefaction Index curves shows good agreement with the physical behaviour. The proportion of misestimated cases was only 7.8% for the proposed method, which is quite low compared with another commonly used method.
NASA Astrophysics Data System (ADS)
Bogusz, Michael
1993-01-01
The need for a systematic methodology for the analysis of aircraft electromagnetic compatibility (EMC) problems is examined. The available computer aids used in aircraft EMC analysis are assessed, and a theoretical basis is established for the complex algorithms which identify and quantify electromagnetic interactions. An overview is presented of one particularly well-established aircraft antenna-to-antenna EMC analysis code, the Aircraft Inter-Antenna Propagation with Graphics (AAPG) Version 07 software. The specific new algorithms created to compute cone geodesics and their associated path losses and to graph the physical coupling path are discussed. These algorithms are validated against basic principles. Loss computations apply the uniform geometrical theory of diffraction and are subsequently compared to measurement data. The increased modelling and analysis capabilities of the newly developed AAPG Version 09 are compared to those of Version 07. Several models of real aircraft, namely the Electronic Systems Trainer Challenger, are generated and provided as a basis for this preliminary comparative assessment. Issues such as software reliability, algorithm stability, and quality of hardcopy output are also discussed.
Endmember extraction from hyperspectral image based on discrete firefly algorithm (EE-DFA)
NASA Astrophysics Data System (ADS)
Zhang, Chengye; Qin, Qiming; Zhang, Tianyuan; Sun, Yuanheng; Chen, Chao
2017-04-01
This study proposed a novel method to extract endmembers from hyperspectral images based on a discrete firefly algorithm (EE-DFA). Endmembers are the input of many spectral unmixing algorithms. Hence, in this paper, endmember extraction from a hyperspectral image is regarded as a combinatorial optimization problem whose objective is the best spectral unmixing result, which can be solved by the discrete firefly algorithm. Two series of experiments were conducted on synthetic hyperspectral datasets with different SNR and on the AVIRIS Cuprite dataset, respectively. The experimental results were compared with the endmembers extracted by four popular methods: the sequential maximum angle convex cone (SMACC), N-FINDR, Vertex Component Analysis (VCA), and Minimum Volume Constrained Nonnegative Matrix Factorization (MVC-NMF). Moreover, the effect of the parameters in the proposed method was tested on both the synthetic hyperspectral datasets and the AVIRIS Cuprite dataset, and a recommended parameter setting is proposed. The results in this study demonstrated that the proposed EE-DFA method showed better performance than the existing popular methods. Moreover, EE-DFA is robust under different SNR conditions.
Viscous Effects in the Elastodynamics of Thick Beams
NASA Technical Reports Server (NTRS)
Johnson, A. R.; Tessler, A.
1997-01-01
A viscoelastic higher-order thick beam finite element formulation is extended to include elastodynamic deformations. The material constitutive law is a special differential form of the Maxwell solid. In the constitutive model, the elastic strains and the conjugate viscous strains are coupled through a system of first-order ordinary differential equations. The total time-dependent stress is the superposition of its elastic and viscous components. The elastodynamic equations of motion are derived from the virtual work principle. Computational examples are carried out for a thick orthotropic cantilevered beam. A quasi-static relaxation problem is employed as a validation test for the elastodynamic algorithm. The elastodynamic code is demonstrated by analyzing the damped vibrations of the beam, which is deformed and then released to vibrate freely.
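A minimal time-stepping sketch of such a constitutive model (a generalized Maxwell solid whose internal branch stresses evolve through first-order ODEs) is shown below; the moduli, relaxation times, and imposed strain history are illustrative values, not those of the paper.

```python
# Explicit-Euler stress update for a generalized Maxwell solid:
# total stress = long-term elastic part + sum of viscous branch stresses,
# with each branch obeying  d(sigma_i)/dt + sigma_i/tau_i = E_i d(eps)/dt.
import numpy as np

E_inf, E_i = 1.0, np.array([0.5, 0.2])             # long-term and branch moduli
tau_i = np.array([0.1, 1.0])                       # branch relaxation times
dt, n = 1e-3, 5000
eps = 0.01 * (1 - np.exp(-np.linspace(0, 5, n)))   # imposed strain ramp

sigma_i = np.zeros_like(E_i)                       # viscous branch stresses
sigma = np.zeros(n)
for k in range(1, n):
    deps = eps[k] - eps[k - 1]
    sigma_i += E_i * deps - sigma_i * dt / tau_i   # branch ODE update
    sigma[k] = E_inf * eps[k] + sigma_i.sum()      # elastic + viscous superposition
print(sigma[-1])                                    # relaxed stress at end of ramp
```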
Sadeghi, Mohammad Hosein; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani
2018-01-01
Purpose: The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is an estimation of the effects of the tandem and ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. Material and methods: In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared with the results of superposition-1. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. Results: The results of this study showed no significant differences between the results of the superposition method and the MC simulations for different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. Conclusions: According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, which adds the dose from each source obtained by MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy. PMID:29619061
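The superposition step itself is simply a dwell-time-weighted sum of single-source dose grids. A toy sketch, with random arrays standing in for the per-dwell-position Monte Carlo dose-rate grids:

```python
# Dose superposition: total dose = sum over dwell positions of
# (dwell time) x (single-source dose-rate grid). All arrays are placeholders.
import numpy as np

rng = np.random.default_rng(1)
dose_rate_per_source = rng.random((8, 64, 64, 64))  # 8 dwell positions, Gy/s grids
dwell_times = rng.uniform(5, 30, size=8)            # seconds at each dwell position

total_dose = np.tensordot(dwell_times, dose_rate_per_source, axes=1)  # Gy
print(total_dose[32, 40, 32])                       # dose at one dosimetry voxel
```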
Performance-scalable volumetric data classification for online industrial inspection
NASA Astrophysics Data System (ADS)
Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.
2002-03-01
Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders of magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection, and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
Optimization-based image reconstruction from sparse-view data in offset-detector CBCT
NASA Astrophysics Data System (ADS)
Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan
2013-01-01
The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of a health concern to the imaged subject. CBCT-imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed to yield images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.
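Algorithms in the ASD-POCS family alternate a data-fidelity projection step with steepest descent on the image total variation (TV). The sketch below shows that alternation on a tiny 2D toy problem; the detector weighting of ASD-WPOCS and the actual cone-beam projector are omitted, with a random matrix standing in for the system matrix.

```python
# Simplified POCS + TV steepest-descent alternation on a toy 2D problem.
import numpy as np

rng = np.random.default_rng(2)
n = 16
x_true = np.zeros((n, n)); x_true[5:11, 5:11] = 1.0   # piecewise-constant object
A = rng.random((100, n * n))                           # toy "sparse-view" projector
b = A @ x_true.ravel()

x = np.zeros(n * n)
for it in range(200):
    # Data step: gradient step toward A x = b, then positivity constraint.
    x += 1e-4 * A.T @ (b - A @ x)
    x = np.clip(x, 0, None)
    # TV steepest-descent step on the image (descent along div(grad/|grad|)).
    img = x.reshape(n, n)
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(gx**2 + gy**2) + 1e-8
    div = (gx / mag - np.roll(gx / mag, 1, axis=0)
           + gy / mag - np.roll(gy / mag, 1, axis=1))
    x = np.clip((img + 0.02 * div).ravel(), 0, None)

print(np.linalg.norm(x - x_true.ravel()) / np.linalg.norm(x_true))  # rel. error
```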
Holmes, T J; Liu, Y H
1989-11-15
A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson ["Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972)] and L. B. Lucy ["An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974)]. Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
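The Richardson-Lucy / maximum-likelihood iteration referred to above is a multiplicative update, u ← u · Hᵀ(d / Hu), where H is the blur operator. A minimal 1D sketch (2D imaging uses the same update with image-domain convolutions):

```python
# Richardson-Lucy deconvolution for a 1D signal with a known blur kernel.
import numpy as np

def richardson_lucy(data, psf, n_iter=50):
    """Multiplicative ML-EM update: u <- u * conv(psf_flipped, d / conv(psf, u))."""
    u = np.full_like(data, data.mean())              # flat nonnegative start
    psf_flip = psf[::-1]                             # adjoint of convolution
    for _ in range(n_iter):
        blurred = np.convolve(u, psf, mode="same")
        ratio = data / np.maximum(blurred, 1e-12)    # guard against divide-by-zero
        u *= np.convolve(ratio, psf_flip, mode="same")
    return u

psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(64); truth[20] = 5.0; truth[40] = 3.0
data = np.convolve(truth, psf, mode="same")          # noiseless blurred observation
print(richardson_lucy(data, psf).round(2)[18:23])    # peak recovered near index 20
```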
Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sales, J. S.; Silva, L. F. da; Almeida, N. G. de
2011-03-15
We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.
Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity
NASA Astrophysics Data System (ADS)
Sales, J. S.; da Silva, L. F.; de Almeida, N. G.
2011-03-01
We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.
Loi, Gianfranco; Dominietto, Marco; Manfredda, Irene; Mones, Eleonora; Carriero, Alessandro; Inglese, Eugenio; Krengli, Marco; Brambilla, Marco
2008-09-01
This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two couples, each composed of one radiotherapist and one physicist, by means of superposition of anatomic landmarks. Each couple performed the registration jointly and saved it. The two solutions were averaged to obtain the gold-standard registration. A new set of estimators was defined to identify translation and rotation errors along the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI registration and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and registration errors determined. The LC algorithm proved accurate in CT-MRI registrations in phantoms, but exceeded limiting values in 3 of 10 patients. The MI algorithm proved accurate in CT-MRI and CT-SPECT registrations in phantoms; limiting values were exceeded in one case in CT-MRI and never reached in CT-SPECT registrations. Thus, the evaluation of robustness was restricted to the MI algorithm for both CT-MRI and CT-SPECT registrations. The MI algorithm proved to be robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10 degrees, and roto-translational perturbations up to 3 cm and 5 degrees.
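Mutual-information criteria of this kind are computed from a joint intensity histogram of the two images. A minimal sketch of normalized mutual information, NMI = (H(A) + H(B)) / H(A, B), which may differ in detail from Syntegra's internal definition:

```python
# Histogram-based normalized mutual information between two images.
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                    # joint probability table
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0) # marginals
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (entropy(p_a) + entropy(p_b)) / entropy(p_ab.ravel())

rng = np.random.default_rng(3)
a = rng.random((64, 64))
print(normalized_mutual_information(a, a))                     # aligned: maximal
print(normalized_mutual_information(a, rng.random((64, 64))))  # unrelated: ~1
```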
Synthetic Incoherence via Scanned Gaussian Beams
Levine, Zachary H.
2006-01-01
Tomography, in most formulations, requires an incoherent signal. For a conventional transmission electron microscope, the coherence of the beam often results in diffraction effects that limit the ability to perform a 3D reconstruction from a tilt series with conventional tomographic reconstruction algorithms. In this paper, an analytic solution is given to a scanned Gaussian beam, which reduces the beam coherence to be effectively incoherent for medium-size (of order 100 voxels thick) tomographic applications. The scanned Gaussian beam leads to more incoherence than hollow-cone illumination. PMID:27274945
Design of a modulated orthovoltage stereotactic radiosurgery system.
Fagerstrom, Jessica M; Bender, Edward T; Lawless, Michael J; Culberson, Wesley S
2017-07-01
To achieve stereotactic radiosurgery (SRS) dose distributions with sharp gradients using orthovoltage energy fluence modulation with inverse planning optimization techniques. A pencil beam model was used to calculate dose distributions from an orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods. A Genetic Algorithm search heuristic was used to optimize the spatial distribution of added tungsten filtration to achieve dose distributions with sharp dose gradients. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 5, 6, 8, and 10 mm. In addition to the beam profiles, 4π isocentric irradiation geometries were modeled to examine dose at 0.07 mm depth, a representative skin depth, for the low energy beams. Profiles from 4π irradiations of a constant target volume, assuming maximally conformal coverage, were compared. Finally, dose deposition in bone compared to tissue in this energy range was examined. Based on the results of the optimization, circularly symmetric tungsten filters were designed to modulate the orthovoltage beam across the apertures of SRS cone collimators. For each depth and cone size combination examined, the beam flatness and 80-20% and 90-10% penumbrae were calculated for both standard, open cone-collimated beams as well as for optimized, filtered beams. For all configurations tested, the modulated beam profiles had decreased penumbra widths and flatness statistics at depth. Profiles for the optimized, filtered orthovoltage beams also offered decreases in these metrics compared to measured linear accelerator cone-based SRS profiles. The dose at 0.07 mm depth in the 4π isocentric irradiation geometries was higher for the modulated beams compared to unmodulated beams; however, the modulated dose at 0.07 mm depth remained <0.025% of the central, maximum dose. The 4π profiles irradiating a constant target volume showed improved statistics for the modulated, filtered distribution compared to the standard, open cone-collimated distribution. Simulations of tissue and bone confirmed previously published results that a higher energy beam (≥ 200 keV) would be preferable, but the 250 kVp beam was chosen for this work because it is available for future measurements. A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions with decreased flatness and penumbra statistics compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system. © 2017 American Association of Physicists in Medicine.
GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.
Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd
2018-01-01
In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
Coherent superposition of propagation-invariant laser beams
NASA Astrophysics Data System (ADS)
Soskind, R.; Soskind, M.; Soskind, Y. G.
2012-10-01
The coherent superposition of propagation-invariant laser beams represents an important beam-shaping technique, and results in new beam shapes which retain the unique property of propagation invariance. Propagation-invariant laser beam shapes depend on the order of the propagating beam, and include Hermite-Gaussian and Laguerre-Gaussian beams, as well as the recently introduced Ince-Gaussian beams which additionally depend on the beam ellipticity parameter. While the superposition of Hermite-Gaussian and Laguerre-Gaussian beams has been discussed in the past, the coherent superposition of Ince-Gaussian laser beams has not received significant attention in literature. In this paper, we present the formation of propagation-invariant laser beams based on the coherent superposition of Hermite-Gaussian, Laguerre-Gaussian, and Ince-Gaussian beams of different orders. We also show the resulting field distributions of the superimposed Ince-Gaussian laser beams as a function of the ellipticity parameter. By changing the beam ellipticity parameter, we compare the various shapes of the superimposed propagation-invariant laser beams transitioning from Laguerre-Gaussian beams at one ellipticity extreme to Hermite-Gaussian beams at the other extreme.
Bäcklund transformations for the Boussinesq equation and merging solitons
NASA Astrophysics Data System (ADS)
Rasin, Alexander G.; Schiff, Jeremy
2017-08-01
The Bäcklund transformation (BT) for the ‘good’ Boussinesq equation and its superposition principles are presented and applied. Unlike other standard integrable equations, the Boussinesq equation does not have a strictly algebraic superposition principle for 2 BTs, but it does for 3. We present this and discuss associated lattice systems. Applying the BT to the trivial solution generates both standard solitons and what we call ‘merging solitons’—solutions in which two solitary waves (with related speeds) merge into a single one. We use the superposition principles to generate a variety of interesting solutions, including superpositions of a merging soliton with 1 or 2 regular solitons, and solutions that develop a singularity in finite time which then disappears at a later finite time. We prove a Wronskian formula for the solutions obtained by applying a general sequence of BTs on the trivial solution. Finally, we obtain the standard conserved quantities of the Boussinesq equation from the BT, and show how the hierarchy of local symmetries follows in a simple manner from the superposition principle for 3 BTs.
A modified homotopy perturbation method and the axial secular frequencies of a non-linear ion trap.
Doroudi, Alireza
2012-01-01
In this paper, a modified version of the homotopy perturbation method, which has been applied to non-linear oscillations by V. Marinca, is used for the calculation of the axial secular frequencies of a non-linear ion trap with hexapole and octopole superpositions. The axial equation of ion motion in a rapidly oscillating field of an ion trap can be transformed to a Duffing-like equation. With only octopole superposition the resulting non-linear equation is symmetric; however, in the presence of hexapole and octopole superpositions, it is asymmetric. This modified homotopy perturbation method is used for solving the resulting non-linear equations. As a result, the ion secular frequencies as a function of the non-linear field parameters are obtained. The calculated secular frequencies are compared with the results of the homotopy perturbation method and the exact results. With only hexapole superposition, the results of this paper and the homotopy perturbation method are the same, and with hexapole and octopole superpositions, the results of this paper are much closer to the exact results than those of the homotopy perturbation method.
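In schematic form (the coefficient notation here is ours, not the paper's), the transformed axial equation is a Duffing-like oscillator in which the hexapole superposition contributes the quadratic, symmetry-breaking term and the octopole superposition the cubic term:

```latex
\ddot{z} + \omega_{0}^{2}\, z + \alpha_{3}\, z^{2} + \alpha_{4}\, z^{3} = 0,
\qquad \alpha_{3} \propto \text{hexapole strength}, \quad
\alpha_{4} \propto \text{octopole strength}
```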
Phylogenetic Copy-Number Factorization of Multiple Tumor Samples.
Zaccaria, Simone; El-Kebir, Mohammed; Klau, Gunnar W; Raphael, Benjamin J
2018-04-16
Cancer is an evolutionary process driven by somatic mutations. This process can be represented as a phylogenetic tree. Constructing such a phylogenetic tree from genome sequencing data is a challenging task due to the many types of mutations in cancer and the fact that nearly all cancer sequencing is of a bulk tumor, measuring a superposition of somatic mutations present in different cells. We study the problem of reconstructing tumor phylogenies from copy-number aberrations (CNAs) measured in bulk-sequencing data. We introduce the Copy-Number Tree Mixture Deconvolution (CNTMD) problem, which aims to find the phylogenetic tree with the fewest CNAs that explain the copy-number data from multiple samples of a tumor. We design an algorithm for solving the CNTMD problem and apply the algorithm to both simulated and real data. On simulated data, we find that our algorithm outperforms existing approaches that either perform deconvolution/factorization of mixed tumor samples or build phylogenetic trees assuming homogeneous tumor samples. On real data, we analyze multiple samples from a prostate cancer patient, identifying clones within these samples and a phylogenetic tree that relates these clones and their differing proportions across samples. This phylogenetic tree provides a higher resolution view of copy-number evolution of this cancer than published analyses.
Demonstration of two-qubit algorithms with a superconducting quantum processor.
DiCarlo, L; Chow, J M; Gambetta, J M; Bishop, Lev S; Johnson, B R; Schuster, D I; Majer, J; Blais, A; Frunzio, L; Girvin, S M; Schoelkopf, R J
2009-07-09
Quantum computers, which harness the superposition and entanglement of physical states, could outperform their classical counterparts in solving problems with technological impact, such as factoring large numbers and searching databases. A quantum processor executes algorithms by applying a programmable sequence of gates to an initialized register of qubits, which coherently evolves into a final state containing the result of the computation. Building a quantum processor is challenging because of the need to simultaneously meet requirements that are in conflict: state preparation, long coherence times, universal gate operations and qubit readout. Processors based on a few qubits have been demonstrated using nuclear magnetic resonance, cold ion trap and optical systems, but a solid-state realization has remained an outstanding challenge. Here we demonstrate a two-qubit superconducting processor and the implementation of the Grover search and Deutsch-Jozsa quantum algorithms. We use a two-qubit interaction, tunable in strength by two orders of magnitude on nanosecond timescales, which is mediated by a cavity bus in a circuit quantum electrodynamics architecture. This interaction allows the generation of highly entangled states with concurrence up to 94 per cent. Although this processor constitutes an important step in quantum computing with integrated circuits, continuing efforts to increase qubit coherence times, gate performance and register size will be required to fulfil the promise of a scalable technology.
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.
NASA Astrophysics Data System (ADS)
Kirchbach, M.; Compean, C. B.
2017-04-01
In the article under discussion, the analysis of the spectra of the unflavored mesons led us to some intriguing insights into the possible geometry of space-time outside the causal Minkowski light cone and into the nature of strong interactions. In applying the potential-theory concept of geometrization of interactions, we showed that the meson masses are best described by a confining potential composed of the centrifugal barrier on the three-dimensional spherical space, S3, and a charge-dipole potential constructed from the Green function to the S3 Laplacian. The dipole potential emerged in view of the fact that S3 does not support single charges without violation of the Gauss theorem and the superposition principle, thus providing a natural stage for the description of the general phenomenon of confined charge-neutral systems. However, in the original article we did not relate the charge-dipoles on S3 to the color-neutral mesons, and did not express the magnitude of the confining dipole potential in terms of the strong coupling αS and the number of colors, Nc, the subject of the addendum. Insofar as S3 can be thought of as the unique closed space-like geodesic of a four-dimensional de Sitter space-time, dS4, we hypothesized the space-like region outside the causal Einsteinian light cone (it describes virtual processes, among them interactions) as the (1+4)-dimensional subspace of the conformal (2+4) space-time, foliated with dS4 hyperboloids, and in this way assumed relevance of dS4 special relativity for strong interaction processes. The potential designed in this way predicted meson spectra with conformal degeneracy patterns, in accord with the experimental observations. We now extract the αs values in the infrared from data on meson masses. The results obtained are compatible with the αs estimates provided by other approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S; Rangaraj, D
2016-06-15
Purpose: Although cone-beam CT (CBCT) imaging has become popular in radiation oncology, its imaging dose estimation is still challenging. The goal of this study is to assess kilovoltage CBCT doses using GMctdospp - an EGSnrc based Monte Carlo (MC) framework. Methods: Two Varian OBI x-ray tube models were implemented in the GMctdospp framework of the EGSnrc MC system. The x-ray spectrum of the 125 kVp CBCT beam was acquired from an EGSnrc/BEAMnrc simulation and validated against IPEM report 78. The spectrum was then utilized as an input spectrum in GMctdospp dose calculations. Both full and half bowtie pre-filters of the OBI system were created using the egs-prism module. The x-ray tube MC models were verified by comparing calculated dosimetric profiles (lateral and depth) to ion chamber measurements for a static x-ray beam irradiation of a cuboid water phantom. Abdominal CBCT imaging doses were simulated in the GMctdospp framework using a 5-year-old anthropomorphic phantom. The organ doses and effective dose (ED) from the framework were assessed and compared to MOSFET measurements and convolution/superposition (CS) dose calculations. Results: The lateral and depth dose profiles in the cuboid water phantom matched within 6% except in a few areas - the left shoulder of the half bowtie lateral profile and the surface of the water phantom. The organ doses and ED from the MC framework were found to be close to the MOSFET measurements and CS calculations, within 2 cGy and 5 mSv, respectively. Conclusion: This study implemented and validated Varian OBI x-ray tube models in the GMctdospp MC framework using a cuboid water phantom, and CBCT imaging doses were also evaluated in a 5-year-old anthropomorphic phantom. In a future study, various CBCT imaging protocols will be implemented and validated, and patient CT images will consequently be used to estimate CBCT imaging doses in patients.
NASA Astrophysics Data System (ADS)
Chang, Li-Na; Luo, Shun-Long; Sun, Yuan
2017-11-01
The principle of superposition is universal and lies at the heart of quantum theory. Although superposition has occupied a central and pivotal place ever since the inception of quantum mechanics a century ago, rigorous and systematic studies of the quantification issue have attracted significant interest only in recent years, and many related problems remain to be investigated. In this work we introduce a figure of merit which quantifies superposition from an intuitive and direct perspective, investigate its fundamental properties, connect it to some coherence measures, illustrate it through several examples, and apply it to analyze wave-particle duality. Supported by the Science Challenge Project under Grant No. TZ2016002; the Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing; and the Key Laboratory of Random Complex Structures and Data Science, Chinese Academy of Sciences, under Grant No. 2008DP173182.
Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I
2017-08-15
Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provide an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm was shown to be one of the most promising of these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second-order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.
Rudolph, G; Bechmann, M; Berninger, T; Kutschbach, E; Held, U; Tornow, R P; Kalpadakis, P; Zol'nikova, I V; Shamshinova, A M
2001-01-01
A new method of multifocal electroretinography making use of a scanning laser ophthalmoscope with a wavelength of 630 nm (SLO-m-ERG), evoking short spatial visual stimuli on the retina, is proposed. The algorithm for presenting the visual stimuli and for analyzing the distribution of local electroretinograms across the surface of the retina is based on short m-sequences. Mathematical cross-correlation analysis shows a three-dimensional distribution of the bioelectrical activity of the retina in the central visual field. In normal subjects the cone bioelectrical activity is maximal in the macular area (corresponding to the density of cone distribution) and absent in the blind spot. The method detects the slightest pathological changes in the retina under control of the site of stimulation and the ophthalmoscopic picture of the fundus oculi. The site of the pathological process correlates with the topography of changes in the bioelectrical activity of the examined retinal area in diseases of the macular area and in retinitis pigmentosa detectable by ophthalmoscopy.
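Short m-sequences of the kind used for the stimulus presentation are generated by a maximal-length linear-feedback shift register (LFSR). A minimal sketch follows; the taps shown are one standard maximal-length choice for a 7-bit register, not necessarily those used by the SLO-m-ERG system.

```python
# Maximal-length binary m-sequence from a Fibonacci LFSR.
def m_sequence(n_bits=7, taps=(7, 6)):
    """Return one full period (2**n_bits - 1) of a binary m-sequence.
    taps (7, 6) correspond to the primitive polynomial x^7 + x^6 + 1."""
    state = [1] * n_bits                  # any nonzero seed works
    out = []
    for _ in range(2**n_bits - 1):
        out.append(state[-1])             # output the last stage
        feedback = 0
        for t in taps:                    # XOR of the tapped stages
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]   # shift register by one
    return out

seq = m_sequence()
print(len(seq), sum(seq))                 # period 127; ones outnumber zeros by 1
```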
NASA Technical Reports Server (NTRS)
Mielke, Roland; Dcunha, Ivan; Alvertos, Nicolas
1994-01-01
In the final phase of the proposed research, a complete top-down three-dimensional object recognition scheme has been proposed. The three-dimensional objects considered include spheres, cones, cylinders, ellipsoids, paraboloids, and hyperboloids. Utilizing a newly developed blob determination technique, a given range scene with several non-cluttered quadric surfaces is segmented. Next, using the alignment scheme developed earlier (phase 1), each of the segmented objects is aligned in a desired coordinate system. For each of the quadric surfaces, a set of distinct features (curves) is obtained based upon their intersections with certain pre-determined planes. A database with entities such as the equations of the planes and the angular bounds of these planes has been created for each of the quadric surfaces. Real range data of spheres, cones, cylinders, and parallelepipeds have been utilized for the recognition process. The developed algorithm gave excellent results for the real data as well as for several sets of simulated range data.
Boulanger, Pierre; Flores-Mir, Carlos; Ramirez, Juan F; Mesa, Elizabeth; Branch, John W
2009-01-01
The measurements from registered images obtained from Cone Beam Computed Tomography (CBCT) and a photogrammetric sensor are used to track three-dimensional shape variations of orthodontic patients before and after their treatments. The methodology consists of five main steps: (1) the patient's bone and skin shapes are measured in 3D using the fusion of images from a CBCT and a photogrammetric sensor. (2) The bone shape is extracted from the CBCT data using a standard marching cubes algorithm. (3) The bone and skin shape measurements are registered using titanium targets located on the head of the patient. (4) Using a manual segmentation technique the head and lower jaw geometry are extracted separately to deal with jaw motion at the different record visits. (5) Using natural features of the upper head the two datasets are then registered with each other and then compared to evaluate bone, teeth, and skin displacements before and after treatments. This procedure is now used at the University of Alberta orthodontic clinic.
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2016-01-01
In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods and greatly increase the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853
Finite-Length Line Source Superposition Model (FLLSSM)
NASA Astrophysics Data System (ADS)
1980-03-01
A linearized thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high-level waste or spent fuel assemblies were represented as finite-length line sources in a continuous medium. The combined effects of multiple canisters in a representative storage pattern were established at selected points of interest by superposition of the temperature rises calculated for each canister. The methodology is outlined and the computer code FLLSSM, which performs the required numerical integrations and superposition operations, is described.
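The superposition idea can be sketched directly from the continuous point-source solution for an infinite medium, ΔT = q / (4πkr) · erfc(r / (2√(αt))), integrated numerically along each canister axis and summed over canisters. All thermal properties and the storage layout below are illustrative values, not FLLSSM inputs.

```python
# Finite-length line source superposition: midpoint integration of the
# continuous point-source solution along each canister, summed over canisters.
import numpy as np
from math import erfc, pi, sqrt

k, alpha = 2.5, 1.1e-6           # W/(m K) and m^2/s for a generic rock (assumed)
q_per_m = 300.0                  # heat load per unit canister length, W/m (assumed)
length, n_seg = 3.0, 60          # canister length (m) and integration segments

def line_source_dT(point, base, t):
    """Temperature rise at `point` after time t from one vertical canister
    whose lower end is at `base`."""
    seg = length / n_seg
    z = base[2] + (np.arange(n_seg) + 0.5) * seg
    r = np.sqrt((point[0] - base[0])**2 + (point[1] - base[1])**2
                + (point[2] - z)**2)
    return sum(q_per_m / (4 * pi * k * ri) * erfc(ri / (2 * sqrt(alpha * t)))
               for ri in r) * seg

t = 10 * 365.25 * 24 * 3600.0    # ten years of heating
bases = [(x, y, 0.0) for x in (0.0, 10.0) for y in (0.0, 10.0)]  # 4-canister grid
point = (5.0, 5.0, 1.5)          # observation point amid the array
print(sum(line_source_dT(point, b, t) for b in bases))  # superposed rise, K
```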
NASA Astrophysics Data System (ADS)
Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.
2015-06-01
The majority of practical acoustics problems require solving boundary-value problems in non-canonical domains. The construction of analytical solutions of mathematical physics boundary problems for non-canonical domains is therefore both valuable from the academic viewpoint and very instrumental for the elaboration of efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solution ideologies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems with domains that can be constructed as the union of canonically-shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. However, this approach implies some difficulties in the construction of calculation algorithms, insofar as the boundary conditions are incompletely defined on the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature, and identify the optimal methods to overcome them.
FoldMiner and LOCK 2: protein structure comparison and motif discovery on the web.
Shapiro, Jessica; Brutlag, Douglas
2004-07-01
The FoldMiner web server (http://foldminer.stanford.edu/) provides remote access to methods for protein structure alignment and unsupervised motif discovery. FoldMiner is unique among such algorithms in that it improves both the motif definition and the sensitivity of a structural similarity search by combining the search and motif discovery methods and using information from each process to enhance the other. In a typical run, a query structure is aligned to all structures in one of several databases of single domain targets in order to identify its structural neighbors and to discover a motif that is the basis for the similarity among the query and statistically significant targets. This process is fully automated, but options for manual refinement of the results are available as well. The server uses the Chime plugin and customized controls to allow for visualization of the motif and of structural superpositions. In addition, we provide an interface to the LOCK 2 algorithm for rapid alignments of a query structure to smaller numbers of user-specified targets.
Ultrasound-based measurement of liquid-layer thickness: A novel time-domain approach
NASA Astrophysics Data System (ADS)
Praher, Bernhard; Steinbichler, Georg
2017-01-01
Measuring the thickness of a thin liquid layer between two solid materials is important when the adequate separation of metallic parts by a lubricant film (e.g., in bearings or mechanical seals) is to be assessed. The challenge in using ultrasound-based systems for such measurements is that the signal from the liquid layer is a superposition of multiple reflections. We have developed an algorithm for reconstructing this superimposed signal in the time domain. By comparing simulated and measured signals, the time-of-flight of the ultrasonic pulse in a layer can be estimated. With the longitudinal sound velocity known, the layer thickness can then be calculated. In laboratory measurements, we successfully validate our algorithm (maximum relative error 4.9%) for layer thicknesses ranging from 30 μm to 200 μm. Furthermore, we tested our method in the high-temperature environment of polymer processing by measuring the clearance between screw and barrel in the plasticisation unit of an injection moulding machine. The results of such measurements can indicate (i) the wear status of the tribo-mechanical screw-barrel system and (ii) unsuitable process conditions.
NASA Astrophysics Data System (ADS)
Ellerman, David
2014-03-01
In models of QM over finite fields (e.g., Schumacher's ``modal quantum theory'' MQT), one finite field stands out, Z2, since Z2 vectors represent sets. QM (finite-dimensional) mathematics can be transported to sets resulting in quantum mechanics over sets or QM/sets. This gives a full probability calculus (unlike MQT with only zero-one modalities) that leads to a fulsome theory of QM/sets including ``logical'' models of the double-slit experiment, Bell's Theorem, QIT, and QC. In QC over Z2 (where gates are non-singular matrices as in MQT), a simple quantum algorithm (one gate plus one function evaluation) solves the Parity SAT problem (finding the parity of the sum of all values of an n-ary Boolean function). Classically, the Parity SAT problem requires 2^n function evaluations in contrast to the one function evaluation required in the quantum algorithm. This is quantum speedup but with all the calculations over Z2 just like classical computing. This shows definitively that the source of quantum speedup is not in the greater power of computing over the complex numbers, and confirms the idea that the source is in superposition.
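For contrast, the classical baseline is easy to state: the parity of the sum of an n-ary Boolean function's values requires evaluating f on all 2^n inputs, as in the sketch below, whereas the quantum algorithm over Z2 described above uses a single function evaluation.

```python
# Classical brute-force Parity SAT: 2**n evaluations of the Boolean function.
from itertools import product

def parity_sat(f, n):
    """Parity (mod 2) of the sum of f over all 2**n Boolean inputs."""
    return sum(f(bits) for bits in product((0, 1), repeat=n)) % 2

f = lambda bits: bits[0] ^ (bits[1] & bits[2])   # example 3-ary function
print(parity_sat(f, 3))
```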
Resolving coiled shapes reveals new reorientation behaviors in C. elegans
Broekmans, Onno D; Rodgers, Jarlath B; Ryu, William S; Stephens, Greg J
2016-01-01
We exploit the reduced space of C. elegans postures to develop a novel tracking algorithm which captures both simple shapes and also self-occluding coils, an important, yet unexplored, component of 2D worm behavior. We apply our algorithm to show that visually complex, coiled sequences are a superposition of two simpler patterns: the body wave dynamics and a head-curvature pulse. We demonstrate the precise Ω-turn dynamics of an escape response and uncover a surprising new dichotomy in spontaneous, large-amplitude coils; deep reorientations occur not only through classical Ω-shaped postures but also through larger postural excitations which we label here as δ-turns. We find that Ω- and δ-turns occur independently, suggesting a distinct triggering mechanism, and are the serpentine analog of a random left-right step. Finally, we show that Ω- and δ-turns occur with approximately equal rates and adapt to food-free conditions on a similar timescale, a simple strategy to avoid navigational bias. DOI: http://dx.doi.org/10.7554/eLife.17227.001 PMID:27644113
NASA Astrophysics Data System (ADS)
Sun, Hao-yu; Cui, Zhiwei; Wang, Jiajie; Han, Yiping; Sun, Peng; Shi, Xiaowei
2018-06-01
A numerical analysis of electromagnetic (EM) scattering characteristics of a hypersonic aerocraft enveloped by a plasma sheath is presented. The flow field parameters around a hypersonic aerocraft are derived by numerically solving the Navier-Stokes equations. Through multiphysics coupling of flow field and electromagnetic field, distributions of plasma frequency and collision frequency in plasma sheaths are obtained. A high-order auxiliary differential equation finite-difference time-domain algorithm is employed to investigate the EM wave scattering properties of the aerocraft covered by a plasma sheath. The backward radar cross sections (RCSs) of a blunt cone in the hypersonic flows at different velocities and altitudes with frequencies from 0.1 GHz to 18 GHz are studied. Numerical results show that, for the cases of altitude ranging from 50 km to 55 km and velocity ranging from 18 Ma to 20 Ma, the plasma sheath enhances the backscattering of the blunt cone when frequencies are below 1.5 GHz, and it reduces the backward RCSs of the blunt cone as frequency ranges from 1.5 GHz to 13.5 GHz. The plasma sheath has a larger attenuation effect for frequency lying in the range of 2 GHz to 6 GHz, but it has little influence on the backward electromagnetic scattering characteristics when frequencies are above 14 GHz.
NASA Astrophysics Data System (ADS)
Zhao, Huayong; Williams, Ben; Stone, Richard
2014-01-01
A new low-cost optical diagnostic technique, called Cone Beam Tomographic Three Colour Spectrometry (CBT-TCS), has been developed to measure the planar distributions of temperature, soot particle size, and soot volume fraction in a co-flow axi-symmetric laminar diffusion flame. The image of a flame is recorded by a colour camera, and then by using colour interpolation and applying a cone beam tomography algorithm, a colour map can be reconstructed that corresponds to a diametral plane. Look-up tables calculated using Planck's law and different scattering models are then employed to deduce the temperature, approximate average soot particle size and soot volume fraction in each voxel (volumetric pixel). A sensitivity analysis of the look-up tables shows that the results have a high temperature resolution but a relatively low soot particle size resolution. The assumptions underlying the technique are discussed in detail. Sample data from an ethylene laminar diffusion flame are compared with data in the literature for similar flames. The comparison shows very consistent temperature and soot volume fraction profiles. Further analysis indicates that the differences seen in comparison with published results are within the measurement uncertainties. This methodology is ready to be applied to measure 3D data by capturing multiple flame images from different angles for non-axisymmetric flames.
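The temperature look-up rests on the fact that, by Planck's law, the ratio of emission at two of the camera's colour bands is monotonic in temperature and can therefore be inverted. A bare-bones sketch follows; the band wavelengths and measured ratio are illustrative, and the actual tables also fold in camera response and soot scattering models.

```python
# Two-colour temperature inversion from the Planck band ratio, by bisection.
import math

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Spectral radiance B(lam, T) = (2 h c^2 / lam^5) / (exp(hc/(lam kB T)) - 1)."""
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

def temperature_from_ratio(ratio, lam1=650e-9, lam2=500e-9):
    lo, hi = 800.0, 3500.0                     # physically plausible bracket (K)
    for _ in range(60):                        # bisection on the band ratio
        mid = 0.5 * (lo + hi)
        if planck(lam1, mid) / planck(lam2, mid) > ratio:
            lo = mid                           # ratio B(650)/B(500) falls with T
        else:
            hi = mid
    return 0.5 * (lo + hi)

true_T = 1800.0
r = planck(650e-9, true_T) / planck(500e-9, true_T)
print(round(temperature_from_ratio(r), 1))     # recovers ~1800.0 K
```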
A model-based spike sorting algorithm for removing correlation artifacts in multi-neuron recordings.
Pillow, Jonathan W; Shlens, Jonathon; Chichilnisky, E J; Simoncelli, Eero P
2013-01-01
We examine the problem of estimating the spike trains of multiple neurons from voltage traces recorded on one or more extracellular electrodes. Traditional spike-sorting methods rely on thresholding or clustering of recorded signals to identify spikes. While these methods can detect a large fraction of the spikes from a recording, they generally fail to identify synchronous or near-synchronous spikes: cases in which multiple spikes overlap. Here we investigate the geometry of failures in traditional sorting algorithms, and document the prevalence of such errors in multi-electrode recordings from primate retina. We then develop a method for multi-neuron spike sorting using a model that explicitly accounts for the superposition of spike waveforms. We model the recorded voltage traces as a linear combination of spike waveforms plus a stochastic background component of correlated Gaussian noise. Combining this measurement model with a Bernoulli prior over binary spike trains yields a posterior distribution for spikes given the recorded data. We introduce a greedy algorithm to maximize this posterior that we call "binary pursuit". The algorithm allows modest variability in spike waveforms and recovers spike times with higher precision than the voltage sampling rate. This method substantially corrects cross-correlation artifacts that arise with conventional methods, and substantially outperforms clustering methods on both real and simulated data. Finally, we develop diagnostic tools that can be used to assess errors in spike sorting in the absence of ground truth.
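The greedy structure of binary pursuit can be illustrated with a toy matching-pursuit loop: repeatedly insert the (template, shift) pair that most reduces the squared error of the residual trace, with binary (present/absent) coefficients. The Bernoulli prior term and correlated-noise whitening of the actual method are omitted in this sketch.

```python
# Toy greedy spike sorting with overlapping waveforms (binary coefficients).
import numpy as np

rng = np.random.default_rng(4)
T, L = 400, 20
templates = [np.sin(np.linspace(0, np.pi, L)),            # two toy waveforms
             -np.hanning(L)]
trace = rng.normal(0, 0.05, T)
for unit, t0 in [(0, 50), (1, 53), (0, 200)]:             # includes an overlap
    trace[t0:t0 + L] += templates[unit]

found, residual = [], trace.copy()
for _ in range(10):
    best = None
    for u, w in enumerate(templates):
        # Squared-error improvement of inserting w at each shift:
        # 2 r.w - |w|^2 > 0, i.e. gain = r.w - 0.5 |w|^2.
        corr = np.correlate(residual, w, mode="valid")
        gains = corr - 0.5 * np.dot(w, w)
        s = int(np.argmax(gains))
        if best is None or gains[s] > best[0]:
            best = (gains[s], u, s)
    if best[0] <= 0:                                      # no spike improves fit
        break
    _, u, s = best
    residual[s:s + L] -= templates[u]                     # explain away the spike
    found.append((u, s))
print(sorted(found))
```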
NASA Astrophysics Data System (ADS)
Xiong, Yan; Chen, Yang; Sun, Zhiyong; Hao, Lina; Dong, Jie
2014-07-01
Ionic polymer metal composites (IPMCs) are a type of electroactive polymer (EAP) that can be used as both sensors and actuators. IPMCs have enormous potential applications in the fields of biomimetic robotics, medical devices, and so on. However, an IPMC actuator has a number of disadvantages, such as creep and time-variation, making it vulnerable to external disturbances. In addition, the complex actuation mechanism makes it difficult to model, and the required control algorithm is laborious to implement. In this paper, we obtain a creep model of the IPMC by means of model identification based on the method of linear superposition of creep operators. Although the mathematical model only approximates the true behaviour of the IPMC, it is accurate enough to be used in MATLAB to validate the control algorithm. A controller based on the active disturbance rejection control (ADRC) method is designed to overcome the drawbacks given previously. Because the ADRC controller is separate from the mathematical model of the controlled plant, the control algorithm has the ability to perform disturbance estimation and compensation. Factors such as external disturbances, uncertainty, the inaccuracy of the identified model, and differences between kinds of IPMCs have little effect on controlling the output block force of the IPMC. Furthermore, we use the particle swarm optimization algorithm to tune the ADRC parameters so that the IPMC actuator can approach the desired block force under unknown external disturbances. Simulations and experimental examples validate the effectiveness of the ADRC controller.
NASA Astrophysics Data System (ADS)
Huang, Xiaokun; Zhang, You; Wang, Jing
2017-03-01
Four-dimensional (4D) cone-beam computed tomography (CBCT) enables motion tracking of anatomical structures and removes artifacts introduced by motion. However, the imaging time/dose of 4D-CBCT is substantially longer/higher than traditional 3D-CBCT. We previously developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm to reconstruct high-quality 4D-CBCT from a limited number of projections, reducing the imaging time/dose. However, the accuracy of SMEIR is limited in reconstructing low-contrast regions with fine structure details. In this study, we incorporate biomechanical modeling into the SMEIR algorithm (SMEIR-Bio) to improve the reconstruction accuracy at low-contrast regions with fine details. The efficacy of SMEIR-Bio is evaluated using 11 lung patient cases and compared to that of the original SMEIR algorithm. Qualitative and quantitative comparisons showed that SMEIR-Bio greatly enhances the accuracy of the reconstructed 4D-CBCT volume in low-contrast regions, which can potentially benefit multiple clinical applications including treatment outcome analysis.
A hybrid multiview stereo algorithm for modeling urban scenes.
Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep
2013-01-01
We present an original multiview stereo reconstruction algorithm which allows the 3D-modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: Irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first in segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model and second in sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.
Towards quantum superposition of a levitated nanodiamond with a NV center
NASA Astrophysics Data System (ADS)
Li, Tongcang
2015-05-01
Creating large Schrödinger's cat states with massive objects is one of the most challenging goals in quantum mechanics. We have previously achieved an important step of this goal by cooling the center-of-mass motion of a levitated microsphere from room temperature to millikelvin temperatures with feedback cooling. To generate spatial quantum superposition states with an optical cavity, however, requires a very strong quadratic coupling that is difficult to achieve. We proposed to optically trap a nanodiamond with a nitrogen-vacancy (NV) center in vacuum, and generate large spatial superposition states using the NV spin-optomechanical coupling in a strong magnetic gradient field. The large spatial superposition states can be used to study objective collapse theories of quantum mechanics. We have optically trapped nanodiamonds in air and are working towards this goal.
NASA Astrophysics Data System (ADS)
Ajo, Ramzi, Jr.
Modern treatment planning systems (TPS's) utilize different algorithms to compute dose within the patient medium. The algorithms rely on properly modeled clinical setups in order to perform optimally. Aside from various parameters of the beam, modifiers, such as multileaf collimators (MLC's), must also be modeled properly. This is especially true today, when dynamic delivery techniques such as intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) are increasingly utilized due to their ability to deliver higher dose precisely to the target while sparing more of the surrounding normal tissue. Two of the most popular TPS's, Pinnacle (Philips) and Eclipse (Varian), were compared, with special emphasis placed on parameterization of the dosimetric leaf gap (DLG) in Eclipse. The DLG is a parameter that accounts for Varian's rounded MLC leaf ends. While Pinnacle accounts for the rounded leaf end by modeling the MLC's, Eclipse uses a measured parameter. This study investigated whether a single measured DLG value is sufficient for dynamic delivery. Using five planning volumes for vertebral body SBRT treatments, each prescribed 3000 cGy in 5 fractions, an array of 20 treatment plans was generated using energies of 6 MV-FFF and 10 MV-FFF. Treatment techniques consisted of 9-field step-and-shoot IMRT and dual-arc VMAT using patient-specific optimization criteria in the Pinnacle TPS v9.8. Each plan was normalized to ensure coverage of 3000 cGy to 95% of the target volume. The dose was computed in Pinnacle v9.8 with the Collapsed Cone Convolution Superposition algorithm and in Eclipse v11 with the Acuros XB algorithm, using a dose grid resolution of 2 mm in both systems. Dose volume histograms (DVH's) were generated for a comparison of the max and mean dose to the targets and spinal cord, as well as 95% coverage of the targets and the volume of the spinal cord receiving 14.5 Gy (V14.5). Patient-specific quality assurance (PSQA) fields were generated and then delivered, using a Varian Edge linear accelerator, to a 4D QA phantom for a gamma analysis and distance-to-agreement (DTA) comparison. All Eclipse calculations were made for both measured and optimized DLG parameters. Calculated vs. measured point dose for the Pinnacle TPS had an average difference of 2.79 +/- 2.00%. Gamma analysis using 3% and 3 mm DTA had 99/100 fields passing at > 95%. Using measured values of the DLG in Eclipse, calculated vs. measured point dose differed by -4.44 +/- 1.97%, and DTA had 33/110 fields passing at > 95%. After an optimization of the DLG in Eclipse, calculated vs. measured point dose had an average difference of 2.20 +/- 2.23%, and DTA had 95/110 fields passing at > 95%. This study examined the performance of the Pinnacle and Eclipse TPS's, with special consideration given to the DLG parameterization used by Eclipse. The results support the idea that a single-valued DLG is not sufficient for dynamic delivery. An optimization of the parameter is necessary to account for the high modulation of IMRT and VMAT techniques.
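A 1D sketch of the gamma analysis used for such PSQA comparisons: for each reference point, gamma is the minimum over the evaluated profile of the combined dose-difference/distance metric, and the point passes when gamma ≤ 1. This is a bare global-normalization illustration, not a clinical implementation.

```python
# 1D gamma index (e.g., 3% / 3 mm) between a reference and evaluated profile.
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
    """dd: dose tolerance as a fraction of the global max; dta: distance (mm)."""
    d_norm = dd * d_ref.max()                     # global normalization choice
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        term = np.sqrt(((x_eval - xr) / dta)**2 + ((d_eval - dr) / d_norm)**2)
        gammas.append(term.min())                 # best match over the profile
    return np.array(gammas)

x = np.linspace(-20, 20, 201)                     # positions in mm
measured = np.exp(-x**2 / 120.0)                  # toy measured profile
calculated = np.exp(-(x - 0.5)**2 / 118.0)        # slightly shifted calculation
g = gamma_1d(x, measured, x, calculated)
print(f"pass rate: {100 * np.mean(g <= 1.0):.1f}%")
```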
Breast density quantification with cone-beam CT: A post-mortem study
Johnson, Travis; Ding, Huanjun; Le, Huy Q.; Ducote, Justin L.; Molloi, Sabee
2014-01-01
Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The percent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification, with high linear correlation coefficients between the right and left breasts of each pair. When compared with the gold standard %FGV from chemical analysis, Pearson’s r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate (SEE) was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the post-mortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. Of the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. PMID:24254317
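To make the clustering step concrete, the following is a minimal two-cluster fuzzy c-means sketch on 1D voxel intensities, with a soft %FGV estimate taken as the mean membership of the higher-attenuation cluster. The intensity distributions and the soft-membership definition of %FGV are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def fcm_two_class(values, m=2.0, n_iter=100, tol=1e-5, seed=0):
        """Two-cluster fuzzy c-means on a 1D array of voxel intensities."""
        rng = np.random.default_rng(seed)
        u = rng.random((2, values.size))
        u /= u.sum(axis=0)                      # memberships sum to 1 per voxel
        for _ in range(n_iter):
            um = u ** m
            centers = um @ values / um.sum(axis=1)          # weighted centroids
            d = np.abs(values[None, :] - centers[:, None]) + 1e-12
            u_new = d ** (-2.0 / (m - 1.0))
            u_new /= u_new.sum(axis=0)                      # standard FCM update
            if np.max(np.abs(u_new - u)) < tol:
                u = u_new
                break
            u = u_new
        return centers, u

    # breast_hu: CT numbers of voxels inside the breast (synthetic stand-in).
    breast_hu = np.concatenate([np.random.normal(-100, 20, 7000),   # adipose-like
                                np.random.normal(40, 20, 3000)])    # fibroglandular-like
    centers, u = fcm_two_class(breast_hu)
    fg = int(np.argmax(centers))        # fibroglandular = higher-attenuation cluster
    pct_fgv = 100.0 * u[fg].mean()      # soft estimate of %FGV
    print(f"%FGV ~ {pct_fgv:.1f}")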
Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.
Lee, Sing Chun; Fuerst, Bernhard; Fotouhi, Javad; Fischer, Marius; Osgood, Greg; Navab, Nassir
2016-06-01
This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient's surface without the need to move the C-arm. An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
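A rough outline of such a descriptor-plus-ICP alignment can be sketched with Open3D. Note that module paths and the RANSAC call signature vary between Open3D releases, and the parameters below (voxel size, radii, thresholds) are placeholders rather than the paper's values.

    import open3d as o3d

    def register_surface_to_cbct(source_pcd, target_pcd, voxel=3.0):
        """Marker-less alignment of an RGBD surface cloud to a CBCT surface
        cloud: FPFH descriptors for a global RANSAC fit, then ICP refinement."""
        def preprocess(pcd):
            down = pcd.voxel_down_sample(voxel)
            down.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
            fpfh = o3d.pipelines.registration.compute_fpfh_feature(
                down,
                o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
            return down, fpfh

        src, src_fpfh = preprocess(source_pcd)
        tgt, tgt_fpfh = preprocess(target_pcd)
        ransac = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            src, tgt, src_fpfh, tgt_fpfh, True, 1.5 * voxel,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
            3, [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
        icp = o3d.pipelines.registration.registration_icp(
            source_pcd, target_pcd, voxel, ransac.transformation,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return icp.transformation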
GPU-based cone beam computed tomography.
Noël, Peter B; Walczak, Alan M; Xu, Jinhui; Corso, Jason J; Hoffmann, Kenneth R; Schafer, Sebastian
2010-06-01
The use of cone beam computed tomography (CBCT) is growing in the clinical arena due to its ability to provide 3D information during interventions, its high diagnostic quality (sub-millimeter resolution), and its short scanning times (60 s). In many situations, the short scanning time of CBCT is followed by a time-consuming 3D reconstruction. The standard reconstruction algorithm for CBCT data is filtered backprojection, which for a volume of size 256³ takes up to 25 min on a standard system. Recent developments in the area of Graphics Processing Units (GPUs) make it possible to access high-performance computing solutions at low cost, allowing their use in many scientific problems. We have implemented an algorithm for 3D reconstruction of CBCT data using the Compute Unified Device Architecture (CUDA) provided by NVIDIA (NVIDIA Corporation, Santa Clara, California), which was executed on an NVIDIA GeForce GTX 280. Our implementation reduces reconstruction times from minutes, and perhaps hours, to a matter of seconds, while also giving the clinician the ability to view 3D volumetric data at higher resolutions. We evaluated our implementation on ten clinical data sets and one phantom data set to observe whether differences occur between CPU- and GPU-based reconstructions. Using our approach, the computation time for a 256³ volume is reduced from 25 min on the CPU to 3.2 s on the GPU. The GPU reconstruction time for 512³ volumes is 8.5 s. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Marchant, T. E.; Joshi, K. D.; Moore, C. J.
2018-03-01
Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT and direct correction of CBCT image values are two methods proposed to allow heterogeneity-corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used, including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT, and several relevant dose metrics for target and OAR volumes were calculated. The accuracy of CBCT-based dose metrics was determined using an ‘override ratio’ method, where the ratio of the dose metric to that calculated on a bulk-density-assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading-corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to a slightly larger standard deviation of dose metric error than shading-corrected CBCT, with more dose metric errors greater than 2% observed (7% versus 1%).
NASA Astrophysics Data System (ADS)
Meng, Bowen; Xing, Lei; Han, Bin; Koong, Albert; Chang, Daniel; Cheng, Jason; Li, Ruijiang
2013-11-01
Non-coplanar beams are important for treatment of both cranial and noncranial tumors. Treatment verification of such beams with couch rotations/kicks, however, is challenging, particularly for the application of cone beam CT (CBCT). In this situation, only limited and unconventional imaging angles are feasible to avoid collision between the gantry, couch, patient, and on-board imaging system. The purpose of this work is to develop a CBCT verification strategy for patients undergoing non-coplanar radiation therapy. We propose an image reconstruction scheme that integrates a prior image constrained compressed sensing (PICCS) technique with image registration. The planning CT or a CBCT acquired at the neutral position is rotated and translated according to the nominal couch rotation/translation to serve as the initial prior image. Here, the nominal couch movement is chosen to have a rotational error of 5° and a translational error of 8 mm from the ground truth in one or more axes or directions. The proposed reconstruction scheme alternates between two major steps. First, an image is reconstructed using the PICCS technique implemented with total-variation minimization and simultaneous algebraic reconstruction. Second, the rotational/translational setup errors are corrected and the prior image is updated by applying rigid image registration between the reconstructed image and the previous prior image. The PICCS algorithm and rigid image registration are alternated iteratively until the registration results fall below a predetermined threshold. The proposed reconstruction algorithm is evaluated with an anthropomorphic digital phantom and a physical head phantom. The proposed algorithm provides useful volumetric images for patient setup using projections with an angular range as small as 60°. It reduced the translational setup errors from 8 mm to generally <1 mm and the rotational setup errors from 5° to <1°. Compared with the PICCS algorithm alone, the integration of rigid registration significantly improved the reconstructed image quality, with a reduction of roughly two- to three-fold (up to 100-fold) in root-mean-square image error. The proposed algorithm provides a remedy for the problem of non-coplanar CBCT reconstruction from a limited angular range of projections by combining the PICCS technique and rigid image registration in an iterative framework. In this proof-of-concept study, non-coplanar beams with couch rotations of 45° can be effectively verified with the CBCT technique.
Kinematics of Cone-In-Cone Growth, with Implications for Timing and Formation Mechanism
NASA Astrophysics Data System (ADS)
Hooker, J. N.; Cartwright, J. A.
2015-12-01
Cone-in-cone is an enigmatic structure. Similar to many fibrous calcite veins, cone-in-cone is generally formed of calcite and present in bedding-parallel vein-like accumulations within fine-grained rocks. Unlike most fibrous veins, cone-in-cone contains conical inclusions of host-rock material, creating nested, parallel cones throughout. A long-debated aspect of cone-in-cone structures is whether the calcite precipitated with its conical form (primary cone-in-cone), or whether the cones formed afterwards (secondary cone-in-cone). Trace dolomite within a calcite cone-in-cone structure from the Cretaceous of Jordan supports the primary hypothesis. The host sediment is a siliceous mud containing abundant rhombohedral dolomite grains. Dolomite rhombohedra are also distributed throughout the cone-in-cone. The rhombohedra within the cones are randomly oriented yet locally have dolomite overgrowths having boundaries that are aligned with calcite fibers. Evidence that dolomite co-precipitated with calcite, and did not replace calcite, includes (i) preferential downward extension of dolomite overgrowths, in the presumed growth-direction of the cone-in-cone, and (ii) planar, vertical borders between dolomite crystals and calcite fibers. Because dolomite overgrows host-sediment rhombohedra and forms fibers within the cones, it follows that the host-sediment was included within the growing cone-in-cone as the calcite precipitated, and not afterward. The host-sediment was not injected into the cone-in-cone along fractures, as the secondary-origin hypothesis suggests. This finding implies that cone-in-cone in general does not form over multiple stages, and thus has greater potential to preserve the chemical signature of its original precipitation. Because cone-in-cone likely forms before complete lithification of the host, and because the calcite displaces the host material against gravity, this chemical signature can preserve information about early overpressures in fine-grained sediments.
Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.
2014-01-01
Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage. PMID:24694143
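The projection angular spacing statistics used in the correlation analysis above are straightforward to compute. The sketch below takes the RMSE to be the root-mean-square of the gaps between sorted angles within a bin, which is one plausible reading of the paper's definition (an assumption), and correlates it with per-bin SNR using dummy data.

    import numpy as np

    def angular_gap_stats(angles_deg):
        """Max, mean, and RMS of the gaps between sorted projection angles in
        one respiratory bin (the wrap-around gap is ignored for simplicity)."""
        gaps = np.diff(np.sort(np.asarray(angles_deg, dtype=float)))
        return gaps.max(), gaps.mean(), np.sqrt(np.mean(gaps ** 2))

    # snr and the per-bin angle lists would come from real reconstructions;
    # dummy values are used here purely to show the correlation step.
    snr = np.array([14.2, 12.8, 16.1, 11.0])
    rmse = np.array([angular_gap_stats(np.random.uniform(0, 360, 120))[2]
                     for _ in range(4)])
    r = np.corrcoef(snr, rmse)[0, 1]     # the paper reports r of about -0.7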
Fully 3D refraction correction dosimetry system.
Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan
2016-02-21
The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the ray-line path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any ray-line refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refraction-corrected ART (ART-rc) algorithm. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as the RI-matched medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller diameter dosimeters are scanned with dry air scanning by using a wide-angle lens that collects refracted light. The images reconstructed using cone beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array based scanners, as it is not possible to identify refracted rays in the sinogram space.
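For orientation, a generic ART sweep looks as follows; in the refraction-corrected variant the per-ray voxel paths would be bent according to the RI model rather than straight, which is abstracted here as a precomputed (indices, weights) list per ray. This is a schematic sketch, not the authors' code.

    import numpy as np

    def art_sweep(x, rays, measurements, relax=0.1):
        """One ART sweep over all rays. x is the flattened dose/attenuation
        volume; each ray is (idx, w), the voxel indices and path weights it
        traverses (bent paths in the refraction-corrected case, a hypothetical
        input here); measurements holds the corresponding detector readings."""
        for (idx, w), p in zip(rays, measurements):
            est = np.dot(w, x[idx])                      # forward project one ray
            x[idx] += relax * (p - est) / np.dot(w, w) * w   # Kaczmarz update
        return x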
MetalS³, a database-mining tool for the identification of structurally similar metal sites.
Valasatava, Yana; Rosato, Antonio; Cavallaro, Gabriele; Andreini, Claudia
2014-08-01
We have developed a database search tool to identify metal sites having structural similarity to a query metal site structure within the MetalPDB database of minimal functional sites (MFSs) contained in metal-binding biological macromolecules. MFSs describe the local environment around the metal(s) independently of the larger context of the macromolecular structure. Such a local environment has a determinant role in tuning the chemical reactivity of the metal, ultimately contributing to the functional properties of the whole system. The database search tool, which we called MetalS³ (Metal Sites Similarity Search), can be accessed through a Web interface at http://metalweb.cerm.unifi.it/tools/metals3/. MetalS³ uses a suitably adapted version of an algorithm that we previously developed to systematically compare the structure of the query metal site with each MFS in MetalPDB. For each MFS, the best superposition is kept. All these superpositions are then ranked according to the MetalS³ scoring function and are presented to the user in tabular form. The user can interact with the output Web page to visualize the structural alignment or the sequence alignment derived from it. Options to filter the results are available. Test calculations show that the MetalS³ output correlates well with expectations from protein homology considerations. Furthermore, we describe some usage scenarios that highlight the usefulness of MetalS³ in obtaining mechanistic and functional hints regardless of homology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Góźdź, A., E-mail: andrzej.gozdz@umcs.lublin.pl; Góźdź, M., E-mail: mgozdz@kft.umcs.lublin.pl
The theory of neutrino oscillations rests on the assumption that the interaction basis and the physical (mass) basis of neutrino states are different. A neutrino is therefore produced in a certain well-defined superposition of three mass eigenstates, which propagate separately and may be detected as a different superposition. This is called flavor oscillation. It is, however, not clear why neutrinos behave this way, i.e., what underlying mechanism leads to the production of a superposition of physical states in a single reaction. In this paper we argue that one of the reasons may be connected with the temporal structure of the process. In order to discuss the role of time in processes on the quantum level, we use a special formulation of quantum mechanics based on projection time evolution. We arrive at the conclusion that for short reaction times the formation of a superposition of states of similar masses is natural.
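For reference, the superposition in question is conventionally written with the PMNS mixing matrix U, giving the standard plane-wave oscillation probability (textbook forms, not specific to this paper):

    \[
      \lvert \nu_\alpha \rangle = \sum_{i=1}^{3} U^{*}_{\alpha i}\,\lvert \nu_i \rangle ,
      \qquad
      P_{\nu_\alpha \to \nu_\beta}(L,E) =
      \Bigl\lvert \sum_{i} U^{*}_{\alpha i}\, U_{\beta i}\,
      e^{-i m_i^{2} L/(2E)} \Bigr\rvert^{2} .
    \]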
Transient Response of Shells of Revolution by Direct Integration and Modal Superposition Methods
NASA Technical Reports Server (NTRS)
Stephens, W. B.; Adelman, H. M.
1974-01-01
The results of an analytical effort to obtain and evaluate transient response data for a cylindrical and a conical shell by use of two different approaches, direct integration and modal superposition, are described. The inclusion of nonlinear terms is more important than the inclusion of secondary linear effects (transverse shear deformation and rotary inertia), although there are thin-shell structures where these secondary effects are important. The advantages of the direct integration approach are that geometric nonlinear and secondary effects are easy to include and that high-frequency response may be calculated; in comparison to the modal superposition technique, the computer storage requirements are smaller. The advantages of the modal superposition approach are that the solution is independent of the previous time history and that, once the modal data are obtained, the response for repeated cases may be efficiently computed. Also, any admissible set of initial conditions can be applied.
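For a linear structure governed by \( M\ddot{u} + C\dot{u} + Ku = f(t) \) with classical damping, the modal superposition referred to above takes the familiar form (a standard textbook statement, included here for orientation):

    \[
      u(t) = \sum_{i=1}^{n} \phi_i\, q_i(t), \qquad
      \ddot{q}_i + 2\zeta_i \omega_i \dot{q}_i + \omega_i^{2} q_i
      = \frac{\phi_i^{\mathsf T} f(t)}{\phi_i^{\mathsf T} M \phi_i},
    \]

each decoupled modal equation being integrated independently, which is why the response for repeated load cases is cheap once the modes \( \phi_i \), frequencies \( \omega_i \), and damping ratios \( \zeta_i \) are in hand.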
Splash-cup plants accelerate raindrops to disperse seeds
Amador, Guillermo J.; Yamada, Yasukuni; McCurley, Matthew; Hu, David L.
2013-01-01
The conical flowers of splash-cup plants Chrysosplenium and Mazus catch raindrops opportunistically, exploiting the subsequent splash to disperse their seeds. In this combined experimental and theoretical study, we elucidate their mechanism for maximizing dispersal distance. We fabricate conical plant mimics using three-dimensional printing, and use high-speed video to visualize splash profiles and seed travel distance. Drop impacts that strike the cup off-centre achieve the largest dispersal distances of up to 1 m. Such distances are achieved because splash speeds are three to five times faster than incoming drop speeds, and so faster than the traditionally studied splashes occurring upon horizontal surfaces. This anomalous splash speed is because of the superposition of two components of momentum, one associated with a component of the drop's motion parallel to the splash-cup surface, and the other associated with film spreading induced by impact with the splash-cup. Our model incorporating these effects predicts the observed dispersal distance within 6–18% error. According to our experiments, the optimal cone angle for the splash-cup is 40°, a value consistent with the average of five species of splash-cup plants. This optimal angle arises from the competing effects of velocity amplification and projectile launching angle. PMID:23235266
Magnetic antenna excitation of whistler modes. III. Group and phase velocities of wave packets
NASA Astrophysics Data System (ADS)
Urrutia, J. M.; Stenzel, R. L.
2015-07-01
The properties of whistler modes excited by single and multiple magnetic loop antennas have been investigated in a large laboratory plasma. A single loop excites a wavepacket, but an array of loops across the ambient magnetic field B0 excites approximate plane whistler modes. The single loop data are measured. The array patterns are obtained by linear superposition of experimental data shifted in space and time, which is valid in a uniform plasma and magnetic field for small amplitude waves. Phasing the array changes the angle of wave propagation. The antennas are excited by an rf tone burst whose propagating envelope and oscillations yield group and phase velocities. A single loop antenna with dipole moment across B0 excites wave packets whose topology resembles m = 1 helicon modes, but without radial boundaries. The phase surfaces are conical with propagation characteristics of Gendrin modes. The cones form near the antenna with comparable parallel and perpendicular phase velocities. A physical model for the wave excitation is given. When a wave burst is applied to a phased antenna array, the wave front propagates both along the array and into the plasma forming a "whistler wing" at the front. These laboratory observations may be relevant for excitation and detection of whistler modes in space plasmas.
Focazio, M.J.; Speiran, G.K.
1993-01-01
The groundwater-flow system of the Virginia Coastal Plain consists of areally extensive and interconnected aquifers. Large, regionally coalescing cones of depression that are caused by large withdrawals of water are found in these aquifers. Local groundwater systems are affected by regional pumping, because of the interactions within the system of aquifers. Accordingly, these local systems are affected by regional groundwater flow and by spatial and temporal differences in withdrawals by various users. A geographic information system was used to refine a regional groundwater-flow model around selected withdrawal centers. A method was developed in which drawdown maps that were simulated by the regional groundwater-flow model and the principle of superposition could be used to estimate drawdown at local sites. The method was applied to create drawdown maps in the Brightseat/Upper Potomac Aquifer for periods of 3, 6, 9, and 12 months for Chesapeake, Newport News, Norfolk, Portsmouth, Suffolk, and Virginia Beach, Virginia. Withdrawal rates were supplied by the individual localities and remained constant for each simulation period. This provides an efficient method by which the individual local groundwater users can determine the amount of drawdown produced by their wells in a groundwater system that is a water source for multiple users and that is affected by regional-flow systems.
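The principle of superposition invoked here is simply that drawdowns from individual wells add linearly in a confined aquifer. As a minimal sketch (using the analytic Theis solution rather than the study's calibrated regional model, with hypothetical transmissivity and storage values):

    import numpy as np
    from scipy.special import exp1   # Theis well function W(u) = E1(u)

    def total_drawdown(points, wells, T, S, t):
        """Superpose Theis drawdowns at (x, y) points from pumping wells,
        wells = [(xw, yw, Q), ...]; consistent units are assumed throughout."""
        points = np.atleast_2d(np.asarray(points, dtype=float))
        s = np.zeros(len(points))
        for xw, yw, Q in wells:
            r2 = (points[:, 0] - xw) ** 2 + (points[:, 1] - yw) ** 2
            u = r2 * S / (4.0 * T * t)
            s += Q / (4.0 * np.pi * T) * exp1(u)   # each well adds linearly
        return s

    # Two hypothetical wells pumping at different rates, drawdown after 90 days.
    s = total_drawdown([[500.0, 0.0]], [(0, 0, 1500.0), (800, 200, 900.0)],
                       T=250.0, S=1e-4, t=90.0)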
QUANTIFYING OBSERVATIONAL PROJECTION EFFECTS USING MOLECULAR CLOUD SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaumont, Christopher N.; Offner, Stella S.R.; Shetty, Rahul
2013-11-10
The physical properties of molecular clouds are often measured using spectral-line observations, which provide the only probes of the clouds' velocity structure. It is hard, though, to assess whether and to what extent intensity features in position-position-velocity (PPV) space correspond to 'real' density structures in position-position-position (PPP) space. In this paper, we create synthetic molecular cloud spectral-line maps of simulated molecular clouds, and present a new technique for measuring the reality of individual PPV structures. Using a dendrogram algorithm, we identify hierarchical structures in both PPP and PPV space. Our procedure projects density structures identified in PPP space into corresponding intensity structures in PPV space and then measures the geometric overlap of the projected structures with structures identified from the synthetic observation. The fractional overlap between a PPP and PPV structure quantifies how well the synthetic observation recovers information about the three-dimensional structure. Applying this machinery to a set of synthetic observations of CO isotopes, we measure how well spectral-line measurements recover mass, size, velocity dispersion, and virial parameter for a simulated star-forming region. By disabling various steps of our analysis, we investigate how much opacity, chemistry, and gravity affect measurements of physical properties extracted from PPV cubes. For the simulations used here, which offer a decent, but not perfect, match to the properties of a star-forming region like Perseus, our results suggest that superposition induces a ∼40% uncertainty in masses, sizes, and velocity dispersions derived from ¹³CO (J = 1-0). As would be expected, superposition and confusion are worst in regions where the filling factor of emitting material is large. The virial parameter is most affected by superposition, such that estimates of the virial parameter derived from PPV and PPP information typically disagree by a factor of ∼2. This uncertainty makes it particularly difficult to judge whether gravitational or kinetic energy dominates a given region, since the majority of virial parameter measurements fall within a factor of two of the equipartition level α ∼ 2.
gWEGA: GPU-accelerated WEGA for molecular superposition and shape comparison.
Yan, Xin; Li, Jiabo; Gu, Qiong; Xu, Jun
2014-06-05
Virtual screening of a large chemical library for drug lead identification requires searching/superimposing a large number of three-dimensional (3D) chemical structures. This article reports a graphic processing unit (GPU)-accelerated weighted Gaussian algorithm (gWEGA) that expedites shape or shape-feature similarity score-based virtual screening. With 86 GPU nodes (each node has one GPU card), gWEGA can screen 110 million conformations derived from an entire ZINC drug-like database with diverse antidiabetic agents as query structures within 2 s (i.e., screening more than 55 million conformations per second). The rapid screening speed was accomplished through the massive parallelization on multiple GPU nodes and rapid prescreening of 3D structures (based on their shape descriptors and pharmacophore feature compositions). Copyright © 2014 Wiley Periodicals, Inc.
A non-orthogonal decomposition of flows into discrete events
NASA Astrophysics Data System (ADS)
Boxx, Isaac; Lewalle, Jacques
1998-11-01
This work is based on the formula for the inverse Hermitian wavelet transform. A signal can be interpreted as a (non-unique) superposition of near-singular, partially overlapping events arising from Dirac functions and/or their derivatives combined with diffusion. (No dynamics is implied: the dimensionless diffusion is related to the definition of the analyzing wavelets.) These events correspond to local maxima of spectral energy density. We successfully fitted model events of various orders on a succession of fields, ranging from elementary signals to one-dimensional hot-wire traces. We document edge effects, event overlap, and their implications for the algorithm. The interpretation of the discrete singularities as flow events (such as coherent structures) and the fundamental non-uniqueness of the decomposition are discussed. The dynamics of these events will be examined in the companion paper.
Retinex at 50: color theory and spatial algorithms, a review
NASA Astrophysics Data System (ADS)
McCann, John J.
2017-05-01
Retinex Imaging shares two distinct elements: first, a model of human color vision; second, a spatial-imaging algorithm for making better reproductions. Edwin Land's 1964 Retinex Color Theory began as a model of human color vision of real complex scenes. He designed many experiments, such as the Color Mondrians, to understand why retinal cone quanta catch fails to predict color constancy. Land's Retinex model used three spatial channels (L, M, S) that calculated three independent sets of monochromatic lightnesses. Land and McCann's lightness model used spatial comparisons followed by spatial integration across the scene. The parameters of their model were derived from extensive observer data. This work was the beginning of the second Retinex element, namely, using models of spatial vision to guide image reproduction algorithms. Today, there are many different Retinex algorithms. This special section, "Retinex at 50," describes a wide variety of them, along with their different goals, and the ground truths used to measure their success. This paper reviews (and provides links to) the original Retinex experiments and image-processing implementations. Observer matches (measuring appearances) have extended our understanding of how human spatial vision works. This paper also describes a collection of very challenging datasets, accumulated by Land and McCann, for testing algorithms that predict appearance.
NASA Astrophysics Data System (ADS)
Masunun, P.; Tangboonduangjit, P.; Dumrongkijudom, N.
2016-03-01
The purpose of this study is to compare the doses in the build-up region at the surface of a breast Rando phantom covered with bolus, the doses within the breast phantom, and the doses in the lung, a heterogeneous region, as calculated by two algorithms. The AAA in the Eclipse TPS and the collapsed cone convolution (CCC) algorithm in the Pinnacle treatment planning system were used to plan a tangential-field technique with a 6 MV photon beam and a 200 cGy total dose in the breast Rando phantom covered with bolus (5 mm and 10 mm). TLDs were calibrated with Cobalt-60 and used to measure the doses during irradiation. The treatment planning results show that the doses in the build-up region and in the breast phantom agreed closely between the two algorithms, with differences of less than 2%. However, AAA overestimated the dose in the lung (L2) by 13.78% and 6.06% at 5 mm and 10 mm bolus thickness, respectively, when compared with the CCC algorithm. The TLD measurements show an underestimate in the build-up region and in the breast phantom, but the doses in the lung (L2) were overestimated when compared with the doses in the two plans at both bolus thicknesses.
Analytic reconstruction algorithms for triple-source CT with horizontal data truncation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ming; Yu, Hengyong, E-mail: hengyong-yu@ieee.org
2015-10-15
Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB: the basic platform is constructed in MATLAB, while the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.
GPU-based Branchless Distance-Driven Projection and Backprojection
Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong
2017-01-01
Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to implement on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation into three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved a 137-fold speedup for forward projection and a 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated in iterative reconstruction algorithms with both simulated and real datasets; it produced images visually identical to those of the CPU reference algorithm. PMID:29333480
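The three-step factorization is easy to state in one dimension: accumulate the signal, interpolate the accumulated values at the destination cell boundaries, then difference and normalize. Below is a minimal NumPy sketch (1D only, with increasing boundary arrays assumed); the paper's GPU kernels implement the same idea with texture-hardware interpolation.

    import numpy as np

    def branchless_dd_1d(values, src_bounds, dst_bounds):
        """Resample per-cell values from source bins onto destination bins.
        values has len(src_bounds) - 1 entries; both boundary arrays must be
        monotonically increasing."""
        # Step 1: cumulative integral evaluated at the source boundaries.
        integral = np.concatenate(([0.0], np.cumsum(values * np.diff(src_bounds))))
        # Step 2: linear interpolation at the destination boundaries.
        at_dst = np.interp(dst_bounds, src_bounds, integral)
        # Step 3: differentiation, normalized by destination bin width.
        return np.diff(at_dst) / np.diff(dst_bounds)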
GPU-based Branchless Distance-Driven Projection and Backprojection.
Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong
2017-12-01
Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to implement on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation into three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved a 137-fold speedup for forward projection and a 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated in iterative reconstruction algorithms with both simulated and real datasets; it produced images visually identical to those of the CPU reference algorithm.
Analytic reconstruction algorithms for triple-source CT with horizontal data truncation.
Chen, Ming; Yu, Hengyong
2015-10-01
This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. The study is conducted using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB: the basic platform is constructed in MATLAB, while the computationally intensive segments are coded in C++ and linked via a MEX interface. A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.
Jinno, Shunta; Tachibana, Hidenobu; Moriya, Shunsuke; Mizuno, Norifumi; Takahashi, Ryo; Kamima, Tatsuya; Ishibashi, Satoru; Sato, Masanori
2018-05-21
In inhomogeneous media, there is often a large systematic difference in the dose between the conventional Clarkson algorithm (C-Clarkson) for independent calculation verification and the superposition-based algorithms of treatment planning systems (TPSs). These treatment site-dependent differences increase the complexity of the radiotherapy planning secondary check. We developed a simple and effective method of heterogeneity correction integrated with the Clarkson algorithm (L-Clarkson) to account for the effects of heterogeneity in the lateral dimension, and performed a multi-institutional study to evaluate the effectiveness of the method. In the method, a 2D image reconstructed from computed tomography (CT) images is divided according to lines extending from the reference point to the edge of the multileaf collimator (MLC) or jaw collimator for each pie sector, and the radiological path length (RPL) of each line is calculated on the 2D image to obtain a tissue maximum ratio and phantom scatter factor, allowing the dose to be calculated. A total of 261 plans (1237 beams) for conventional breast and lung treatments and lung stereotactic body radiotherapy were collected from four institutions. Disagreements in dose between the on-site TPSs and a verification program using the C-Clarkson and L-Clarkson algorithms were compared. Systematic differences with the L-Clarkson method were within 1% for all sites, while the C-Clarkson method resulted in systematic differences of 1-5%. The L-Clarkson method showed smaller variations. This heterogeneity correction integrated with the Clarkson algorithm would provide a simple evaluation within the range of -5% to +5% for a radiotherapy plan secondary check.
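The per-sector lateral correction hinges on a radiological path length computed on the 2D reconstructed image. A sketch of one such RPL evaluation along a straight sample line follows; the sampling density and nearest-neighbour lookup are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def radiological_path_length(density2d, p0, p1, spacing_mm, n_samples=256):
        """Sum relative electron density along the line p0 -> p1 (pixel
        coordinates) on a 2D density image, returning an RPL in mm."""
        t = np.linspace(0.0, 1.0, n_samples)
        rows = np.clip(np.round(p0[0] + t * (p1[0] - p0[0])).astype(int),
                       0, density2d.shape[0] - 1)
        cols = np.clip(np.round(p0[1] + t * (p1[1] - p0[1])).astype(int),
                       0, density2d.shape[1] - 1)
        # Physical step length between successive samples, in mm.
        step = np.hypot(p1[0] - p0[0], p1[1] - p0[1]) * spacing_mm / (n_samples - 1)
        return float(density2d[rows, cols].sum() * step)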
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kieselmann, J; Bartzsch, S; Oelfke, U
Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and the therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel-based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed within less than 4% deviation, the proposed approach surpassed the simple convolution algorithm in accuracy by a factor of up to 3 in the scatter dose. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve on the CA method with respect to accuracy, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant reductions in calculation time.
SU-E-T-17: A Mathematical Model for PinPoint Chamber Correction in Measuring Small Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T; Zhang, Y; Li, X
2014-06-01
Purpose: For small field dosimetry, such as measuring the cone output factor for stereotactic radiosurgery, ion chambers often underestimate the dose, due to both the volume averaging effect and the lack of electron equilibrium. The purpose of this work is to develop a mathematical model, specifically for the PinPoint chamber, to calculate the correction factors corresponding to different types of small fields, including single cone-based circular fields and non-standard composite fields. Methods: A PTW 0.015 cc PinPoint chamber was used in the study. Its response in a given field was modeled as the total contribution of many small beamlets, each with a different response factor depending on the relative strength, radial distance to the chamber axis, and beam angle. To obtain these factors, 12 cone-shaped circular fields (5 mm, 7.5 mm, 10 mm, 12.5 mm, 15 mm, 20 mm, 25 mm, 30 mm, 35 mm, 40 mm, 50 mm, 60 mm) were irradiated and measured with the PinPoint chamber. For each field size, hundreds of readings were recorded, one for every 2 mm chamber shift in the horizontal plane. These readings were then compared with the theoretical doses obtained with Monte Carlo calculation. A penalized least-squares optimization algorithm was developed to find the beamlet response factors. After the parameter fitting, the established mathematical model was validated with the same MC code for other non-circular fields. Results: The optimization algorithm used for parameter fitting was stable and the resulting response factors were smooth in the spatial domain. After correction with the mathematical model, the chamber reading matched the Monte Carlo calculation for all tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurement of small fields. The current model is applicable only when the beam axis is perpendicular to the chamber axis. It can be applied to non-standard composite fields. Further validation with other types of detectors is being conducted.
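The fitting step can be pictured as a penalized least-squares problem: a design matrix maps candidate beamlet response factors to predicted chamber readings, and a smoothness penalty regularizes the solution. The second-difference penalty below is an assumption standing in for the paper's (unspecified) penalty term.

    import numpy as np

    def fit_beamlet_response(A, d, lam=1e-2):
        """Solve min_w ||A w - d||^2 + lam * ||D w||^2, where A (readings x
        factors) encodes field sizes and chamber offsets, d holds the measured
        readings, and D is a second-difference smoothing operator."""
        n = A.shape[1]
        D = np.diff(np.eye(n), n=2, axis=0)       # second-difference matrix
        return np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ d)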
NASA Astrophysics Data System (ADS)
O'Brien, Ricky T.; Cooper, Benjamin J.; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J.
2014-02-01
Four dimensional cone beam computed tomography (4DCBCT) images suffer from angular under-sampling and bunching of projections due to a lack of feedback between the respiratory signal and the acquisition system. To address this problem, respiratory motion guided 4DCBCT (RMG-4DCBCT) regulates the gantry velocity and projection time interval, in response to the patient’s respiratory signal, with the aim of acquiring evenly spaced projections in a number of phase or displacement bins during the respiratory cycle. Our previous study of RMG-4DCBCT was limited to sinusoidal breathing traces. Here we expand on that work to provide a practical algorithm for the case of real patient breathing data. We give a complete description of RMG-4DCBCT, including full details on how to implement the algorithms to determine when to move the gantry and when to acquire projections in response to the patient’s respiratory signal. We simulate a realistic working RMG-4DCBCT system using 112 breathing traces from 24 lung cancer patients. Acquisition used phase-based binning and parameter settings typically used on commercial 4DCBCT systems (4 min acquisition time, 1200 projections across 10 respiratory bins), with the acceleration and velocity constraints of current generation linear accelerators. We quantified streaking artefacts and image noise for conventional and RMG-4DCBCT methods by reconstructing projection data selected from an oversampled set of Catphan phantom projections. RMG-4DCBCT allows us to optimally trade off image quality, acquisition time and imaging dose. For example, for the same image quality and acquisition time as conventional 4DCBCT, approximately half the imaging dose is needed. Alternatively, for the same imaging dose, the image quality, as measured by the signal-to-noise ratio, is improved by 63% on average. C-arm cone beam computed tomography systems, with an acceleration up to 200°/s², a velocity up to 100°/s and the acquisition of 80 projections per second, allow the image acquisition time to be reduced to below 60 s. We have made considerable progress towards realizing a system to reduce projection clustering in conventional 4DCBCT imaging and hence reduce the imaging dose to the patient.
Soft-Collinear Mode for Jet Rates in Soft-Collinear Effective Theory
Chien, Yang-Ting; Lee, Christopher; Hornig, Andrew
2016-01-29
We propose the addition of a new "soft-collinear" mode to soft collinear effective theory (SCET) below the usual soft scale to factorize and resum logarithms of jet radii R in jet cross sections. We consider exclusive 2-jet cross sections in e⁺e⁻ collisions with an energy veto Λ on additional jets. The key observation is that there are actually two pairs of energy scales whose ratio is R: the transverse momentum QR of the energetic particles inside jets and their total energy Q, and the transverse momentum ΛR of soft particles that are cut out of the jet cones and their energy Λ. The soft-collinear mode is necessary to factorize and resum logarithms of the latter hierarchy. We show how this factorization occurs in the jet thrust cross section for cone and k_T-type algorithms at O(α_s) and using the thrust cone algorithm at O(α_s²). We identify the presence of hard-collinear, in-jet soft, global (veto) soft, and soft-collinear modes in the jet thrust cross section. We also observe here that the in-jet soft modes measured with thrust are actually the "csoft" modes of the theory SCET+. We dub the new theory with both csoft and soft-collinear modes "SCET++". We go on to explain the relation between the "unmeasured" jet function appearing in total exclusive jet cross sections and the hard-collinear and csoft functions in measured jet thrust cross sections. We do not resum logs that are non-global in origin, arising from the ratio of the scales of soft radiation whose thrust is measured at Qτ/R and of the soft-collinear radiation at 2ΛR. Their resummation would require the introduction of additional operators beyond those we consider here. The steps we outline here are a necessary part of summing logs of R that are global in nature and have not been factorized and resummed beyond leading-log level previously.
Soft-Collinear Mode for Jet Rates in Soft-Collinear Effective Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chien, Yang-Ting; Lee, Christopher; Hornig, Andrew
We propose the addition of a new "soft-collinear" mode to soft collinear effective theory (SCET) below the usual soft scale to factorize and resum logarithms of jet radii R in jet cross sections. We consider exclusive 2-jet cross sections in e⁺e⁻ collisions with an energy veto Λ on additional jets. The key observation is that there are actually two pairs of energy scales whose ratio is R: the transverse momentum QR of the energetic particles inside jets and their total energy Q, and the transverse momentum ΛR of soft particles that are cut out of the jet cones and their energy Λ. The soft-collinear mode is necessary to factorize and resum logarithms of the latter hierarchy. We show how this factorization occurs in the jet thrust cross section for cone and k_T-type algorithms at O(α_s) and using the thrust cone algorithm at O(α_s²). We identify the presence of hard-collinear, in-jet soft, global (veto) soft, and soft-collinear modes in the jet thrust cross section. We also observe here that the in-jet soft modes measured with thrust are actually the "csoft" modes of the theory SCET+. We dub the new theory with both csoft and soft-collinear modes "SCET++". We go on to explain the relation between the "unmeasured" jet function appearing in total exclusive jet cross sections and the hard-collinear and csoft functions in measured jet thrust cross sections. We do not resum logs that are non-global in origin, arising from the ratio of the scales of soft radiation whose thrust is measured at Qτ/R and of the soft-collinear radiation at 2ΛR. Their resummation would require the introduction of additional operators beyond those we consider here. The steps we outline here are a necessary part of summing logs of R that are global in nature and have not been factorized and resummed beyond leading-log level previously.
Diffusion Maps and Geometric Harmonics for Automatic Target Recognition (ATR). Volume 2. Appendices
2007-11-01
of the Perron–Frobenius theorem, it suffices to prove that the chain is irreducible and aperiodic. • The irreducibility is a mere consequence of the ... of each atom; this is due to the linear programming constraint that the coefficients be nonnegative. 4. Chen et al. [20, 21] describe two algorithms for ... projection of x onto the convex cone spanned by Ψ(t) with the origin at the apex; we provide details on computing x̃(t) in Section 4.1.3.
NASA Astrophysics Data System (ADS)
Virgili-Llop, Josep; Zagaris, Costantinos; Park, Hyeongjun; Zappulla, Richard; Romano, Marcello
2018-03-01
An experimental campaign has been conducted to evaluate the performance of two different guidance and control algorithms on a multi-constrained docking maneuver. The evaluated algorithms are model predictive control (MPC) and inverse dynamics in the virtual domain (IDVD). A linear-quadratic formulation with a quadratic programming solver is used for MPC. The IDVD approach yields a nonconvex optimization problem, for which a nonlinear programming solver is used. The docking scenario is constrained by the presence of a keep-out zone, an entry cone, and the chaser's maximum actuation level. The performance metrics for the experiments and numerical simulations include the required control effort and the time to dock. The experiments have been conducted on a ground-based air-bearing test bed, using spacecraft simulators that float over a granite table.
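The convex MPC side of such a comparison can be sketched in a few lines with a QP modeling tool. The toy problem below uses a planar double integrator, a box thrust limit, and a linearized 90° entry cone; the keep-out zone of the paper is nonconvex and is deliberately omitted, and all numbers are placeholders rather than the campaign's parameters.

    import numpy as np
    import cvxpy as cp

    dt, N, umax = 1.0, 30, 0.05
    A = np.block([[np.eye(2), dt * np.eye(2)],
                  [np.zeros((2, 2)), np.eye(2)]])
    B = np.vstack([0.5 * dt ** 2 * np.eye(2), dt * np.eye(2)])

    x = cp.Variable((4, N + 1))                  # [pos_x, pos_y, vel_x, vel_y]
    u = cp.Variable((2, N))                      # thrust acceleration
    x0 = np.array([5.0, 2.0, 0.0, 0.0])          # 5 m out, 2 m off-axis

    cost = cp.sum_squares(x[:, :-1]) + 10.0 * cp.sum_squares(u)
    cons = [x[:, 0] == x0, x[:, N] == 0]         # arrive at the port, at rest
    for k in range(N):
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.norm(u[:, k], "inf") <= umax,      # actuation limit
                 x[0, k] >= cp.abs(x[1, k])]           # 90-degree entry cone
    cp.Problem(cp.Minimize(cost), cons).solve()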
Fusion of an Ensemble of Augmented Image Detectors for Robust Object Detection
Wei, Pan; Anderson, Derek T.
2018-01-01
A significant challenge in object detection is accurate identification of an object’s position in image space; a single algorithm with one set of parameters is usually not enough, and the fusion of multiple algorithms and/or parameter settings can lead to more robust results. Herein, a new computational intelligence fusion approach based on the dynamic analysis of agreement among object detection outputs is proposed. Furthermore, we propose applying image augmentation online rather than only during training. Experiments comparing the results both with and without fusion are presented. We demonstrate that the combination of augmentation and fusion gives the best results, with respect to higher accuracy rates and reduced influence of outliers. The approach is demonstrated in the context of cone, pedestrian and box detection for Advanced Driver Assistance Systems (ADAS) applications. PMID:29562609
[A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].
Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong
2011-10-01
Because the convex cone analysis (CCA) method can extract only a limited number of endmembers from multispectral imagery, this paper proposes a new endmember extraction method based on spatially adaptive spectral feature analysis of multispectral remote sensing images, using spatial clustering and image slicing. Firstly, to remove spatial and spectral redundancies, the principal component analysis (PCA) algorithm was used to reduce the dimensionality of the multispectral data. Secondly, the iterative self-organizing data analysis technique algorithm (ISODATA) was used to cluster the image according to the spectral similarity of pixels. Then, after post-processing the clusters and merging small clusters, the whole image was divided into several blocks (tiles). Lastly, according to the complexity of each image block's landscape and analysis of the scatter diagrams, the number of endmembers can be determined, and endmembers are then extracted using the hourglass algorithm. An endmember extraction experiment on TM multispectral imagery showed that the method can extract endmember spectra from multispectral imagery effectively. Moreover, the method resolves the limitation on the number of endmembers and improves the accuracy of endmember extraction, providing a new way to extract endmembers from multispectral images.
Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu
2016-01-01
Small fields smaller than 4×4 cm² are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation in both small fields and heterogeneous media is often subject to larger discrepancies, the algorithms used by treatment planning systems (TPSs) should be evaluated for achieving better treatment results. This report aims at evaluating the accuracy of four model-based algorithms against measurement: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-XiO, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse. Measurements were done using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons at square field sizes ranging from 1×1 to 4×4 cm². Each heterogeneity was introduced individually at two different depths from the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup was measured separately and compared with the curve calculated by each TPS algorithm for the same setup. The percentage normalized root-mean-squared deviation (%NRMSD), which represents the deviation of the whole CADD curve from the measurement, was calculated. It was found that for air and lung heterogeneities, for both 6 and 15 MV, all algorithms show maximum deviation for the 1×1 cm² field, with the deviation gradually reducing as field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller for 15 MV irrespective of setup. In all heterogeneity setups, the 1×1 cm² field showed the maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity is nearer to Dmax than when the same heterogeneity is far from it. Also, all algorithms show maximum deviation in lower-density materials compared with high-density materials. PACS numbers: 87.53.Bn, 87.53.kn, 87.56.bd, 87.55.Kd, 87.56.jf PMID:26894345
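The curve-level figure of merit is simple to compute once the calculated and measured CADD curves are sampled at the same depths. Normalizing by the measured maximum, as below, is an assumption; the report does not spell out the exact formula.

    import numpy as np

    def pct_nrmsd(calc, meas):
        """Percentage normalized root-mean-squared deviation between a
        calculated and a measured depth-dose curve on a common depth grid."""
        calc = np.asarray(calc, dtype=float)
        meas = np.asarray(meas, dtype=float)
        return 100.0 * np.sqrt(np.mean((calc - meas) ** 2)) / meas.max()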
Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT
Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster
2016-01-01
Application of model-based iterative reconstruction (MBIR) to high-resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support of the acquired projections be reconstructed, precluding acceleration by restricting the reconstruction to a region of interest. To reduce the computational burden of high-resolution MBIR, we propose a multiresolution penalized weighted least-squares (PWLS) algorithm in which the volume is parameterized as a union of fine and coarse voxel grids, combined with selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundary between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized with 0.15 mm voxels, and the voxel size in the coarse grid region was varied through a downsampling factor. No significant artifacts were found in either region for downsampling factors of up to 4×. For a typical extremity CBCT volume, this downsampling accelerates the reconstruction more than fivefold relative to a brute-force solution that applies the fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high-resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field of view. PMID:27694701
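The following 1D toy sketch illustrates the core idea of the multiresolution PWLS parameterization described above: unknowns live on a fine grid inside the ROI and on a coarse (downsampled) grid outside, joined by an upsampling operator, with a quadratic roughness penalty acting across the boundary. The forward matrix A, statistical weights W, and penalty are simplified stand-ins, not the paper's forward model or boundary-aware penalty.

```python
# 1D multiresolution PWLS sketch: fine-grid ROI + coarse-grid background,
# fitted by gradient descent on a penalized weighted least-squares objective.
import numpy as np

def upsample(coarse, factor):
    return np.repeat(coarse, factor)              # nearest-neighbour upsampling

def pwls_multires(A, W, y, n_fine, factor, beta=1e-2, iters=500, lr=0.05):
    n_total = A.shape[1]
    n_coarse = (n_total - n_fine) // factor
    xf = np.zeros(n_fine)                         # fine-grid unknowns (ROI)
    xc = np.zeros(n_coarse)                       # coarse-grid unknowns
    for _ in range(iters):
        x = np.concatenate([xf, upsample(xc, factor)])
        r = W * (A @ x - y)                       # weighted residual
        g = A.T @ r                               # data-fidelity gradient
        # Quadratic roughness penalty over the full volume, including the
        # fine/coarse boundary (the paper's boundary penalty is richer).
        g += beta * np.convolve(x, [-1.0, 2.0, -1.0], mode="same")
        xf -= lr * g[:n_fine]
        # Chain rule through upsampling: sum gradients over each coarse voxel.
        xc -= lr * g[n_fine:].reshape(n_coarse, factor).sum(axis=1)
    return xf, xc

# Toy usage: 48 fine voxels (ROI) + 8 coarse voxels covering 32 native voxels.
rng = np.random.default_rng(0)
n_fine, factor, n_total = 48, 4, 80
A = rng.standard_normal((120, n_total)) / np.sqrt(n_total)
x_true = np.zeros(n_total); x_true[10:30] = 1.0
W = np.ones(120)
y = A @ x_true + 0.01 * rng.standard_normal(120)
xf, xc = pwls_multires(A, W, y, n_fine, factor)
```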
SU-D-12A-06: A Comprehensive Parameter Analysis for Low Dose Cone-Beam CT Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, W; Southern Medical University, Guangzhou; Yan, H
Purpose: Compressive sensing based iterative reconstruction (IR) methods for low-dose cone-beam CT (CBCT) always include a parameter that controls the weight of the regularization relative to the data fidelity. A clear understanding of the relationship between image quality and the parameter value is important. The purpose of this study is to investigate this subject using experimental data and a representative advanced IR algorithm with tight-frame (TF) regularization. Methods: Three data sets of a Catphan phantom, acquired at low, regular, and high dose levels, are used. For each test, 90 projections covering a 200-degree scan range are used for reconstruction. Three regions of interest (ROIs) of different contrasts are used to calculate contrast-to-noise ratios (CNR) for contrast evaluation. A single point structure is used to measure the modulation transfer function (MTF) for spatial-resolution evaluation. Finally, we analyze the CNRs and MTFs to study the relationship between image quality and parameter selection. Results: It was found that: 1) there is no universal optimal parameter; the optimal value depends on the specific task and dose level. 2) There is a clear trade-off between CNR and resolution; the parameter giving the best CNR is always smaller than that giving the best resolution. 3) Optimal parameters are also dose-specific: data acquired under a high-dose protocol require less regularization, yielding smaller optimal parameter values. 4) Compared with conventional FDK images, TF-based CBCT images are better under optimally selected parameters, and the advantage is more pronounced for low-dose data. Conclusion: We have investigated the relationship between image quality and parameter values in the TF-based IR algorithm. Preliminary results indicate that optimal parameters are specific to both the task type and the dose level, providing guidance for parameter selection in advanced IR algorithms. This work is supported in part by NIH (1R01CA154747-01).
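A minimal sketch of the kind of parameter sweep described above: reconstruct at several regularization weights and record the CNR of a fixed ROI. A toy quadratic-smoothness denoiser stands in for the tight-frame IR algorithm, and the phantom, ROI locations, and weight grid are illustrative assumptions.

```python
# Sweep the regularization weight and record CNR at each value.
import numpy as np

def cnr(image, roi, background):
    """Contrast-to-noise ratio between an ROI and a background region."""
    return abs(image[roi].mean() - image[background].mean()) / image[background].std()

def reconstruct(noisy, weight, iters=200, lr=0.05):
    """Toy 'reconstruction': quadratic-smoothness denoising of a 2D image."""
    x = noisy.copy()
    for _ in range(iters):
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x -= lr * ((x - noisy) - weight * lap)    # gradient of the objective
    return x

rng = np.random.default_rng(0)
phantom = np.zeros((64, 64)); phantom[24:40, 24:40] = 1.0
noisy = phantom + 0.3 * rng.standard_normal(phantom.shape)
roi = (slice(28, 36), slice(28, 36))
bg = (slice(4, 16), slice(4, 16))
for w in [0.0, 0.1, 0.5, 1.0, 2.0]:
    print(f"weight={w:4.1f}  CNR={cnr(reconstruct(noisy, w), roi, bg):6.2f}")
```

As in the study, larger weights raise CNR at the cost of resolution, so the "best" weight depends on the task.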
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, J; Zhang, W; Lu, J
Purpose: To investigate the accuracy and feasibility of dose calculation using kilovoltage cone-beam computed tomography in cervical cancer radiotherapy with a correction algorithm. Methods: The Hounsfield unit (HU) to electron density (HU-density) curve was obtained for both the planning CT (pCT) and kilovoltage cone-beam CT (CBCT) using a CIRS-062 calibration phantom. Because pCT and kV-CBCT images have different HU values, directly applying the CBCT HU-density curve to dose calculation on CBCT images may introduce deviations in the dose distribution; it is therefore necessary to normalize the HU values between pCT and CBCT. A HU correction algorithm was applied to the CBCT images (cCBCT). Fifteen intensity-modulated radiation therapy (IMRT) plans for cervical cancer were chosen, and the plans were transferred to the pCT and cCBCT data sets without any changes for dose calculation. Phantom and patient studies were carried out. Dose differences and dose distributions were compared between the cCBCT and pCT plans. Results: The HU numbers of the CBCT were measured several times, and the maximum change was less than 2%. Compared with pCT, both CBCT and cCBCT showed discrepancies: in the phantom study, the dose differences for CBCT and cCBCT images were 2.48%±0.65% (range: 1.3%∼3.8%) and 0.48%±0.21% (range: 0.1%∼0.82%), respectively. For dose calculation on patient images, the dose differences were 2.25%±0.43% (range: 1.4%∼3.4%) and 0.63%±0.35% (range: 0.13%∼0.97%), respectively. For the dose distributions, the passing rate of cCBCT was higher than that of CBCT. Conclusion: CBCT-based dose calculation is feasible in cervical cancer radiotherapy, and the correction algorithm offers acceptable accuracy. It can become a useful tool for adaptive radiation therapy.
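A minimal sketch of the two calibration steps implied above, assuming a linear CBCT-to-pCT HU correction and a piecewise-linear HU-to-electron-density lookup; all insert HU values and density points below are illustrative, not the CIRS-062 calibration.

```python
# HU normalization (CBCT -> pCT scale) followed by HU-to-density lookup.
import numpy as np

# Paired mean HU measurements of the same calibration inserts on pCT and CBCT
# (illustrative numbers).
hu_cbct_inserts = np.array([-950.0, -780.0, -60.0, 0.0, 220.0, 830.0])
hu_pct_inserts  = np.array([-980.0, -800.0, -50.0, 0.0, 250.0, 900.0])

# Fit the CBCT->pCT correction as a linear map (least squares).
slope, intercept = np.polyfit(hu_cbct_inserts, hu_pct_inserts, 1)

def correct_cbct(hu_cbct):
    """Map raw CBCT HU values onto the planning-CT HU scale (cCBCT)."""
    return slope * np.asarray(hu_cbct, dtype=float) + intercept

# pCT HU-to-relative-electron-density calibration curve (illustrative).
hu_curve  = np.array([-1000.0, -800.0, -100.0, 0.0, 300.0, 1000.0])
red_curve = np.array([0.0, 0.19, 0.93, 1.0, 1.15, 1.55])

def hu_to_density(hu):
    """Piecewise-linear HU -> relative electron density lookup."""
    return np.interp(hu, hu_curve, red_curve)

cbct_voxels = np.array([-900.0, -50.0, 400.0])
print(hu_to_density(correct_cbct(cbct_voxels)))
```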
Lievens, Yolande; Nulens, An; Gaber, Mousa Amr; Defraene, Gilles; De Wever, Walter; Stroobants, Sigrid; Van den Heuvel, Frank
2011-05-01
To evaluate the potential for dose escalation with intensity-modulated radiotherapy (IMRT) in positron emission tomography-based radiotherapy planning for locally advanced non-small-cell lung cancer (LA-NSCLC). For 35 LA-NSCLC patients, three-dimensional conformal radiotherapy and IMRT plans were made to a prescription dose (PD) of 66 Gy in 2-Gy fractions. Dose escalation was performed toward the maximal PD using secondary endpoint constraints for the lung, spinal cord, and heart, with de-escalation according to defined esophageal tolerance. Dose calculation was performed using the Eclipse pencil beam algorithm, and all plans were recalculated using a collapsed cone algorithm. Normal tissue complication probabilities (NTCP) were calculated for the lung (Grade 2 pneumonitis) and esophagus (acute toxicity, Grade 2 or greater, and late toxicity). IMRT resulted in statistically significant decreases in the mean lung (p < .0001) and maximal spinal cord (p = .002 and .0005) doses, allowing an average increase in the PD of 8.6-14.2 Gy (p ≤ .0001). This advantage was lost after de-escalation within the defined esophageal dose limits. The lung NTCP values were significantly lower for IMRT (p < .0001), even after dose escalation. For esophageal toxicity, IMRT significantly decreased the acute NTCP values at the low dose levels (p = .0009 and p < .0001). After maximal dose escalation, late esophageal tolerance became critical (p < .0001), especially when using IMRT, owing to the parallel increases in the esophageal dose and PD. In LA-NSCLC, IMRT offers the potential to significantly escalate the PD, depending on lung and spinal cord tolerance. However, the parallel increase in esophageal dose abolished this advantage, even when using collapsed cone algorithms. This is important to consider in the context of concomitant chemoradiotherapy schedules using IMRT. Copyright © 2011 Elsevier Inc. All rights reserved.
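The abstract does not state which NTCP model was used; as a hedged illustration, the sketch below evaluates the widely used Lyman-Kutcher-Burman (LKB) model on a toy differential dose-volume histogram, with illustrative (not the study's) parameters TD50, m, and n.

```python
# LKB NTCP on a differential DVH: NTCP = Phi((gEUD - TD50) / (m * TD50)).
import numpy as np
from scipy.stats import norm

def geud(doses, volumes, n):
    """Generalized EUD from differential DVH bins (doses in Gy)."""
    volumes = np.asarray(volumes, dtype=float)
    volumes = volumes / volumes.sum()             # normalize bin volumes
    a = 1.0 / n
    return (volumes * np.asarray(doses, dtype=float) ** a).sum() ** (1.0 / a)

def ntcp_lkb(doses, volumes, td50, m, n):
    """LKB model: probit of the normalized gEUD excess over TD50."""
    t = (geud(doses, volumes, n) - td50) / (m * td50)
    return norm.cdf(t)

# Toy differential DVH for lung and illustrative LKB parameters.
dose_bins = np.arange(0.0, 70.0, 2.0)
vol_bins = np.exp(-dose_bins / 15.0)              # toy volume distribution
print(f"NTCP = {ntcp_lkb(dose_bins, vol_bins, td50=30.0, m=0.37, n=1.0):.3f}")
```

With n = 1 the gEUD reduces to the mean dose, which is why mean lung dose reductions such as those reported above translate directly into lower pneumonitis NTCP in this model.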
SUPERPOSITION OF POLYTROPES IN THE INNER HELIOSHEATH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livadiotis, G., E-mail: glivadiotis@swri.edu
2016-03-15
This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density–temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log–log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.
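A minimal sketch of the data analysis implied above: under a single polytrope, log T versus log ρ is a straight line with slope ν − 1, while a superposition of indices bends it into a concave-downward parabola, so fitting a quadratic in log ρ recovers the curvature and an estimate of the mean index. The synthetic data and coefficients below are illustrative, not heliosheath measurements.

```python
# Fit the generalized (parabolic) density-temperature relation in log-log space.
import numpy as np

rng = np.random.default_rng(1)
log_rho = np.linspace(-2.0, 2.0, 200)
# Synthetic concave-downward relation plus noise (coefficients illustrative).
log_T = 0.1 * log_rho - 0.05 * log_rho**2 + 0.02 * rng.standard_normal(200)

# Quadratic fit: log T = c2*(log rho)^2 + c1*(log rho) + c0.
c2, c1, c0 = np.polyfit(log_rho, log_T, 2)
print(f"curvature c2 = {c2:+.3f} (negative => concave downward)")
print(f"slope at log rho = 0: nu - 1 ≈ {c1:+.3f} => mean nu ≈ {1 + c1:.2f}")
```

A mean index near zero, as reported above, corresponds to a near-isobaric process (T roughly inversely proportional to ρ is ν = 0 in the P ∝ ρ^ν convention; the sign conventions here follow that assumption).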
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubrovsky, V. G.; Topovsky, A. V.
New exact solutions of the Veselov-Novikov (VN) equation, nonstationary and stationary, in the form of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, …, N, are constructed via the Zakharov-Manakov ∂̄-dressing method. Simple nonlinear superpositions are represented, up to a constant, by the sums of solutions u^(n) and are calculated by ∂̄-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. Remarkably, in the zero-energy limit these simple nonlinear superpositions convert to linear ones in the form of sums of the special solutions u^(n). It is shown that the sums u = u^(k_1) + … + u^(k_m), 1 ≤ k_1 < k_2 < … < k_m ≤ N, over arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction, these exact solutions also represent new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.
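For readability, the superposition property stated in the abstract can be restated compactly; the typesetting below is assumed, with notation following the abstract.

```latex
% Superposition of exact VN solutions, restated from the abstract above.
% Given exact special solutions u^{(n)}, n = 1, \dots, N, any subset sum
%   u = u^{(k_1)} + \cdots + u^{(k_m)}, \quad 1 \le k_1 < k_2 < \cdots < k_m \le N,
% is again an exact solution of the VN equation, and u also acts as a
% transparent potential of the first auxiliary linear problem:
\[
  \bigl(-\partial_x^2 - \partial_y^2 + u(x,y)\bigr)\,\psi = E\,\psi .
\]
```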
Superposition of Polytropes in the Inner Heliosheath
NASA Astrophysics Data System (ADS)
Livadiotis, G.
2016-03-01
This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ˜ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.
Limited data tomographic image reconstruction via dual formulation of total variation minimization
NASA Astrophysics Data System (ADS)
Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong
2011-03-01
X-ray mammography is the primary imaging modality for breast cancer screening. For the dense breast, however, the mammogram is usually difficult to read because of tissue overlap caused by the superposition of normal tissues. Digital breast tomosynthesis (DBT), which measures several low-dose projections over a limited angular range, may be an alternative modality for breast imaging, since it allows visualization of cross-sectional information of the breast. DBT, however, may suffer from aliasing artifacts and severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on a statistical model of X-ray tomography. The objective function comprises a data-fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be calculated using simple operations on auxiliary variables. After each descent step, the data-fidelity term is updated. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inversion, it provides an efficient way to include TV regularization in the statistical reconstruction method, resulting in fast and robust estimation from low-dose projections over the limited angular range. Initial tests with an experimental DBT system confirmed our findings.
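The dual TV formulation the authors build on is most easily seen in the denoising special case; the sketch below implements a Chambolle-style projected-gradient iteration on the dual (vector) field p, recovering the denoised image as y − λ·div p. The paper couples such a TV step with a statistical X-ray data-fidelity term; only the generic TV building block, with illustrative settings, is shown here.

```python
# Dual-formulation TV denoising: min_u 0.5*||u - y||^2 + lam*TV(u), solved by
# projected-gradient iteration on the dual field p with |p| <= 1 pointwise.
import numpy as np

def grad(u):
    """Forward-difference image gradient (periodic boundary)."""
    return np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    return (px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1))

def tv_denoise(y, lam=0.2, iters=200, tau=0.125):
    """Chambolle-style dual iteration; tau <= 1/8 ensures convergence."""
    px, py = np.zeros_like(y), np.zeros_like(y)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - y / lam)
        px_t, py_t = px + tau * gx, py + tau * gy
        scale = np.maximum(1.0, np.hypot(px_t, py_t))  # project onto |p| <= 1
        px, py = px_t / scale, py_t / scale
    return y - lam * div(px, py)

# Toy usage: denoise a noisy step image.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[:, 32:] = 1.0
noisy = img + 0.2 * rng.standard_normal(img.shape)
clean = tv_denoise(noisy, lam=0.3)
```

As in the abstract, every update uses only simple pointwise and shift operations, with no matrix inversion, which is what makes the dual formulation attractive for limited-angle, low-dose reconstruction.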