Science.gov

Sample records for acenocoumarol dosing algorithm

  1. Efficiency and effectiveness of the use of an acenocoumarol pharmacogenetic dosing algorithm versus usual care in patients with venous thromboembolic disease initiating oral anticoagulation: study protocol for a randomized controlled trial

    PubMed Central

    2012-01-01

    Background: Hemorrhagic events are frequent in patients treated with anti-vitamin-K oral anticoagulants due to their narrow therapeutic margin. Studies performed with acenocoumarol have shown the relationship between demographic, clinical and genotypic variants and the response to these drugs. Once the influence of these genetic and clinical factors on the dose of acenocoumarol needed to maintain a stable international normalized ratio (INR) has been demonstrated, new strategies need to be developed to predict the appropriate doses of this drug. Several pharmacogenetic algorithms have been developed for warfarin, but only three have been developed for acenocoumarol. After the development of a pharmacogenetic algorithm, the obvious next step is to demonstrate its effectiveness and utility by means of a randomized controlled trial. The aim of this study is to evaluate the effectiveness and efficiency of an acenocoumarol dosing algorithm developed by our group, which includes demographic, clinical and pharmacogenetic variables (VKORC1, CYP2C9, CYP4F2 and ApoE), in patients with venous thromboembolism (VTE). Methods and design: This is a multicenter, single-blind, randomized controlled clinical trial. The protocol has been approved by La Paz University Hospital Research Ethics Committee and by the Spanish Drug Agency. Two hundred and forty patients with VTE in whom oral anticoagulant therapy is indicated will be included. Randomization (case/control 1:1) will be stratified by center. The acenocoumarol dose in the control group will be scheduled and adjusted following common clinical practice; in the experimental arm, dosing will follow an individualized algorithm developed and validated by our group. Patients will be followed for three months. The main endpoints are: 1) percentage of patients with INR within the therapeutic range on day seven after initiation of oral anticoagulant therapy; 2) time from the start of oral anticoagulant treatment to achievement of a

  2. Pretreatment with low doses of acenocoumarol inhibits the development of acute ischemia/reperfusion-induced pancreatitis.

    PubMed

    Warzecha, Z; Sendur, P; Ceranowicz, P; Dembinski, M; Cieszkowski, J; Kusnierz-Cabala, B; Tomaszewska, R; Dembinski, A

    2015-10-01

    Coagulative disorders are known to occur in acute pancreatitis and are related to the severity of this disease. Various experimental and clinical studies have shown a protective and therapeutic effect of heparin in acute pancreatitis. The aim of the present study was to determine the influence of acenocoumarol, a vitamin K antagonist, on the development of acute pancreatitis. Studies were performed on male Wistar rats weighing 250-270 g. Acenocoumarol at the dose of 50, 100 or 150 μg/kg/dose, or vehicle, was administered once a day for 7 days before induction of acute pancreatitis. Acute pancreatitis was induced in rats by pancreatic ischemia followed by reperfusion. The severity of acute pancreatitis was assessed after 5 h of reperfusion. Pretreatment with acenocoumarol given at the dose of 50 or 100 μg/kg/dose reduced morphological signs of acute pancreatitis. These effects were accompanied by a decrease in the pancreatitis-evoked increase in serum activity of lipase and serum concentration of pro-inflammatory interleukin-1β. Moreover, the pancreatitis-evoked reductions in pancreatic DNA synthesis and pancreatic blood flow were partially reversed by pretreatment with acenocoumarol given at the dose of 50 or 100 μg/kg/dose. Administration of acenocoumarol at the dose of 150 μg/kg/dose did not exhibit any protective effect against ischemia/reperfusion-induced pancreatitis. We conclude that pretreatment with low doses of acenocoumarol reduces the severity of ischemia/reperfusion-induced acute pancreatitis.

  3. A New Pharmacogenetic Algorithm to Predict the Most Appropriate Dosage of Acenocoumarol for Stable Anticoagulation in a Mixed Spanish Population.

    PubMed

    Tong, Hoi Y; Dávila-Fajardo, Cristina Lucía; Borobia, Alberto M; Martínez-González, Luis Javier; Lubomirov, Rubin; Perea León, Laura María; Blanco Bañares, María J; Díaz-Villamarín, Xando; Fernández-Capitán, Carmen; Cabeza Barrera, José; Carcas, Antonio J

    2016-01-01

    There is a strong association between genetic polymorphisms and acenocoumarol dosage requirements. Genotyping the polymorphisms involved in the pharmacokinetics and pharmacodynamics of acenocoumarol before starting anticoagulant therapy would result in a better quality of life and a more efficient use of healthcare resources. The objective of this study is to develop a new algorithm that includes clinical and genetic variables to predict the most appropriate acenocoumarol dosage for stable anticoagulation in a wide range of patients. We recruited 685 patients from 2 Spanish hospitals and 1 primary healthcare center. We randomly chose 80% of the patients (n = 556), considering an equitable distribution of genotypes, to form the generation cohort. The remaining 20% (n = 129) formed the validation cohort. Multiple linear regression was used to generate the algorithm, using the acenocoumarol stable dosage as the dependent variable and the clinical and genotypic variables as the independent variables. The variables included in the algorithm were age, weight, amiodarone use, enzyme-inducer status, international normalized ratio target range and the presence of CYP2C9*2 (rs1799853), CYP2C9*3 (rs1057910), VKORC1 (rs9923231) and CYP4F2 (rs2108622). The coefficient of determination (R2) of the algorithm was 52.8% in the generation cohort and 64% in the validation cohort. The R2 values by pathology were: atrial fibrillation, 57.4%; valve replacement, 56.3%; and venous thromboembolic disease, 51.5%. When the patients were classified into 3 dosage groups according to the stable dosage (<11 mg/week, 11-21 mg/week, >21 mg/week), the percentage of correctly classified patients was higher in the intermediate group, whereas differences between pharmacogenetic and clinical algorithms increased in the extreme dosage groups. Our algorithm could improve acenocoumarol dosage selection for patients who will begin treatment with this drug, especially in
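
    To make the form of such a regression-based dosing algorithm concrete, the sketch below maps clinical and genotypic predictors to a weekly acenocoumarol dose. It is a minimal illustration only: the intercept, all coefficients and the function name are hypothetical placeholders, not the coefficients published by the study above.

        # Illustrative multiple-linear-regression dosing sketch. Every numeric
        # coefficient is a made-up placeholder, NOT the published model.
        def predict_weekly_dose_mg(age_years, weight_kg, amiodarone, enzyme_inducer,
                                   high_inr_target, cyp2c9_star2, cyp2c9_star3,
                                   vkorc1_variants, cyp4f2_variants):
            """Predicted stable dose in mg/week; genotype args count variant alleles."""
            dose = (30.0                      # hypothetical intercept
                    - 0.15 * age_years        # requirement falls with age
                    + 0.10 * weight_kg
                    - 4.0 * amiodarone        # 1 if on amiodarone, else 0
                    + 5.0 * enzyme_inducer    # 1 if on an enzyme inducer, else 0
                    + 3.0 * high_inr_target   # 1 for a high INR target range
                    - 2.5 * cyp2c9_star2      # reduced-function CYP2C9 alleles
                    - 4.5 * cyp2c9_star3
                    - 6.0 * vkorc1_variants   # VKORC1 rs9923231 variant alleles
                    + 1.0 * cyp4f2_variants)  # CYP4F2 rs2108622 variant alleles
            return max(dose, 0.0)

        print(predict_weekly_dose_mg(70, 80, 0, 0, 0, 1, 0, 1, 0))  # 19.0 mg/week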

  4. Dosing algorithms for vitamin K antagonists across VKORC1 and CYP2C9 genotypes.

    PubMed

    Baranova, E V; Verhoef, T I; Ragia, G; le Cessie, S; Asselbergs, F W; de Boer, A; Manolopoulos, V G; Maitland-van der Zee, A H

    2017-03-01

    Essentials:
    - Prospective studies of pharmacogenetic-guided (PG) coumarin dosing produced varying results.
    - EU-PACT acenocoumarol and phenprocoumon trials compared PG and non-PG dosing algorithms.
    - Sub-analysis of EU-PACT identified differences between trial arms across VKORC1-CYP2C9 groups.
    - Adjustment of the PG algorithm might lead to a higher benefit of genotyping.

  5. Protective Effect of Pretreatment with Acenocoumarol in Cerulein-Induced Acute Pancreatitis

    PubMed Central

    Warzecha, Zygmunt; Sendur, Paweł; Ceranowicz, Piotr; Dembiński, Marcin; Cieszkowski, Jakub; Kuśnierz-Cabala, Beata; Olszanecki, Rafał; Tomaszewska, Romana; Ambroży, Tadeusz; Dembiński, Artur

    2016-01-01

    Coagulation is recognized as a key player in inflammatory and autoimmune diseases. The aim of the current research was to examine the effect of pretreatment with acenocoumarol on the development of acute pancreatitis (AP) evoked by cerulein. Methods: AP was induced in rats by cerulein administered intraperitoneally. Acenocoumarol (50, 100 or 150 µg/kg/dose/day) or saline was given once daily for seven days before AP induction. Results: In rats with AP, pretreatment with acenocoumarol administered at the dose of 50 or 100 µg/kg/dose/day improved pancreatic histology, reducing the degree of edema, inflammatory infiltration and vacuolization of acinar cells. Moreover, pretreatment with acenocoumarol given at the dose of 50 or 100 µg/kg/dose/day reduced the AP-evoked increase in pancreatic weight, serum activity of amylase and lipase, and serum concentration of pro-inflammatory interleukin-1β, as well as ameliorated pancreatic DNA synthesis and pancreatic blood flow. In contrast, acenocoumarol given at the dose of 150 μg/kg/dose/day did not exhibit any protective effect against cerulein-induced pancreatitis. Conclusion: Low doses of acenocoumarol, given before induction of AP by cerulein, inhibit the development of this inflammation. PMID:27754317

  6. Comparison of tinzaparin and acenocoumarol for the secondary prevention of venous thromboembolism: a multicentre, randomized study.

    PubMed

    Pérez-de-Llano, Luis A; Leiro-Fernández, Virginia; Golpe, Rafael; Núñez-Delgado, Jose M; Palacios-Bartolomé, Ana; Méndez-Marote, Lidia; Colomé-Nafria, Esteve

    2010-12-01

    The objective of the present study was to evaluate the efficacy, safety and healthcare resource utilization of long-term treatment with tinzaparin in symptomatic patients with acute pulmonary embolism, as compared to standard therapy. In this open-label trial, 102 patients with objectively confirmed pulmonary embolism were randomized to receive, after initial treatment with tinzaparin, either tinzaparin (175 IU/kg/day) or international normalized ratio-adjusted acenocoumarol for 6 months. Clinical endpoints were assessed during the 6 months of treatment. A pharmacoeconomic analysis was carried out to evaluate the cost of long-term treatment with tinzaparin in comparison with standard therapy. In an intention-to-treat analysis, one of 52 patients developed recurrent venous thromboembolism in the tinzaparin group, compared with none of the 50 patients in the acenocoumarol group. One patient in each group had a major haemorrhagic complication. Six patients in the acenocoumarol group had minor bleeding, compared with none in the tinzaparin group (P = 0.027). Median hospital length of stay was shorter in the tinzaparin group than in the acenocoumarol group (7 versus 9 days; P = 0.014). When all the direct and indirect cost components were combined for the entire population, we found a slight, statistically nonsignificant reduction in total cost with tinzaparin (mean difference €345; 95% CI -€1382 to €2071; P = 0.69). Treatment of symptomatic acute pulmonary embolism with full therapeutic doses of tinzaparin for 6 months is a feasible alternative to conventional treatment with vitamin K antagonists.

  7. Acenocoumarol sensitivity and pharmacokinetic characterization of CYP2C9 *5/*8,*8/*11,*9/*11 and VKORC1*2 in black African healthy Beninese subjects.

    PubMed

    Allabi, Aurel Constant; Horsmans, Yves; Alvarez, Jean-Claude; Bigot, André; Verbeeck, Roger K; Yasar, Umit; Gala, Jean-Luc

    2012-06-01

    This study aimed at investigating the contribution of CYP2C9 and VKORC1 genetic polymorphisms to the inter-individual variability of acenocoumarol pharmacokinetics and pharmacodynamics in Black Africans from Benin. Fifty-one healthy volunteers were genotyped for the VKORC1 1173C>T polymorphism. All of the subjects had previously been genotyped for the CYP2C9*5, CYP2C9*6, CYP2C9*8, CYP2C9*9 and CYP2C9*11 alleles. Thirty-six subjects were phenotyped with a single 8 mg oral dose of acenocoumarol by measuring plasma concentrations of (R)- and (S)-acenocoumarol 8 and 24 h after administration using chiral liquid chromatography-tandem mass spectrometry. International normalized ratio (INR) values were determined prior to and 24 h after drug intake. The allele frequency of the VKORC1 variant (1173C>T) was 1.96% (95% CI 0.0-4.65%). The INR values did not show a statistically significant difference between the CYP2C9 genotypes, but were correlated with body mass index and age at 24 h post-dosing (P < 0.05). At 8 h post-dose, the (S)-acenocoumarol concentrations in the CYP2C9*5/*8 and CYP2C9*9/*11 genotypes were about 1.9- and 5.1-fold higher compared with the CYP2C9*1/*1 genotype, and 2.2- and 6.0-fold higher compared with the CYP2C9*1/*9 group, respectively. The results indicated that the pharmacodynamic response to acenocoumarol is highly variable between subjects. This variability seems to be associated with the CYP2C9*5/*8 and *9/*11 variants and demographic factors (age and weight) in Beninese subjects. The significant association between plasma (S)-acenocoumarol concentration and CYP2C9 genotypes supports the use of (S)-acenocoumarol for phenotyping purposes. A larger number of subjects is needed to study the effect of the VKORC1 1173C>T variant due to its low frequency in the Beninese population.

  8. A New Proton Dose Algorithm for Radiotherapy

    NASA Astrophysics Data System (ADS)

    Lee, Chungchi (Chris).

    This algorithm recursively propagates the proton distribution in energy, angle and space from one level in an absorbing medium to another at slightly greater depth, until all the protons are stopped. The angular transition density describing the proton trajectory is based on Molière's multiple scattering theory and on Vavilov's theory of energy loss along the proton's path increment. These multiple scattering and energy loss distributions are sampled using equal probability spacing to optimize computational speed while maintaining calculation accuracy. Nuclear interactions are accounted for by using a simple exponential expression to describe the loss of protons along a given path increment; the fraction of the original energy retained by the proton is deposited locally. Two levels of testing for the algorithm are provided: (1) absolute dose comparisons with PTRAN Monte Carlo simulations in homogeneous water media; (2) modeling of a fixed beam line, including the scattering system and range modulator, and comparisons with measured data in a homogeneous water phantom. The dose accuracy of this algorithm is shown to be within +/-5% throughout the range of a 200-MeV proton when compared to measurements, except in the shoulder region of the lateral profile at the Bragg peak, where a dose difference as large as 11% can be found. The numerical algorithm has an adequate spatial accuracy of 3 mm. Measured data are not required as input.
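
    The nuclear-interaction bookkeeping described above can be sketched in a few lines. This is a toy under stated assumptions: the mean free path, the locally deposited energy fraction and the constant stopping power passed in by the caller are illustrative numbers, not values from the paper.

        import math

        MFP_CM = 120.0        # hypothetical nuclear mean free path in water (cm)
        LOCAL_FRACTION = 0.6  # hypothetical share of removed-proton energy kept locally

        def step(fluence, energy_mev, dz_cm, stopping_power_mev_per_cm):
            """Advance one path increment; return (fluence, energy, deposited dose)."""
            removed = fluence * (1.0 - math.exp(-dz_cm / MFP_CM))  # exponential loss
            surviving = fluence - removed
            de = stopping_power_mev_per_cm * dz_cm                 # electronic loss
            deposited = surviving * de + removed * energy_mev * LOCAL_FRACTION
            return surviving, energy_mev - de, deposited

        print(step(1.0, 150.0, 0.5, 5.0))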

  9. Protective Effect of Pretreatment with Acenocoumarol in Cerulein-Induced Acute Pancreatitis.

    PubMed

    Warzecha, Zygmunt; Sendur, Paweł; Ceranowicz, Piotr; Dembiński, Marcin; Cieszkowski, Jakub; Kuśnierz-Cabala, Beata; Olszanecki, Rafał; Tomaszewska, Romana; Ambroży, Tadeusz; Dembiński, Artur

    2016-10-12

    Coagulation is recognized as a key player in inflammatory and autoimmune diseases. The aim of the current research was to examine the effect of pretreatment with acenocoumarol on the development of acute pancreatitis (AP) evoked by cerulein.

  10. Dose Calculation Accuracy of the Monte Carlo Algorithm for CyberKnife Compared with Other Commercially Available Dose Calculation Algorithms

    SciTech Connect

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.

  11. Dose calculation accuracy of the Monte Carlo algorithm for CyberKnife compared with other commercially available dose calculation algorithms.

    PubMed

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.
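
    The gamma analysis quoted in both records above can be illustrated with a minimal 1D implementation of the 3%/3 mm criterion. The profiles are toy data, and global normalization to the reference maximum is an assumption.

        import numpy as np

        def gamma_pass_rate(ref, meas, x_mm, dose_tol=0.03, dist_mm=3.0):
            """Percentage of points with gamma <= 1 for a 1D dose profile."""
            d_norm = dose_tol * ref.max()   # global dose normalization
            gammas = []
            for xi, mi in zip(x_mm, meas):
                # capital-Gamma over all reference points; gamma is its minimum
                g2 = ((x_mm - xi) / dist_mm) ** 2 + ((ref - mi) / d_norm) ** 2
                gammas.append(np.sqrt(g2.min()))
            return 100.0 * np.mean(np.array(gammas) <= 1.0)

        x = np.linspace(0.0, 100.0, 201)                  # positions in mm
        ref = np.exp(-((x - 50.0) / 15.0) ** 2)           # toy reference profile
        meas = 1.02 * np.exp(-((x - 50.5) / 15.0) ** 2)   # toy measured profile
        print(f"gamma pass rate: {gamma_pass_rate(ref, meas, x):.1f}%")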

  12. An algorithm for unfolding neutron dose and dose equivalent from digitized recoil-particle tracks

    SciTech Connect

    Bolch, W.E.; Turner, J.E.; Hamm, R.N.

    1986-10-01

    Previous work had demonstrated the feasibility of a digital approach to neutron dosimetry. A Monte Carlo simulation code of one detector design utilizing the operating principles of time-projection chambers was completed. This thesis presents and verifies one version of the dosimeter's computer algorithm. This algorithm processes the output of the ORNL simulation code, but is applicable to all detectors capable of digitizing recoil-particle tracks. Key features include direct measurement of track lengths and identification of particle type for each registered event. The resulting dosimeter should allow more accurate determinations of neutron dose and dose equivalent compared with conventional dosimeters, which cannot measure these quantities directly. Verification of the algorithm was accomplished by running a variety of recoil particles through the simulated detector volume and comparing the resulting absorbed dose and dose equivalent to those unfolded by the algorithm.

  13. Optimization of warfarin dose by population-specific pharmacogenomic algorithm.

    PubMed

    Pavani, A; Naushad, S M; Rupasree, Y; Kumar, T R; Malempati, A R; Pinjala, R K; Mishra, R C; Kutala, V K

    2012-08-01

    To optimize the warfarin dose, a population-specific pharmacogenomic algorithm was developed using a multiple linear regression model with vitamin K intake and the cytochrome P450 2C9 (CYP2C9*2 and *3) and vitamin K epoxide reductase complex 1 (VKORC1*3, *4, D36Y and -1639 G>A) polymorphism profiles of subjects who attained therapeutic international normalized ratio as predictors. The new algorithm was validated by correlating with the Wadelius, International Warfarin Pharmacogenetics Consortium and Gage algorithms, and with the therapeutic dose (r=0.64, P<0.0001). The new algorithm was more accurate (overall: 0.89 vs 0.51; warfarin resistant: 0.96 vs 0.77; warfarin sensitive: 0.80 vs 0.24), more sensitive (0.87 vs 0.52) and more specific (0.93 vs 0.50) compared with clinical data. It significantly reduced the rates of overestimation (0.06 vs 0.50) and underestimation (0.13 vs 0.48). To conclude, this population-specific algorithm has greater clinical utility in optimizing the warfarin dose, thereby decreasing the adverse effects of a suboptimal dose.
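
    The accuracy, overestimation and underestimation figures quoted above can be reproduced mechanically once an agreement window is fixed. The sketch below uses a +/-20% window and invented doses; both are assumptions for illustration, since the abstract does not state the window used.

        def dose_metrics(predicted, actual, window=0.20):
            """Rates of accurate, over- and underestimated dose predictions."""
            over = sum(p > a * (1 + window) for p, a in zip(predicted, actual))
            under = sum(p < a * (1 - window) for p, a in zip(predicted, actual))
            n = len(actual)
            return {"accuracy": (n - over - under) / n,
                    "overestimation": over / n,
                    "underestimation": under / n}

        # Invented weekly doses (mg) purely to exercise the function.
        print(dose_metrics(predicted=[35, 28, 52, 24], actual=[32, 38, 50, 41]))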

  14. Fast dose algorithm for generation of dose coverage probability for robustness analysis of fractionated radiotherapy

    NASA Astrophysics Data System (ADS)

    Tilly, David; Ahnesjö, Anders

    2015-07-01

    A fast algorithm is constructed to facilitate dose calculation for a large number of randomly sampled treatment scenarios, each representing a possible realisation of a full treatment with geometric, fraction-specific displacements for an arbitrary number of fractions. The algorithm is applied to construct a dose volume coverage probability map (DVCM) based on dose calculated for several hundred treatment scenarios, to enable the probabilistic evaluation of a treatment plan. For each treatment scenario, the algorithm calculates the total dose by perturbing a dose pre-calculated for the nominal conditions, separately for the primary and scatter dose components. The ratio of the scenario-specific accumulated fluence to the average fluence for an infinite number of fractions is used to perturb the pre-calculated dose. Irregularities in the accumulated fluence may cause numerical instabilities in the ratio, which is mitigated by regularisation through convolution with a dose pencil kernel. Compared to full dose calculations the algorithm demonstrates a speedup factor of ~1000. The comparisons to full calculations show a 99% gamma index (2%/2 mm) pass rate for a single highly modulated beam in a virtual water phantom subject to setup errors during five fractions. The gamma comparison shows a 100% pass rate for a moving tumour irradiated by a single beam in a lung-like virtual phantom. DVCM iso-probability lines computed with the fast algorithm and with full dose calculation for each of the fractions, for a hypo-fractionated prostate case treated with rotational arc therapy, were almost indistinguishable.
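
    The central perturbation step lends itself to a short sketch: scale the nominal dose by the ratio of accumulated to expected fluence, after regularizing the ratio by convolution with a smoothing kernel. The Gaussian kernel stands in for the paper's dose pencil kernel, and all shapes and widths are illustrative assumptions.

        import numpy as np

        def perturbed_dose(nominal_dose, accumulated_fluence, expected_fluence,
                           kernel_sigma_voxels=2.0):
            """Scale a pre-calculated 1D dose by a regularized fluence ratio."""
            ratio = accumulated_fluence / np.clip(expected_fluence, 1e-9, None)
            radius = int(4 * kernel_sigma_voxels)
            t = np.arange(-radius, radius + 1)
            kernel = np.exp(-0.5 * (t / kernel_sigma_voxels) ** 2)
            kernel /= kernel.sum()
            smoothed = np.convolve(ratio, kernel, mode="same")  # regularization
            return nominal_dose * smoothed

        z = np.linspace(0.0, 1.0, 100)
        dose = np.exp(-4.0 * z)                       # toy nominal depth dose
        expected = np.ones_like(z)
        accumulated = 1.0 + 0.05 * np.sin(40.0 * z)   # irregular accumulated fluence
        print(perturbed_dose(dose, accumulated, expected)[:5].round(4))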

  15. Stability of the complexes of some lanthanides with coumarin derivatives. II. Neodymium(III)-acenocoumarol.

    PubMed

    Kostova, Irena; Manolov, Ilia; Radulova, Maritza

    2004-06-01

    A complex of neodymium(III) with 4-hydroxy-3-[1-(4-nitrophenyl)-3-oxobutyl]-2H-1-benzopyran-2-one (acenocoumarol) was synthesized by mixing water solutions of neodymium(III) nitrate and the ligand (metal-to-ligand molar ratio of 1:3). The complex was characterized and identified by elemental analysis, conductivity, IR, 1H NMR and mass spectral data. DTA and TGA were applied to study the composition of the compound. Elemental and mass spectral analysis of the complex indicated the formation of a compound of the composition NdR3·6H2O, where R = C19H14NO6(-). The reaction of neodymium(III) with acenocoumarol was studied in detail by the spectrophotometric method. The stepwise formation of three complexes, viz. NdR(2+), NdR2(+) and NdR3, was established in the pH region studied (pH 3.0-7.5). The equilibrium constants for the 1:1, 1:2 and 1:3 complexes were determined to be log K1 = 6.20 +/- 0.06, log K2 = 3.46 +/- 0.07 and log K3 = 2.58 +/- 0.05, respectively.
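
    As a worked check of the stepwise constants, the overall formation constant of NdR3 follows by adding the stepwise values on the log scale; the error propagation assumes independent uncertainties.

        import math

        # log(beta3) = log K1 + log K2 + log K3 for the overall formation constant.
        log_k = [6.20, 3.46, 2.58]
        err = [0.06, 0.07, 0.05]
        log_beta3 = sum(log_k)
        err_beta3 = math.sqrt(sum(e * e for e in err))  # independent uncertainties
        print(f"log beta3 = {log_beta3:.2f} +/- {err_beta3:.2f}")  # 12.24 +/- 0.10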

  16. A heterogeneous algorithm for PDT dose optimization for prostate

    NASA Astrophysics Data System (ADS)

    Altschuler, Martin D.; Zhu, Timothy C.; Hu, Yida; Finlay, Jarod C.; Dimofte, Andreea; Wang, Ken; Li, Jun; Cengel, Keith; Malkowicz, S. B.; Hahn, Stephen M.

    2009-02-01

    The object of this study is to develop optimization procedures that account for both the optical heterogeneity as well as photosensitizer (PS) drug distribution of the patient prostate and thereby enable delivery of uniform photodynamic dose to that gland. We use the heterogeneous optical properties measured for a patient prostate to calculate a light fluence kernel (table). PS distribution is then multiplied with the light fluence kernel to form the PDT dose kernel. The Cimmino feasibility algorithm, which is fast, linear, and always converges reliably, is applied as a search tool to choose the weights of the light sources to optimize PDT dose. Maximum and minimum PDT dose limits chosen for sample points in the prostate constrain the solution for the source strengths of the cylindrical diffuser fibers (CDF). We tested the Cimmino optimization procedures using the light fluence kernel generated for heterogeneous optical properties, and compared the optimized treatment plans with those obtained using homogeneous optical properties. To study how different photosensitizer distributions in the prostate affect optimization, comparisons of light fluence rate and PDT dose distributions were made with three distributions of photosensitizer: uniform, linear spatial distribution, and the measured PS distribution. The study shows that optimization of individual light source positions and intensities are feasible for the heterogeneous prostate during PDT.

  17. A heterogeneous algorithm for PDT dose optimization for prostate

    PubMed Central

    Altschuler, Martin D.; Zhu, Timothy C.; Hu, Yida; Finlay, Jarod C.; Dimofte, Andreea; Wang, Ken; Li, Jun; Cengel, Keith; Malkowicz, S.B.; Hahn, Stephen M.

    2015-01-01

    The object of this study is to develop optimization procedures that account for both the optical heterogeneity as well as photosensitizer (PS) drug distribution of the patient prostate and thereby enable delivery of uniform photodynamic dose to that gland. We use the heterogeneous optical properties measured for a patient prostate to calculate a light fluence kernel (table). PS distribution is then multiplied with the light fluence kernel to form the PDT dose kernel. The Cimmino feasibility algorithm, which is fast, linear, and always converges reliably, is applied as a search tool to choose the weights of the light sources to optimize PDT dose. Maximum and minimum PDT dose limits chosen for sample points in the prostate constrain the solution for the source strengths of the cylindrical diffuser fibers (CDF). We tested the Cimmino optimization procedures using the light fluence kernel generated for heterogeneous optical properties, and compared the optimized treatment plans with those obtained using homogeneous optical properties. To study how different photosensitizer distributions in the prostate affect optimization, comparisons of light fluence rate and PDT dose distributions were made with three distributions of photosensitizer: uniform, linear spatial distribution, and the measured PS distribution. The study shows that optimization of individual light source positions and intensities are feasible for the heterogeneous prostate during PDT. PMID:25914793
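
    The Cimmino feasibility step named in both records above averages simultaneous projections onto the half-spaces defined by the minimum and maximum dose limits. The sketch below shows that iteration for nonnegative source strengths; the dose kernel and bounds are random illustrative stand-ins for the measured light fluence kernel.

        import numpy as np

        def cimmino(kernel, d_min, d_max, iters=500, relax=1.0):
            """kernel[i, j]: dose at sample point i per unit strength of source j."""
            m, n = kernel.shape
            x = np.zeros(n)
            row_norm2 = (kernel ** 2).sum(axis=1)        # ||a_i||^2 per constraint
            for _ in range(iters):
                dose = kernel @ x
                # signed violation of each two-sided dose constraint
                viol = np.where(dose > d_max, dose - d_max,
                                np.where(dose < d_min, dose - d_min, 0.0))
                step = (viol / row_norm2) @ kernel / m   # averaged projections
                x = np.maximum(x - relax * step, 0.0)    # keep strengths nonnegative
            return x

        rng = np.random.default_rng(0)
        K = rng.uniform(0.1, 1.0, size=(40, 8))          # illustrative dose kernel
        w = cimmino(K, d_min=np.full(40, 8.0), d_max=np.full(40, 12.0))
        print((K @ w).round(1))                          # doses moved toward [8, 12]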

  18. Performance of dose calculation algorithms from three generations in lung SBRT: comparison with full Monte Carlo-based dose distributions.

    PubMed

    Ojala, Jarkko J; Kapanen, Mika K; Hyödynmaa, Simo J; Wigren, Tuija K; Pitkänen, Maunu A

    2014-03-06

    The accuracy of dose calculation is a key challenge in stereotactic body radiotherapy (SBRT) of the lung. We have benchmarked three photon beam dose calculation algorithms--pencil beam convolution (PBC), anisotropic analytical algorithm (AAA), and Acuros XB (AXB)--implemented in a commercial treatment planning system (TPS), Varian Eclipse. Dose distributions from full Monte Carlo (MC) simulations were regarded as a reference. In the first stage, for four patients with central lung tumors, treatment plans using the 3D conformal radiotherapy (CRT) technique applying 6 MV photon beams were made using the AXB algorithm, with planning criteria according to the Nordic SBRT study group. The plans were recalculated (with the same number of monitor units (MUs) and identical field settings) using the BEAMnrc and DOSXYZnrc MC codes. The MC-calculated dose distributions were compared to the corresponding AXB-calculated dose distributions to assess the accuracy of the AXB algorithm, to which the other TPS algorithms were then compared. In the second stage, treatment plans were made for ten patients with the 3D CRT technique using both the PBC algorithm and the AAA. The plans were recalculated (with the same number of MUs and identical field settings) with the AXB algorithm, then compared to the original plans. Throughout the study, the comparisons were made as a function of the size of the planning target volume (PTV), using various dose-volume histogram (DVH) and other parameters to quantitatively assess plan quality. In the first stage, 3D gamma analyses with threshold criteria of 3%/3 mm and 2%/2 mm were also applied. The AXB-calculated dose distributions showed a relatively high level of agreement with the full MC simulation in light of the 3D gamma analysis and DVH comparison, especially with large PTVs, but with smaller PTVs larger discrepancies were found. Gamma agreement index (GAI) values between 95.5% and 99.6% were achieved for all the plans with the threshold criteria 3%/3 mm, but 2%/2 mm

  19. Global convergence analysis of fast multiobjective gradient-based dose optimization algorithms for high-dose-rate brachytherapy.

    PubMed

    Lahanas, M; Baltas, D; Giannouli, S

    2003-03-07

    We consider the problem of the global convergence of gradient-based optimization algorithms for interstitial high-dose-rate (HDR) brachytherapy dose optimization using variance-based objectives. Possible local minima could lead to only sub-optimal solutions. We perform a configuration space analysis using a representative set of the entire non-dominated solution space. A set of three prostate implants is used in this study. We compare the results obtained by conjugate gradient algorithms, two variable metric algorithms and fast simulated annealing. For the variable metric algorithm BFGS from Numerical Recipes, large fluctuations are observed. The limited-memory L-BFGS algorithm and the conjugate gradient algorithm FRPR are globally convergent. Local minima or degenerate states are not observed. We study the possibility of obtaining a representative set of non-dominated solutions using optimal solution rearrangement and a warm start mechanism. For the surface and volume dose variance and their derivatives, a method is proposed which significantly reduces the number of required operations. The optimization time, ignoring a preprocessing step, is independent of the number of sampling points in the planning target volume. Multiobjective dose optimization in HDR brachytherapy using L-BFGS and a new modified computation method for the objectives and derivatives has been accelerated, depending on the number of sampling points, by a factor in the range 10-100.
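
    A minimal reproduction of the variance-objective setup with a limited-memory BFGS method is sketched below using SciPy's L-BFGS-B (standing in for the paper's L-BFGS implementation); the dose kernel, prescription and weight bounds are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        K = rng.uniform(0.05, 1.0, size=(200, 20))  # dose per unit dwell weight
        d_presc = 10.0                              # prescription at sample points

        def objective(x):
            """Variance-style objective: mean squared deviation from prescription."""
            r = K @ x - d_presc
            return float(np.mean(r ** 2))

        def gradient(x):
            return 2.0 * K.T @ (K @ x - d_presc) / K.shape[0]

        res = minimize(objective, x0=np.ones(20), jac=gradient,
                       method="L-BFGS-B", bounds=[(0.0, None)] * 20)
        print(round(res.fun, 4), res.x.round(2))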

  20. Comparison of dose distributions calculated by the CyberKnife Monte Carlo and ray tracing algorithms for lung tumors: a phantom study

    NASA Astrophysics Data System (ADS)

    Koksal, Canan; Akbas, Ugur; Okutan, Murat; Demir, Bayram; Hakki Sarpun, Ismail

    2015-07-01

    Commercial treatment planning systems with different dose calculation algorithms have been developed for radiotherapy planning. The Ray Tracing and Monte Carlo dose calculation algorithms are available in the MultiPlan treatment planning system. Many studies have indicated that the Monte Carlo algorithm yields more accurate dose distributions in heterogeneous regions, such as lung, than the Ray Tracing algorithm. The purpose of this study was to compare the Ray Tracing algorithm with the Monte Carlo algorithm for lung tumors in the CyberKnife System. An Alderson Rando anthropomorphic phantom was used for creating CyberKnife treatment plans. The treatment plan was developed using the Ray Tracing algorithm. Then, this plan was recalculated with the Monte Carlo algorithm. EBT3 radiochromic films were placed in the phantom to obtain measured dose distributions. The calculated doses were compared with the measured doses. The Monte Carlo algorithm is a more accurate dose calculation method than the Ray Tracing algorithm in nonhomogeneous structures.

  21. Influence of different dose calculation algorithms on the estimate of NTCP for lung complications.

    PubMed

    Hedin, Emma; Bäck, Anna

    2013-09-06

    Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose-volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient-specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm-specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction-based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman-Kutcher-Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm-specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. The new NTCP model parameters were tabulated as the algorithm was varied from PB
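
    For readers unfamiliar with the LKB model mentioned above, the sketch below reduces a differential dose-volume histogram to a generalized EUD and applies the probit response. The TD50, m and n values are hypothetical placeholders, not the fitted parameters discussed in the study.

        import math
        import numpy as np

        def lkb_ntcp(doses_gy, volumes, td50=30.0, m=0.35, n=1.0):
            """LKB NTCP from a differential DVH (volumes are fractional)."""
            v = np.asarray(volumes, float) / np.sum(volumes)
            geud = float(np.sum(v * np.asarray(doses_gy, float) ** (1.0 / n)) ** n)
            t = (geud - td50) / (m * td50)
            return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # probit integral

        # Toy 4-bin DVH: 40% of the volume at 5 Gy, ..., 10% at 45 Gy.
        print(f"NTCP = {lkb_ntcp([5, 15, 25, 45], [0.4, 0.3, 0.2, 0.1]):.3f}")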

  22. Pediatric chest HRCT using the iDose4 Hybrid Iterative Reconstruction Algorithm: Which iDose level to choose?

    NASA Astrophysics Data System (ADS)

    Smarda, M.; Alexopoulou, E.; Mazioti, A.; Kordolaimi, S.; Ploussi, A.; Priftis, K.; Efstathopoulos, E.

    2015-09-01

    The purpose of this study is to determine the appropriate iterative reconstruction (IR) algorithm level that combines image quality and diagnostic confidence for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT in our department's 64-detector row CT scanner using the iDose IR algorithm, with similar image settings (80 kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (1 to 7) as well as with the filtered back projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast and diagnostic acceptability using a 5-point scale (1 = excellent image, 5 = non-acceptable image). The presence of artifacts was also noted. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with the FBP algorithm. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but oversmoothing artifacts appearing at iDose levels 6 and 7 affected diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions.

  23. Comparison of dose calculation algorithms for colorectal cancer brachytherapy treatment with a shielded applicator

    SciTech Connect

    Yan Xiangsheng; Poon, Emily; Reniers, Brigitte; Vuong, Te; Verhaegen, Frank

    2008-11-15

    Colorectal cancer patients are treated at our hospital with 192Ir high dose rate (HDR) brachytherapy using an applicator that allows the introduction of a lead or tungsten shielding rod to reduce the dose to healthy tissue. The clinical dose planning calculations are, however, currently performed without taking the shielding into account. To study the dose distributions in shielded cases, three techniques were employed. The first technique was to adapt a shielding algorithm which is part of the Nucletron PLATO HDR treatment planning system. The isodose pattern exhibited unexpected features but was found to be a reasonable approximation. The second technique employed a ray tracing algorithm that assigns a constant dose ratio with/without shielding behind the shielding along a radial line originating from the source. The dose calculation results were similar to the results from the first technique but with improved accuracy. The third and most accurate technique used a dose-matrix-superposition algorithm, based on Monte Carlo calculations. The results from the latter technique showed quantitatively that the dose to healthy tissue is reduced significantly in the presence of shielding. However, it was also found that the dose to the tumor may be affected by the presence of shielding; for about a quarter of the patients treated, the volume covered by the 100% isodose lines was reduced by more than 5%, leading to potential tumor cold spots. Use of any of the three shielding algorithms results in improved dose estimates to healthy tissue and the tumor.

  24. SU-E-T-91: Accuracy of Dose Calculation Algorithms for Patients Undergoing Stereotactic Ablative Radiotherapy

    SciTech Connect

    Tajaldeen, A; Ramachandran, P; Geso, M

    2015-06-15

    Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent Stereotactic Ablative Body Radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume ("same dose" method) and (ii) the same monitor units in all algorithms ("same monitor units" method), were used to study the performance of seven different dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven dose calculation algorithms were the Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytic Algorithm (AAA), Acuros XB and pencil beam (PB) algorithms. Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity and dose fall-off index. In addition, the dose to critical structures such as the lungs, heart, oesophagus and spinal cord was also studied. Statistical analysis was performed using Prism software. Results: The mean ± SD conformity index for the Superposition, Fast Superposition, Clarkson and FFT Convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7 and 2.17±0.59, respectively, whereas for AAA, pencil beam and Acuros XB it was 1.4±0.27, 1.66±0.27 and 1.35±0.24, respectively. Conclusion: Our study showed significant variations among the seven different algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters. The Clarkson, FFT Convolution and pencil beam algorithms showed large differences compared to the Superposition algorithm. Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice of

  25. Comprehensive evaluation and clinical implementation of commercially available Monte Carlo dose calculation algorithm.

    PubMed

    Zhang, Aizhen; Wen, Ning; Nurushev, Teamour; Burmeister, Jay; Chetty, Indrin J

    2013-03-04

    A commercial electron Monte Carlo (eMC) dose calculation algorithm has become available in the Eclipse treatment planning system. The purpose of this work was to evaluate the eMC algorithm and investigate the clinical implementation of this system. The beam modeling of the eMC algorithm was performed for beam energies of 6, 9, 12, 16, and 20 MeV for a Varian Trilogy and all available applicator sizes in the Eclipse treatment planning system. The accuracy of the eMC algorithm was evaluated in a homogeneous water phantom, solid water phantoms containing lung and bone materials, and an anthropomorphic phantom. In addition, dose calculation accuracy was compared between the pencil beam (PB) and eMC algorithms in the same treatment planning system for heterogeneous phantoms. The overall agreement between eMC calculations and measurements was within 3%/2 mm, while the PB algorithm had large errors (up to 25%) in predicting dose distributions in the presence of inhomogeneities such as bone and lung. The clinical implementation of the eMC algorithm was investigated by performing treatment planning for 15 patients with lesions in the head and neck, breast, chest wall, and sternum. The dose distributions were calculated using the PB and eMC algorithms with no smoothing and with all three levels of 3D Gaussian smoothing for comparison. Based on a routine electron beam therapy prescription method, the number of eMC-calculated monitor units (MUs) was found to increase with increased 3D Gaussian smoothing levels. 3D Gaussian smoothing greatly improved the visual usability of dose distributions and produced better target coverage. Differences in calculated MUs and dose distributions between the eMC and PB algorithms could be significant when oblique beam incidence, surface irregularities, and heterogeneous tissues were present in the treatment plans. In our patient cases, monitor unit differences of up to 7% were observed between the PB and eMC algorithms. Monitor unit calculations were also performed

  26. Evaluation of six TPS algorithms in computing entrance and exit doses.

    PubMed

    Tan, Yun I; Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun; Elliott, Alex

    2014-05-08

    Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify the accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), Acuros XB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with an ionization chamber (IC) and a diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC-measured doses to within 2%-3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC-measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreement at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profile mismatches at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison.

  27. Comparison of selected dose calculation algorithms in radiotherapy treatment planning for tissues with inhomogeneities

    NASA Astrophysics Data System (ADS)

    Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.

    2016-03-01

    Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.

  28. Specification of absorbed dose to water using model-based dose calculation algorithms for treatment planning in brachytherapy

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Alm Carlsson, Gudrun

    2013-04-01

    Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have hitherto been reported as dose to water, and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from 125I, 169Yb and 192Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. A choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells, implying use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.

  29. Specification of absorbed dose to water using model-based dose calculation algorithms for treatment planning in brachytherapy.

    PubMed

    Tedgren, Åsa Carlsson; Carlsson, Gudrun Alm

    2013-04-21

    Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have hitherto been reported as dose to water, and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from 125I, 169Yb and 192Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. A choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells, implying use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.

  30. A single TLD dose algorithm to satisfy federal standards and typical field conditions

    SciTech Connect

    Stanford, N.; McCurdy, D.E.

    1990-06-01

    Modern whole-body dosimeters are often required to accurately measure the absorbed dose in a wide range of radiation fields. While programs are commonly developed around the fields tested as part of the National Voluntary Accreditation Program (NVLAP), the actual fields of application may be significantly different. Dose algorithms designed to meet the NVLAP standard, which emphasizes photons and high-energy beta radiation, may not be capable of the beta-energy discrimination necessary for accurate assessment of absorbed dose in the work environment. To address this problem, some processors use one algorithm for NVLAP testing and one or more different algorithms for the work environments. After several years of experience with a multiple algorithm approach, the Dosimetry Services Group of Yankee Atomic Electric Company (YAEC) developed a one-algorithm system for use with a four-element TLD badge using Li2B4O7 and CaSO4 phosphors. The design of the dosimeter allows the measurement of the effective energies of both photon and beta components of the radiation field, resulting in excellent mixed-field capability. The algorithm was successfully tested in all of the NVLAP photon and beta fields, as well as several non-NVLAP fields representative of the work environment. The work environment fields, including low- and medium-energy beta radiation and mixed fields of low-energy photons and beta particles, are often more demanding than the NVLAP fields. This paper discusses the development of the algorithm as well as some results of the system testing including: mixed-field irradiations, angular response, and a unique test to demonstrate the stability of the algorithm. An analysis of the uncertainty of the reported doses under various irradiation conditions is also presented.

  31. A single TLD dose algorithm to satisfy federal standards and typical field conditions.

    PubMed

    Stanford, N; McCurdy, D E

    1990-06-01

    Modern whole-body dosimeters are often required to accurately measure the absorbed dose in a wide range of radiation fields. While programs are commonly developed around the fields tested as part of the National Voluntary Accreditation Program (NVLAP), the actual fields of application may be significantly different. Dose algorithms designed to meet the NVLAP standard, which emphasizes photons and high-energy beta radiation, may not be capable of the beta-energy discrimination necessary for accurate assessment of absorbed dose in the work environment. To address this problem, some processors use one algorithm for NVLAP testing and one or more different algorithms for the work environments. After several years of experience with a multiple algorithm approach, the Dosimetry Services Group of Yankee Atomic Electric Company (YAEC) developed a one-algorithm system for use with a four-element TLD badge using Li2B4O7 and CaSO4 phosphors. The design of the dosimeter allows the measurement of the effective energies of both photon and beta components of the radiation field, resulting in excellent mixed-field capability. The algorithm was successfully tested in all of the NVLAP photon and beta fields, as well as several non-NVLAP fields representative of the work environment. The work environment fields, including low- and medium-energy beta radiation and mixed fields of low-energy photons and beta particles, are often more demanding than the NVLAP fields. This paper discusses the development of the algorithm as well as some results of the system testing including: mixed-field irradiations, angular response, and a unique test to demonstrate the stability of the algorithm. An analysis of the uncertainty of the reported doses under various irradiation conditions is also presented.

  32. Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams

    SciTech Connect

    Vandervoort, Eric J.; Cygler, Joanna E.; Tchistiakova, Ekaterina; La Russa, Daniel J.

    2014-02-15

    Purpose: In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Methods: Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measurements and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Results: Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators, and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 × 5 cm². Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (the minimum percentage of measurements passing a 2%/2 mm 2D gamma index criterion for any applicator or energy was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a 3D 3%/2 mm γ-criterion) provided that the steep dose gradient in the depth direction is considered. Conclusions: Clinically acceptable agreement (at the 2%/2 mm level) between measured and calculated data in water is obtained for this dose calculation algorithm. Radiochromic film is a useful tool to evaluate the accuracy of electron MC treatment planning systems in heterogeneous media.

  33. A grid algorithm for high throughput fitting of dose-response curve data.

    PubMed

    Wang, Yuhong; Jadhav, Ajit; Southall, Noel; Huang, Ruili; Nguyen, Dac-Trung

    2010-10-21

    We describe a novel algorithm, the Grid algorithm, and the corresponding computer program for high throughput fitting of dose-response curves that are described by the four-parameter symmetric logistic dose-response model. The Grid algorithm searches through all points in a grid of four dimensions (parameters) and finds the optimum one that corresponds to the best fit. Using simulated dose-response curves, we examined the Grid program's performance in reproducing the actual values that were used to generate the simulated data, and compared it with the DRC package for the language and environment R and the XLfit add-in for Microsoft Excel. The Grid program was robust and consistently recovered the actual values for both complete and partial curves, with or without noise. Both DRC and XLfit performed well on data without noise, but they were sensitive to noise and their performance degraded rapidly with increasing noise. The Grid program is automated and scalable to millions of dose-response curves, and it is able to process 100,000 dose-response curves from high throughput screening experiments per CPU hour. The Grid program has the potential to greatly increase the productivity of large-scale dose-response data analysis and early drug discovery processes, and it is also applicable to many other curve fitting problems in chemical, biological, and medical sciences.
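
    The idea of the Grid algorithm can be illustrated with a brute-force search over a coarse four-dimensional grid of the symmetric 4PL parameters. The grid ranges and resolution below are assumptions for illustration; the published program uses much denser grids.

        import itertools
        import numpy as np

        def fourpl(logc, bottom, top, logec50, hill):
            """Four-parameter symmetric logistic dose-response model."""
            return bottom + (top - bottom) / (1.0 + 10 ** (hill * (logec50 - logc)))

        def grid_fit(logc, resp):
            grids = (np.linspace(0, 20, 11),     # bottom (% activity)
                     np.linspace(80, 120, 11),   # top
                     np.linspace(-9, -5, 41),    # log EC50 (molar)
                     np.linspace(0.5, 3, 11))    # Hill slope
            best, best_sse = None, np.inf
            for params in itertools.product(*grids):   # exhaustive grid search
                sse = float(np.sum((resp - fourpl(logc, *params)) ** 2))
                if sse < best_sse:
                    best, best_sse = params, sse
            return best, best_sse

        logc = np.array([-9.0, -8.0, -7.0, -6.0, -5.0])
        resp = fourpl(logc, 2.0, 100.0, -7.0, 1.0) + np.array([1, -2, 0.5, 1, -1])
        print(grid_fit(logc, resp))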

  14. "Zeus" a new oral anticoagulant therapy dosing algorithm: a cohort study.

    PubMed

    Cafolla, A; Melizzi, R; Baldacci, E; Pignoloni, P; Dragoni, F; Campanelli, M; Caraccini, R; Foà, R

    2011-10-01

    The demand for oral anticoagulant therapy (OAT) has constantly increased during the last ten years, with an extended use of computer assistance. Many mathematical algorithms have been proposed to suggest doses and the time to the next visit for patients on OAT. We designed a new algorithm: "Zeus". A "before-after" study was planned to compare the efficacy and safety of OAT dosing by this algorithm with manual dosing decided by the same expert physicians according to the target International Normalized Ratio (INR). The study analysed data from 1876 patients managed with each of the two modalities for eight months, with an interval of two years between the periods. The aim was to verify the improvement in quality of therapy, measured as time spent in the INR target range, and the efficiency and safety of the Zeus algorithm. Time in therapeutic range (TTR) was significantly (p < 0.0001) higher during the algorithm dosing period than during the manual management period (62.3% vs 50.3%). The number of PT/INR values above 5 was significantly (p < 0.001) lower with algorithm-suggested prescriptions than with manual ones (254 vs 537 times). The anticoagulant drug amount prescribed according to the algorithm suggestions was significantly (p < 0.0001) lower than that of the manual method. The number of clinical events observed in patients during the algorithm management period was significantly (p < 0.05) lower than in those managed with manual dosing. This study confirms the clinical utility of computer-assisted OAT and shows the efficacy and safety of the Zeus algorithm.
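
    The primary endpoint here, time in therapeutic range (TTR), is conventionally computed by linear interpolation of the INR between visits (the Rosendaal method). The abstract does not state which TTR method was used, so the following Python sketch is illustrative only:

      def ttr_rosendaal(days, inrs, low=2.0, high=3.0):
          """Percent of time with interpolated INR in [low, high].

          days: visit days as sorted integers; inrs: INR at each visit.
          """
          in_range = total = 0.0
          for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
              span = d1 - d0
              total += span
              for step in range(span):                 # 1-day interpolation steps
                  inr = i0 + (i1 - i0) * step / span
                  if low <= inr <= high:
                      in_range += 1
          return 100.0 * in_range / total

      print("TTR = %.1f%%" % ttr_rosendaal([0, 14, 28, 42], [1.8, 2.4, 3.4, 2.6]))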

  15. An algorithm for intelligent sorting of CT-related dose parameters.

    PubMed

    Cook, Tessa S; Zimmerman, Stefan L; Steingall, Scott R; Boonn, William W; Kim, Woojin

    2012-02-01

    Imaging centers nationwide are seeking innovative means to record and monitor computed tomography (CT)-related radiation dose in light of multiple instances of patient overexposure to medical radiation. As a solution, we have developed RADIANCE, an automated pipeline for extraction, archival, and reporting of CT-related dose parameters. Estimation of whole-body effective dose from the CT dose length product (DLP), an indirect estimate of radiation dose, requires anatomy-specific conversion factors that cannot be applied to the total DLP, but instead necessitate individual anatomy-based DLPs. A challenge exists because the total DLP reported on a dose sheet often includes multiple separate examinations (e.g., a chest CT followed by an abdominopelvic CT). Furthermore, the individual reported series DLPs may not be clearly or consistently labeled. For example, "arterial" could refer to the arterial phase of a triple liver CT or the arterial phase of a CT angiogram. To address this problem, we have designed an intelligent algorithm to parse dose sheets for multi-series CT examinations and correctly separate the total DLP into its anatomic components. The algorithm uses information from the departmental PACS to determine how many distinct CT examinations were concurrently performed. Then, it matches the number of distinct accession numbers to the series that were acquired and anatomically matches individual series DLPs to their appropriate CT examinations. This algorithm allows for more accurate dose analytics, but there remain instances where automatic sorting is not feasible. To ultimately improve radiology patient care, we must standardize series and exam names to unequivocally sort exams by anatomy and correctly estimate whole-body effective dose.
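
    A short Python sketch of the conversion step this abstract motivates: whole-body effective dose must be estimated from each anatomy-specific DLP, never from the total DLP. The k-factors below are representative adult values from the literature (e.g., AAPM Report 96) and are shown for illustration only:

      # mSv per mGy*cm; representative adult values, assumed for illustration
      K_FACTOR = {
          "head": 0.0021,
          "neck": 0.0059,
          "chest": 0.014,
          "abdomen_pelvis": 0.015,
      }

      def effective_dose_msv(series_dlps):
          """series_dlps: list of (anatomy, DLP in mGy*cm) for one dose sheet."""
          return sum(K_FACTOR[anatomy] * dlp for anatomy, dlp in series_dlps)

      # A chest CT followed by an abdominopelvic CT on one dose sheet:
      print(effective_dose_msv([("chest", 400.0), ("abdomen_pelvis", 600.0)]))  # 14.6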

  16. SU-E-T-313: The Accuracy of the Acuros XB Advanced Dose Calculation Algorithm for IMRT Dose Distributions in Head and Neck

    SciTech Connect

    Araki, F; Onizuka, R; Ohno, T; Tomiyama, Y; Hioki, K

    2014-06-01

    Purpose: To investigate the accuracy of the Acuros XB version 11 (AXB11) advanced dose calculation algorithm by comparing it with Monte Carlo (MC) calculations. The comparisons were performed with dose distributions for a virtual inhomogeneity phantom and for intensity-modulated radiotherapy (IMRT) in head and neck. Methods: Recently, AXB, based on the Linear Boltzmann Transport Equation, has been installed in the Eclipse treatment planning system (Varian Medical Oncology System, USA). The dose calculation accuracy of AXB11 was tested against EGSnrc-MC calculations. In addition, AXB version 10 (AXB10) and the Analytical Anisotropic Algorithm (AAA) were also used. First, the accuracy of the inhomogeneity correction for the AXB and AAA algorithms was evaluated by comparison with MC-calculated dose distributions for a virtual inhomogeneity phantom that includes water, bone, air, adipose, muscle, and aluminum. Next, the IMRT dose distributions for head and neck calculated with the AXB and AAA algorithms were compared with MC by means of dose volume histograms and three-dimensional gamma analysis for each structure (CTV, OAR, etc.). Results: The dose distributions for the virtual inhomogeneity phantom calculated with AXB were in good agreement with those of MC, except for the dose in the air region. The dose in the air region decreased in the order of the MC, AXB11, and AXB10 algorithms (0.700 MeV for MC, 0.711 MeV for AXB11, and 1.011 MeV for AXB10). Since the AAA algorithm is based on a water dose kernel, the doses in the air, bone, and aluminum regions became considerably higher than those of AXB and MC. The pass rates of the gamma analysis for IMRT dose distributions in head and neck were closest to those of MC for AXB11, followed by AXB10 and AAA. Conclusion: The dose calculation accuracy of AXB11 was almost equivalent to that of the MC dose calculation.

  17. A study on the safety, efficacy, and efficiency of sulodexide compared with acenocoumarol in secondary prophylaxis in patients with deep venous thrombosis.

    PubMed

    Cirujeda, J Lasierra; Granado, P Coronel

    2006-01-01

    This study was carried out to assess the safety and efficacy of a fixed dosage of sulodexide compared with INR-adjusted dosages of acenocoumarol as secondary prophylaxis in patients with deep vein thrombosis (DVT) of the lower limbs. An economic evaluation based on the criteria of use in normal clinical practice was also performed. One hundred and fifty patients of both sexes were included, all over 18 years of age and diagnosed with proximal DVT of the lower limbs by color echo-Doppler, and with a clinical evolution of less than 1 month. The patients were initially treated with low-molecular-weight heparin (LMWH) and urokinase in accordance with the established protocol. They were then randomized to continue treatment with acenocoumarol, with INR adjustments every 30 days, or with sulodexide. Treatment was extended for 3 months with monthly follow-up visits and a final visit at 3 months posttreatment. No differences between the groups were detected concerning demographic or baseline characteristics, clinical evolution, or adverse reactions. In the group treated with sulodexide, no major or minor hemorrhagic complications were detected. In the acenocoumarol group, by contrast, 1 major hemorrhage and 9 minor hemorrhages occurred (13.3%), a statistically significant difference relative to the sulodexide group (p = 0.014; 95% CI 4.7% to 19.4%). Regarding the economic impact, treatment costs with sulodexide are much lower than those with acenocoumarol, a finding confirmed by the sensitivity analyses performed. The results demonstrate the efficacy, safety, and efficiency of sulodexide as secondary prophylaxis in thromboembolic disease, avoiding hemorrhagic risks and the monitoring of patients, and providing significant savings to the health system.

  18. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities.

    PubMed

    Carrasco, P; Jornet, N; Duch, M A; Panettieri, V; Weber, L; Eudaldo, T; Ginjaume, M; Ribas, M

    2007-08-01

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10 x 10, 5 x 5, and 2 x 2 cm2) were studied in two phantom configurations and a bone-equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2 x 2 cm2 field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV, where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity by 3%-5%. The rebuildup at the bone-water interface and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values.

  19. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning

    SciTech Connect

    Wu, Vincent W.C.; Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y.

    2013-07-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) in the Eclipse treatment planning system and multigrid superposition (MGS) in the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) the soft tissue-bone boundary (Soft/Bone), (5) the soft tissue-air boundary (Soft/Air), and (6) the bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to tissue density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6% ± 2.1%) were significantly lower than those of AAA (3.7% ± 2.5%) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both the AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
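
    The comparison metric is simple to restate concretely. A Python sketch of the MAPE calculation as described, with made-up reference-point doses:

      import numpy as np

      def mape(algorithm_doses, mc_doses):
          """Mean absolute percentage error relative to the MC gold standard."""
          a, m = np.asarray(algorithm_doses), np.asarray(mc_doses)
          return 100.0 * np.mean(np.abs(a - m) / m)

      print("MAPE = %.1f%%" % mape([2.02, 1.95, 2.10], [2.00, 2.00, 2.00]))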

  1. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    NASA Astrophysics Data System (ADS)

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential for reducing margins with current analytical dose calculation methods. For this purpose we investigated the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung, and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by the pencil-beam and MC algorithms to obtain the average range differences and root-mean-square deviation for each field for the distal positions of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal positions of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimates. For liver, prostate, and whole-brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole-brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung, and head and neck patients, at least if used generically. If no case-specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be required.
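
    The range metrics used in this study (R90, R50, and the R80-R20 distal falloff) can be extracted from any sampled depth-dose curve by interpolating on the distal side of the maximum. An illustrative Python sketch with a toy Bragg-like curve:

      import numpy as np

      def distal_range(depth_mm, dose, level):
          """Depth beyond the maximum where dose first falls to level*max."""
          d = dose / dose.max()
          i_peak = int(np.argmax(d))
          dd, zz = d[i_peak:], depth_mm[i_peak:]
          j = int(np.argmax(dd < level))               # first sample below level
          return zz[j - 1] + (level - dd[j - 1]) * (zz[j] - zz[j - 1]) / (dd[j] - dd[j - 1])

      z = np.linspace(0.0, 200.0, 401)                 # depth in mm
      dose = np.exp(-((z - 150.0) / 8.0) ** 2) + 0.1 * z / 200.0  # toy curve
      r90, r80, r50, r20 = (distal_range(z, dose, v) for v in (0.9, 0.8, 0.5, 0.2))
      print("R90=%.1f mm, R50=%.1f mm, R80-R20 falloff=%.1f mm" % (r90, r50, r20 - r80))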

  2. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy

    PubMed Central

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J.

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in the clinic. The purpose of this study was to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor-driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms within 1.8% at the center of the tumor volume and within 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well with the TLD measurements, with differences <2.5% at the four measured locations. When the VelocityAI DVF was used, the difference increased up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements, which is slightly over the 5% threshold for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting the validation of dose accumulation algorithms for adaptive radiotherapy.

  3. The photon dose calculation algorithm used in breast radiotherapy has significant impact on the parameters of radiobiological models.

    PubMed

    Petillion, Saskia; Swinnen, Ans; Defraene, Gilles; Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank

    2014-07-08

    Comparisons of the pencil beam dose calculation algorithm with modified Batho heterogeneity correction (PBC-MB) against the analytical anisotropic algorithm (AAA), and mutual comparisons of advanced dose calculation algorithms used in breast radiotherapy, have focused on the differences between the physical dose distributions. Studies on the radiobiological impact of the algorithm (both on tumor control and on the prediction of moderate breast fibrosis) are lacking. We, therefore, investigated the radiobiological impact of the dose calculation algorithm in whole breast radiotherapy. The clinical dose distributions of 30 breast cancer patients, calculated with PBC-MB, were recalculated with fixed monitor units using more advanced algorithms: AAA and Acuros XB. For the latter, both dose reporting modes were used (i.e., dose-to-medium and dose-to-water). Next, the tumor control probability (TCP) and the normal tissue complication probability (NTCP) of each dose distribution were calculated with the Poisson model and with the relative seriality model, respectively. The endpoint for the NTCP calculation was moderate breast fibrosis five years post treatment. The differences were checked for significance with the paired t-test. The more advanced algorithms predicted a significantly lower TCP and NTCP of moderate breast fibrosis than found during the corresponding clinical follow-up study based on PBC calculations. The differences varied between 1% and 2.1% for the TCP and between 2.9% and 5.5% for the NTCP of moderate breast fibrosis. The significant differences were eliminated by determination of algorithm-specific model parameters using least square fitting. Application of the new parameters to a second group of 30 breast cancer patients proved their appropriateness. In this study, we assessed the impact of the dose calculation algorithms used in whole breast radiotherapy on the parameters of the radiobiological models. The radiobiological impact was eliminated by the determination of algorithm-specific model parameters.
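
    A minimal Python sketch of the Poisson TCP model named in the abstract: control requires zero surviving clonogens, so TCP = exp(-Ns), with the surviving number Ns obtained from linear-quadratic cell kill per voxel. The radiobiological parameters are illustrative assumptions, not the paper's fitted values:

      import numpy as np

      def poisson_tcp(voxel_doses_gy, dose_per_fraction_gy=2.0,
                      alpha=0.3, beta=0.03, clonogens_per_voxel=1e4):
          """TCP = exp(-Ns) with LQ survival exp(-alpha*D - beta*d*D) per voxel."""
          D = np.asarray(voxel_doses_gy)
          surviving_fraction = np.exp(-alpha * D - beta * dose_per_fraction_gy * D)
          return float(np.exp(-clonogens_per_voxel * np.sum(surviving_fraction)))

      # Uniform 50 Gy to 1000 tumor voxels vs. a 5% cold spot in 50 of them:
      print(poisson_tcp(np.full(1000, 50.0)))
      print(poisson_tcp(np.r_[np.full(950, 50.0), np.full(50, 47.5)]))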

  4. Advantages of multiple algorithm support in treatment planning system for external beam dose calculations.

    PubMed

    2005-01-01

    The complexity of interactions and the nature of the approximations made in the formulation of the algorithm require that the user be familiar with the limitations of various models. As computer power keeps growing, calculation algorithms are tending more towards physically based models. The nature and quantity of the data required vary according to the model, which may be either measurement-based or physics-based. The multiple dose calculation algorithm support found in the XiO treatment planning system can be used to advantage when a choice must be made between speed and accuracy. Thus XiO allows end users to generate plans accurately and quickly, optimizing the delivery of radiation therapy.

  5. Monte Carlo photon beam modeling and commissioning for radiotherapy dose calculation algorithm.

    PubMed

    Toutaoui, A; Ait chikh, S; Khelassi-Toutaoui, N; Hattali, B

    2014-11-01

    The aim of the present work was a Monte Carlo verification of the Multi-grid superposition (MGS) dose calculation algorithm implemented in the CMS XiO (Elekta) treatment planning system and used to calculate the dose distribution produced by photon beams generated by the linear accelerator (linac) Siemens Primus. The BEAMnrc/DOSXYZnrc (EGSnrc package) Monte Carlo model of the linac head was used as a benchmark. In the first part of the work, the BEAMnrc was used for the commissioning of a 6 MV photon beam and to optimize the linac description to fit the experimental data. In the second part, the MGS dose distributions were compared with DOSXYZnrc using relative dose error comparison and γ-index analysis (2%/2 mm, 3%/3 mm), in different dosimetric test cases. Results show good agreement between simulated and calculated dose in homogeneous media for square and rectangular symmetric fields. The γ-index analysis confirmed that for most cases the MGS model and EGSnrc doses are within 3% or 3 mm.

  6. MO-E-17A-05: Individualized Patient Dosimetry in CT Using the Patient Dose (PATDOSE) Algorithm

    SciTech Connect

    Hernandez, A; Boone, J

    2014-06-15

    Purpose: Radiation dose to the patient undergoing a CT examination has been the focus of many recent studies. While CTDIvol and SSDE-based methods are important tools for patient dose management, the CT image data provide important information with respect to CT dose and its distribution. Coupled with the known geometry and output factors (kV, mAs, pitch, etc.) of the CT scanner, the CT dataset can be used directly for computing absorbed dose. Methods: The HU numbers in a patient's CT data set can be converted to linear attenuation coefficients (LACs) with some assumptions. With this method (PAT-DOSE), which is not Monte Carlo based, the primary and scatter doses are computed separately. The primary dose is computed directly from the geometry of the scanner, the x-ray spectrum, and the known patient LACs. Once the primary dose has been computed for all voxels in the patient, the scatter dose algorithm redistributes a fraction of the absorbed primary dose (based on the HU number of each source voxel), accounting for tissue attenuation and absorption as well as solid-angle geometry. The scatter dose algorithm can be run N times to include Nth-scatter redistribution. PAT-DOSE was deployed using simple PMMA phantoms to validate its performance against Monte Carlo-derived dose distributions. Results: Comparison between PAT-DOSE and MCNPX primary dose distributions showed excellent agreement for several scan lengths. The first-scatter dose distributions showed relatively higher-amplitude, long-range scatter tails for the PAT-DOSE algorithm than for the MCNPX simulations. Conclusion: The PAT-DOSE algorithm provides a fast, deterministic assessment of the 3D dose distribution in CT, making use of the scanner geometry and the patient image data set. The preliminary implementation of the algorithm produces accurate primary dose distributions; achieving scatter distribution agreement is more challenging. Addressing the polyenergetic x-ray spectrum and the spatially dependent scatter redistribution remains the focus of ongoing work.
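
    A short Python sketch of the first two ingredients the abstract describes: converting HU numbers to linear attenuation coefficients (directly from the definition of the HU scale, assuming a single effective energy) and attenuating the primary beam along a ray. The water attenuation value is an assumed constant; the abstract itself flags the polyenergetic spectrum as the harder, open part of the problem:

      import numpy as np

      MU_WATER_PER_CM = 0.20   # assumed mu of water at an effective CT energy

      def hu_to_mu(hu):
          """Invert HU = 1000*(mu - mu_w)/mu_w; clip so air cannot go negative."""
          return np.clip(MU_WATER_PER_CM * (1.0 + np.asarray(hu) / 1000.0), 0.0, None)

      def primary_transmission(hu_along_ray, voxel_cm=0.1):
          """Fraction of primary photons surviving to the end of the ray."""
          return float(np.exp(-np.sum(hu_to_mu(hu_along_ray)) * voxel_cm))

      ray = [-1000] * 20 + [0] * 50 + [700] * 10       # air, water, bone-like HU
      print("transmission: %.3f" % primary_transmission(ray))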

  7. Evaluation of a commercial MRI Linac based Monte Carlo dose calculation algorithm with GEANT 4

    SciTech Connect

    Ahmad, Syed Bilal; Sarfehnia, Arman; Kim, Anthony; Sahgal, Arjun; Keller, Brian; Paudel, Moti Raj; Hissoiny, Sami

    2016-02-15

    Purpose: This paper provides a comparison between a fast, commercial, in-patient Monte Carlo dose calculation algorithm (GPUMCD) and GEANT4. It also evaluates the dosimetric impact of the application of an external 1.5 T magnetic field. Methods: A stand-alone version of the Elekta™ GPUMCD algorithm, to be used within the Monaco treatment planning system to model dose for the Elekta™ magnetic resonance imaging (MRI) Linac, was compared against GEANT4 (v10.1). This was done in the presence or absence of a 1.5 T static magnetic field directed orthogonally to the radiation beam axis. Phantoms with material compositions of water, ICRU lung, ICRU compact-bone, and titanium were used for this purpose. Beams with 2 MeV monoenergetic photons as well as a 7 MV histogrammed spectrum representing the MRI Linac spectrum were emitted from a point source using a nominal source-to-surface distance of 142.5 cm. Field sizes ranged from 1.5 × 1.5 to 10 × 10 cm{sup 2}. Dose scoring was performed using a 3D grid comprising 1 mm{sup 3} voxels. The production thresholds were equivalent for both codes. Results were analyzed based upon a voxel-by-voxel dose difference between the two codes and also using a volumetric gamma analysis. Results: Comparisons were drawn from central axis depth doses, cross-beam profiles, and isodose contours. Both in the presence and absence of a 1.5 T static magnetic field, the relative differences in doses scored along the beam central axis were less than 1% for the homogeneous water phantom, and all results matched within a maximum of ±2% for heterogeneous phantoms. Volumetric gamma analysis indicated that more than 99% of the examined volume passed gamma criteria of 2%/2 mm (dose difference and distance to agreement, respectively). These criteria were chosen because the minimum primary statistical uncertainty in dose scoring voxels was 0.5%. The presence of the magnetic field affects the dose at the interfaces, depending upon the density of the material.

  8. SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation

    SciTech Connect

    Yao, W; Farr, J

    2015-06-15

    Purpose: To develop a random walk model algorithm for calculating proton dose with balanced computation burden and accuracy. Methods: The random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scatter (MCS) is convenient, but in RW the use of a Gaussian angular distribution requires extremely large computation and memory. Thus, our RW model adopts a spatial distribution derived from the angular one to accelerate the computation and to decrease the memory usage. From the physics and from comparison with MC simulations, we have determined and analytically expressed the critical variables affecting the dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, and the energy spectrum after energy absorption, which have been extensively discussed in the literature, the following variables were found to be critical in our RW model: (1) the inverse square law, which can significantly reduce the computation burden and memory; (2) the non-Gaussian spatial distribution after MCS; and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as the reference, for a water phantom irradiated by monoenergetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded with a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined the key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves the dose accuracy in heterogeneous regions, and it is about 10 times faster than MC simulations.

  9. Dosimetric validation of the Acuros XB Advanced Dose Calculation algorithm: fundamental characterization in water

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca

    2011-05-01

    This corrigendum intends to clarify some important points that were not clearly or properly addressed in the original paper, and for which the authors apologize. The original description of the first Acuros algorithm is from the developers, published in Physics in Medicine and Biology by Vassiliev et al (2010) in the paper entitled 'Validation of a new grid-based Boltzmann equation solver for dose calculation in radiotherapy with photon beams'. The main equations describing the algorithm reported in our paper, implemented as the 'Acuros XB Advanced Dose Calculation Algorithm' in the Varian Eclipse treatment planning system, were originally described (for the original Acuros algorithm) in the above-mentioned paper by Vassiliev et al. The intention of our description in our paper was to give readers an overview of the algorithm, not pretending to have authorship of the algorithm itself (used as implemented in the planning system). Unfortunately our paper was not clear, particularly in not allocating full credit to the work published by Vassiliev et al on the original Acuros algorithm. Moreover, it is important to clarify that we have not adapted any existing algorithm, but have used the Acuros XB implementation in the Eclipse planning system from Varian. In particular, the original text of our paper should have been as follows: on page 1880 the sentence 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008, 2010). Acuros XB builds upon many of the methods in Attila, but represents a ground-up rewrite of the solver where the methods were adapted especially for external photon beam dose calculations' should be corrected to 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008). A new algorithm called Acuros, developed by the Transpire Inc. group, was applied to external photon beam dose calculations (Vassiliev et al 2010)'.

  10. Development of a pharmacogenetic-guided warfarin dosing algorithm for Puerto Rican patients

    PubMed Central

    Ramos, Alga S; Seip, Richard L; Rivera-Miranda, Giselle; Felici-Giovanini, Marcos E; Garcia-Berdecia, Rafael; Alejandro-Cowan, Yirelia; Kocherla, Mohan; Cruz, Iadelisse; Feliu, Juan F; Cadilla, Carmen L; Renta, Jessica Y; Gorowski, Krystyna; Vergara, Cunegundo; Ruaño, Gualberto; Duconge, Jorge

    2012-01-01

    Aim This study was aimed at developing a pharmacogenetic-driven warfarin-dosing algorithm in 163 admixed Puerto Rican patients on stable warfarin therapy. Patients & methods A multiple linear-regression analysis was performed using log-transformed effective warfarin dose as the dependent variable, and combining CYP2C9 and VKORC1 genotyping with other relevant nongenetic clinical and demographic factors as independent predictors. Results The model explained more than two-thirds of the observed variance in the warfarin dose among Puerto Ricans, and also produced significantly better ‘ideal dose’ estimates than two pharmacogenetic models and clinical algorithms published previously, with the greatest benefit seen in patients ultimately requiring <7 mg/day. We also assessed the clinical validity of the model using an independent validation cohort of 55 Puerto Rican patients from Hartford, CT, USA (R2 = 51%). Conclusion Our findings provide the basis for planning prospective pharmacogenetic studies to demonstrate the clinical utility of genotyping warfarin-treated Puerto Rican patients. PMID:23215886
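
    A hedged Python sketch of the modeling approach this abstract describes: ordinary least squares on the log-transformed stable dose with genotype and clinical covariates as predictors. The predictors and simulated data below are placeholders, not the published Puerto Rican model:

      import numpy as np

      def fit_log_dose_model(X, weekly_dose_mg):
          """X: n x p predictor matrix (age, CYP2C9/VKORC1 indicators, ...)."""
          y = np.log(weekly_dose_mg)
          Xd = np.column_stack([np.ones(len(y)), X])   # add intercept column
          beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
          return beta

      def predict_dose_mg(beta, x_new):
          return float(np.exp(beta[0] + np.dot(beta[1:], x_new)))

      rng = np.random.default_rng(0)
      X = rng.normal(size=(163, 3))                    # placeholder predictors
      dose = np.exp(3.0 - 0.4 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.1, 163))
      beta = fit_log_dose_model(X, dose)
      print(predict_dose_mg(beta, [1.0, 0.0, 0.0]))    # approx exp(3 - 0.4)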

  11. Adapted Prescription Dose for Monte Carlo Algorithm in Lung SBRT: Clinical Outcome on 205 Patients

    PubMed Central

    Bibault, Jean-Emmanuel; Mirabel, Xavier; Lacornerie, Thomas; Tresch, Emmanuelle; Reynaert, Nick; Lartigau, Eric

    2015-01-01

    Purpose SBRT is the standard of care for inoperable patients with early-stage lung cancer without lymph node involvement. Excellent local control rates have been reported in a large number of series. However, prescription doses and calculation algorithms vary to a great extent between studies, even if most teams prescribe to the D95 of the PTV. Type A algorithms are known to produce dosimetric discrepancies in heterogeneous tissues such as lung. This study was performed to present a Monte Carlo (MC) prescription dose for NSCLC adapted to lesion size and location, and to compare the clinical outcomes of two cohorts of patients treated with a standard prescription dose calculated by a type A algorithm or with the proposed MC protocol. Patients and Methods Patients were treated from January 2011 to April 2013 with a type B algorithm (MC) prescription of 54 Gy in three fractions for peripheral lesions with a diameter under 30 mm, 60 Gy in three fractions for lesions with a diameter over 30 mm, and 55 Gy in five fractions for central lesions. Clinical outcome was compared to a series of 121 patients treated with a type A algorithm (TA) with three fractions of 20 Gy for peripheral lesions and 60 Gy in five fractions for central lesions prescribed to the PTV D95 until January 2011. All treatment plans were recalculated with both algorithms for this study. Spearman's rank correlation coefficient was calculated for GTV and PTV. Local control, overall survival, and toxicity were compared between the two groups. Results 205 patients with 214 lesions were included in the study. Among these, 93 lesions were treated with MC and 121 were treated with TA. Overall survival rates were 86% and 94% at one and two years, respectively. Local control rates were 79% and 93% at one and two years, respectively. There was no significant difference between the two groups for overall survival (p = 0.785) or local control (p = 0.934). Fifty-six patients (27%) developed grade I lung fibrosis without clinical consequences.

  12. Development of a dose algorithm for the modified Panasonic UD-802 personal dosimeter used at Three Mile Island

    SciTech Connect

    Miklos, J. A.; Plato, P.

    1988-01-01

    During the fall of 1981, the personnel dosimetry group at GPU Nuclear Corporation at Three Mile Island (TMI) requested assistance from The University of Michigan (UM) in developing a dose algorithm for use at TMI-2. The dose algorithm had to satisfy the specific needs of TMI-2, particularly the need to distinguish beta-particle emitters of different energies, as well as having the capability of satisfying the requirements of the American National Standards Institute (ANSI) N13.11-1983 standard. A standard Panasonic UD-802 dosimeter was modified by having the plastic filter over element 2 removed. The dosimeter and hanger consist of elements with a 14 mg/cm{sup 2} density thickness. The hanger on this dosimeter had a double open window to facilitate monitoring for low-energy beta particles. The dose algorithm was written to satisfy the requirements of the ANSI N13.11-1983 standard, to include {sup 204}Tl and mixtures of {sup 204}Tl with {sup 90}Sr/{sup 90}Y and {sup 137}Cs, and to include 81- and 200-keV average energy x-ray spectra. Stress tests were conducted to observe the algorithm's performance with respect to low doses, temperature, humidity, and the residual response following high-dose irradiations. The ability of the algorithm to determine dose from the beta particles of {sup 147}Pm was also investigated.

  13. Toward adaptive radiotherapy for head and neck patients: Uncertainties in dose warping due to the choice of deformable registration algorithm

    SciTech Connect

    Veiga, Catarina; Royle, Gary; Lourenço, Ana Mónica; Mouinuddin, Syed; Herk, Marcel van; Modat, Marc; Ourselin, Sébastien; McClelland, Jamie R.

    2015-02-15

    Purpose: The aims of this work were to evaluate the performance of several deformable image registration (DIR) algorithms implemented in our in-house software (NiftyReg) and the uncertainties inherent in using different algorithms for dose warping. Methods: The authors describe a DIR-based adaptive radiotherapy workflow, using CT and cone-beam CT (CBCT) imaging. The transformations that mapped the anatomy between the two time points were obtained using four different DIR approaches available in NiftyReg. These included a standard unidirectional algorithm and more sophisticated bidirectional ones that encourage or ensure inverse consistency. The forward (CT-to-CBCT) deformation vector fields (DVFs) were used to propagate the CT Hounsfield units and structures to the daily geometry for “dose of the day” calculations, while the backward (CBCT-to-CT) DVFs were used to remap the dose of the day onto the planning CT (pCT). Data from five head and neck patients were used to evaluate the performance of each implementation based on geometrical matching, physical properties of the DVFs, and similarity between warped dose distributions. Geometrical matching was verified in terms of the Dice similarity coefficient (DSC), distance transform, false positives, and false negatives. The physical properties of the DVFs were assessed by calculating the harmonic energy, determinant of the Jacobian, and inverse consistency error of the transformations. Dose distributions were displayed on the pCT dose space and compared using dose difference (DD), distance to dose difference, and dose volume histograms. Results: All the DIR algorithms gave similar results in terms of geometrical matching, with an average DSC of 0.85 ± 0.08, but the underlying properties of the DVFs varied in terms of smoothness and inverse consistency. When comparing the doses warped by different algorithms, we found a root mean square DD of 1.9% ± 0.8% of the prescribed dose (pD) and that an average of 9% ± 4% of
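
    Two of the evaluation quantities named here are easy to state concretely: the Dice similarity coefficient for contour overlap, and the Jacobian determinant of a DVF, whose values at or below zero flag physically implausible folding. An illustrative Python sketch:

      import numpy as np

      def dice(a, b):
          """DSC = 2|A intersect B| / (|A| + |B|) for boolean masks a, b."""
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def jacobian_determinant(dvf):
          """dvf: (nz, ny, nx, 3) displacement in voxels; det(I + grad u)."""
          grads = [[np.gradient(dvf[..., i], axis=j) for j in range(3)]
                   for i in range(3)]
          J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)
          return np.linalg.det(J + np.eye(3))

      a = np.zeros((40, 40), bool); a[10:30, 10:30] = True
      b = np.zeros((40, 40), bool); b[12:32, 12:32] = True
      print("DSC = %.2f" % dice(a, b))                 # 0.81 for this overlap
      print(jacobian_determinant(np.zeros((8, 8, 8, 3))).min())  # 1.0: no folding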

  14. Experimental evaluation of a GPU-based Monte Carlo dose calculation algorithm in the Monaco treatment planning system.

    PubMed

    Paudel, Moti R; Kim, Anthony; Sarfehnia, Arman; Ahmad, Sayed B; Beachey, David J; Sahgal, Arjun; Keller, Brian M

    2016-11-01

    A new GPU-based Monte Carlo dose calculation algorithm (GPUMCD), developed by the vendor Elekta for the Monaco treatment planning system (TPS), is capable of modeling dose for both a standard linear accelerator and an Elekta MRI linear accelerator. We have experimentally evaluated this algorithm for a standard Elekta Agility linear accelerator. A beam model was developed in the Monaco TPS (research version 5.09.06) using the commissioned beam data for a 6 MV Agility linac. A heterogeneous phantom representing several scenarios - tumor-in-lung, lung, and bone-in-tissue - was designed and built. Dose calculations in Monaco were done using both the current clinical Monte Carlo algorithm, XVMC, and the new GPUMCD algorithm. Dose calculations in a Pinnacle TPS were also produced using the collapsed cone convolution (CCC) algorithm with heterogeneity correction. Calculations were compared with the measured doses using an ionization chamber (A1SL) and Gafchromic EBT3 films for 2×2 cm2, 5×5 cm2, and 10×10 cm2 field sizes. The percentage depth doses (PDDs) calculated by XVMC and GPUMCD in a homogeneous solid water phantom were within 2%/2 mm of film measurements and within 1% of ion chamber measurements. For the tumor-in-lung phantom, the calculated doses were within 2.5%/2.5 mm of film measurements for GPUMCD. For the lung phantom, doses calculated by all of the algorithms were within 3%/3 mm of film measurements, except for the 2×2 cm2 field size, where the CCC algorithm underestimated the depth dose by ∼5% over a larger extent of the lung region. For the bone phantom, all of the algorithms were equivalent and calculated dose to within 2%/2 mm of film measurements, except at the interfaces. Both GPUMCD and XVMC showed interface effects, which were more pronounced for GPUMCD and were comparable to film measurements, whereas the CCC algorithm captured these effects poorly. PACS number(s): 87.53.Bn, 87.55.dh, 87.55.km.

  15. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    SciTech Connect

    Brady, S. L.; Yee, B. S.; Kaufman, R. A.

    2012-09-15

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects on image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using 1-, 5-, and 10-yr-old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence with the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more

  16. Six iterative reconstruction algorithms in brain CT: a phantom study on image quality at different radiation dose levels

    PubMed Central

    Olsson, M-L; Siemund, R; Stålhammar, F; Björkman-Burtscher, I M; Söderberg, M

    2013-01-01

    Objective: To evaluate the image quality produced by six different iterative reconstruction (IR) algorithms in four CT systems in the setting of brain CT, using different radiation dose levels and iterative image optimisation levels. Methods: An image quality phantom, supplied with a bone mimicking annulus, was examined using four CT systems from different vendors and four radiation dose levels. Acquisitions were reconstructed using conventional filtered back-projection (FBP), three levels of statistical IR and, when available, a model-based IR algorithm. The evaluated image quality parameters were CT numbers, uniformity, noise, noise-power spectra, low-contrast resolution and spatial resolution. Results: Compared with FBP, noise reduction was achieved by all six IR algorithms at all radiation dose levels, with further improvement seen at higher IR levels. Noise-power spectra revealed changes in noise distribution relative to the FBP for most statistical IR algorithms, especially the two model-based IR algorithms. Compared with FBP, variable degrees of improvements were seen in both objective and subjective low-contrast resolutions for all IR algorithms. Spatial resolution was improved with both model-based IR algorithms and one of the statistical IR algorithms. Conclusion: The four statistical IR algorithms evaluated in the study all improved the general image quality compared with FBP, with improvement seen for most or all evaluated quality criteria. Further improvement was achieved with one of the model-based IR algorithms. Advances in knowledge: The six evaluated IR algorithms all improve the image quality in brain CT but show different strengths and weaknesses. PMID:24049128

  17. Feasibility study of a simple approximation algorithm for in-vivo dose reconstruction by using the transit dose measured using an EPID

    NASA Astrophysics Data System (ADS)

    Hwang, Ui-Jung; Song, Mi Hee; Baek, Tae Seong; Chung, Eun Ji; Yoon, Myonggeun

    2015-02-01

    The purpose of this study is to verify the accuracy of the dose delivered to the patient during intensity-modulated radiation therapy (IMRT) by using in-vivo dosimetry, in order to avoid accidental exposure of healthy tissues and organs close to tumors. The in-vivo dose was reconstructed by back projection of the transit dose with a simple approximation that considered only the percent depth dose and the inverse square law. While the gamma index for comparison of the dose distributions between the calculated dose map and the film measurement was less than one for 96.3% of all pixels with the homogeneous phantom, the passing rate was reduced to 92.8% with the inhomogeneous phantom, suggesting that the reduction was due to the inaccuracy of the reconstruction algorithm in handling inhomogeneity. The proposed method of calculating the dose inside a phantom was of comparable or better accuracy than the treatment planning system, suggesting that it can be used to verify the accuracy of the dose delivered to the patient during treatment.
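
    A heavily hedged Python sketch of a back-projection of this kind: the transit signal at the EPID plane is scaled to a point at depth d inside the patient using only the inverse square law and a percent-depth-dose ratio. The geometry, PDD table, and exact form of the scaling are illustrative assumptions, not the authors' implementation:

      PDD = {5: 0.87, 10: 0.67, 15: 0.51, 20: 0.39}    # assumed 6 MV PDD table

      def in_vivo_dose(transit_dose, depth_cm, exit_depth_cm,
                       sad_cm=100.0, epid_dist_cm=150.0, thickness_cm=20.0):
          """Dose at depth_cm reconstructed from the EPID-plane transit dose."""
          ssd = sad_cm - thickness_cm / 2.0            # source-to-skin distance
          r_point = ssd + depth_cm                     # source-to-point distance
          inv_sq = (epid_dist_cm / r_point) ** 2       # EPID plane -> point
          return transit_dose * inv_sq * PDD[depth_cm] / PDD[exit_depth_cm]

      print("%.2f (relative dose)" % in_vivo_dose(0.5, depth_cm=10, exit_depth_cm=20))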

  18. SU-E-T-416: Experimental Evaluation of a Commercial GPU-Based Monte Carlo Dose Calculation Algorithm

    SciTech Connect

    Paudel, M R; Beachey, D J; Sarfehnia, A; Sahgal, A; Keller, B; Kim, A; Ahmad, S

    2015-06-15

    Purpose: A new commercial GPU-based Monte Carlo dose calculation algorithm (GPUMCD) developed by the vendor Elekta™ to be used in the Monaco Treatment Planning System (TPS) is capable of modeling dose for both a standard linear accelerator and for an Elekta MRI-Linear accelerator (modeling magnetic field effects). We are evaluating this algorithm in two parts: commissioning the algorithm for an Elekta Agility linear accelerator (the focus of this work) and evaluating the algorithm's ability to model magnetic field effects for an MRI-linear accelerator. Methods: A beam model was developed in the Monaco TPS (v.5.09.06) using the commissioned beam data for a 6 MV Agility linac. A heterogeneous phantom representing tumor-in-lung, lung, bone-in-tissue, and prosthetic material was designed and built. Dose calculations in Monaco were done using the current clinical algorithm (XVMC) and the new GPUMCD algorithm (1 mm{sup 3} voxel size, 0.5% statistical uncertainty), and in the Pinnacle TPS using the collapsed cone convolution (CCC) algorithm. These were compared with the measured doses using an ionization chamber (A1SL) and Gafchromic EBT3 films for 2×2 cm{sup 2}, 5×5 cm{sup 2}, and 10×10 cm{sup 2} field sizes. Results: The calculated central axis percentage depth doses (PDDs) in homogeneous solid water were within 2% of measurements for XVMC and GPUMCD. For the tumor-in-lung and lung phantoms, doses calculated by all of the algorithms were within the experimental uncertainty of the measurements (±2% in the homogeneous phantom and ±3% for the tumor-in-lung or lung phantoms), except for the 2×2 cm{sup 2} field size, where only the CCC algorithm differed from film by 5% in the lung region. The analysis for the bone-in-tissue and prosthetic phantoms is ongoing. Conclusion: The new GPUMCD algorithm calculated dose comparable to both the XVMC algorithm and to measurements in both a homogeneous solid water medium and the heterogeneous phantom representing lung or tumor-in-lung for 2×2 cm{sup 2} and larger fields.

  19. Clinical implementation of a digital tomosynthesis-based seed reconstruction algorithm for intraoperative postimplant dose evaluation in low dose rate prostate brachytherapy

    SciTech Connect

    Brunet-Benkhoucha, Malik; Verhaegen, Frank; Lassalle, Stephanie; Beliveau-Nadeau, Dominic; Reniers, Brigitte; Donath, David; Taussky, Daniel; Carrier, Jean-Francois

    2009-11-15

    Purpose: The low dose rate brachytherapy procedure would benefit from an intraoperative postimplant dosimetry verification technique to identify possible suboptimal dose coverage and suggest a potential reimplantation. The main objective of this project is to develop an efficient, operator-free, intraoperative seed detection technique using the imaging modalities available in a low dose rate brachytherapy treatment room. Methods: This intraoperative detection allows a complete dosimetry calculation to be performed right after an I-125 prostate seed implantation, while the patient is still under anesthesia. To accomplish this, a digital tomosynthesis-based algorithm was developed. This automatic filtered reconstruction of the 3D volume requires seven projections acquired over a total angle of 60 deg. with an isocentric imaging system. Results: A phantom study was performed to validate the technique, which was then used in a retrospective clinical study involving 23 patients. In the patient study, the automatic tomosynthesis-based reconstruction yielded a seed detection rate of 96.7% with 2.6% false positives. The seed localization error obtained in the phantom study is 0.4 ± 0.4 mm. The average time needed for reconstruction is below 1 min. The reconstruction algorithm also provides the seed orientation with an uncertainty of 10 ± 8 deg. The seed detection algorithm presented here is reliable and was efficiently used in the clinic. Conclusions: When combined with an appropriate coregistration technique to identify the organs in the seed coordinate system, this algorithm will offer new possibilities for a next generation of clinical brachytherapy systems.

  20. A Shearlet-based algorithm for quantum noise removal in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Zhang, Aguan; Jiang, Huiqin; Ma, Ling; Liu, Yumin; Yang, Xiaopeng

    2016-03-01

    Low-dose CT (LDCT) scanning is a potential way to reduce the population's radiation exposure from X-rays, so it is necessary to improve the quality of low-dose CT images. In this paper, we propose an effective algorithm for quantum noise removal in LDCT images using the shearlet transform. Because the quantum noise can be modeled as a Poisson process, we first transform it by using the Anscombe variance-stabilizing transform (VST), producing approximately Gaussian noise with unit variance. Second, the noise-free shearlet coefficients are estimated by adaptive hard-threshold processing in the shearlet domain. Third, we reconstruct the de-noised image using the inverse shearlet transform. Finally, an inverse Anscombe transform is applied to the de-noised image, which produces the improved image. The main contribution is to combine the Anscombe VST with the shearlet transform. In this way, edge coefficients and noise coefficients can be separated effectively from the high-frequency sub-bands. A number of experiments were performed on LDCT images using the proposed method. Both quantitative and visual results show that the proposed method can effectively reduce the quantum noise while enhancing the subtle details. It has certain value in clinical application.
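
    The variance-stabilizing step is compact enough to show directly. A Python sketch of the Anscombe transform and a simple algebraic inverse (unbiased inverses exist but are longer); after the forward transform, any Gaussian denoiser, such as the shearlet thresholding used in the paper, can be applied:

      import numpy as np

      def anscombe(x):
          """Map Poisson counts to approximately unit-variance Gaussian data."""
          return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

      def inverse_anscombe(y):
          return (np.asarray(y) / 2.0) ** 2 - 3.0 / 8.0

      counts = np.random.poisson(lam=20.0, size=100_000)
      print("stabilized std (about 1 expected): %.3f" % anscombe(counts).std())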

  1. Accuracy of pencil-beam redefinition algorithm dose calculations in patient-like cylindrical phantoms for bolus electron conformal therapy

    SciTech Connect

    Carver, Robert L.; Hogstrom, Kenneth R.; Chu, Connel; Fields, Robert S.; Sprunger, Conrad P.

    2013-07-15

    Purpose: The purpose of this study was to document the improved accuracy of the pencil beam redefinition algorithm (PBRA) compared to the pencil beam algorithm (PBA) for bolus electron conformal therapy using cylindrical patient phantoms based on patient computed tomography (CT) scans of retromolar trigone and nose cancer. Methods: PBRA and PBA electron dose calculations were compared with measured dose in retromolar trigone and nose phantoms, both with and without bolus. For the bolus treatment plans, a radiation oncologist outlined a planning target volume (PTV) on the central axis slice of the CT scan for each phantom. A bolus was designed using the planning.decimal® (p.d) software (.decimal, Inc., Sanford, FL) to conform the 90% dose line to the distal surface of the PTV. Dose measurements were taken with thermoluminescent dosimeters placed into predrilled holes. The Pinnacle³ (Philips Healthcare, Andover, MA) treatment planning system was used to calculate PBA dose distributions. The PBRA dose distributions were calculated with an in-house C++ program. In order to accurately account for the phantom materials, a table correlating CT number to relative electron stopping and scattering powers was compiled and used for both PBA and PBRA dose calculations. Accuracy was determined by comparing differences in measured and calculated dose, as well as the distance to agreement for each measurement point. Results: The measured doses had an average precision of 0.9%. For the retromolar trigone phantom, the PBRA dose calculations had an average ±1σ dose difference (calculated - measured) of -0.65% ± 1.62% without the bolus and -0.20% ± 1.54% with the bolus. The PBA dose calculation had an average dose difference of 0.19% ± 3.27% without the bolus and -0.05% ± 3.14% with the bolus. For the nose phantom, the PBRA dose calculations had an average dose difference of 0.50% ± 3.06% without bolus and -0.18% ± 1.22% with the bolus. The PBA

  2. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  3. SU-E-T-317: Dosimetric Evaluation of Acuros XB Advanced Dose Calculation Algorithm in Head and Neck Patients

    SciTech Connect

    Faught, A; Wu, Q

    2015-06-15

    Purpose: The Acuros XB photon dose calculation algorithm is a newly implemented calculation technique within the Eclipse treatment planning system using deterministic solutions to the linear Boltzmann transport equation. The goal of this study is to assess the clinical impact of dose differences arising from a retrospective comparison of calculations performed on patients using the Analytical Anisotropic Algorithm (AAA) and Acuros XB. Methods: Ten head and neck patients receiving intensity modulated radiation therapy were selected as a pilot study. Initial evaluation was based on the percentage of the planning target volume (PTV) covered by the prescription dose, the minimum dose within the PTV, and dose differences in critical structures. For patients receiving boost plans, dosimetric evaluations were performed on the plan sum of the primary and boost plans. Results: Among the ten patients there were a total of 21 PTVs corresponding to primary and boost volumes. Using the same normalization within Eclipse, the average percentage of the PTVs receiving the prescription dose was 95.6% for both AAA and Acuros XB. The average minimum doses within the PTVs, expressed as a percentage of the prescription to the volume, were 82.3% and 83.6% for AAA and Acuros XB, respectively. Neither comparison showed differences with statistical significance when subjected to a paired t-test. Statistical significance was found in the average difference of the maximum dose for the mandible (242.5 cGy, p=0.0005) and the cord with a 5 mm radial expansion (105.0 cGy, p=0.0005), and in the median dose for the left parotid (25.0 cGy, p=0.0423) and oral cavity (36.3 cGy, p=0.002). Conclusion: The Acuros XB dose calculation algorithm did not exhibit significant differences in PTV coverage when compared to the AAA algorithm. Significant differences in critical structures are likely attributable to the structures' proximity to high atomic number materials or air cavities, regions of known difficulty for the AAA
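
    The statistical comparison described above is a paired t-test across patients on each dose metric. A minimal sketch follows, assuming two equal-length arrays of per-patient values (for example, mandible maximum dose under AAA and under Acuros XB); it uses scipy's standard ttest_rel rather than anything specific to this study.

    ```python
    from scipy.stats import ttest_rel

    def compare_metric(aaa_values, axb_values, alpha=0.05):
        """Paired t-test on a per-patient dose metric for two algorithms."""
        t_stat, p_value = ttest_rel(aaa_values, axb_values)
        return t_stat, p_value, p_value < alpha
    ```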

  4. New noise reduction method for reducing CT scan dose: Combining Wiener filtering and edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Anam, Choirul; Haryanto, Freddy; Widita, Rena; Arif, Idam

    2015-09-01

    New noise reduction method for reducing dose of CT scans has been proposed. The new method is expected to address the major problems in the noise reduction algorithm, i.e. the decreasing in the spatial resolution of the image. The proposed method was developed by combining adaptive Wiener filtering and edge detection algorithms. The first step, the image was filtered with a Wiener filter. Separately, edge detection operation performed on the original image using the Prewitt method. The next step, a new image was generated based on the edge detection operation. At the edge area, the image was taken from the original image, while at the non-edge area, the image was taken from the image that had been filtered with a Wiener filter. The new method was tested on a CT image of the spatial resolution phantom, which was scanned by different current-time multiplication, namely 80, 130 and 200 mAs, while other exposure factors were kept in constant conditions. The spatial resolution phantom consists of six sets of bar pattern made of plexi-glass and separated at some distance by water. The new image quality assessed from the amount of noise and the magnitude of spatial resolution. Noise was calculated by determining the standard deviation of the homogeneous regions, while the spatial resolution was assessed by observation of the area sets of the bar pattern. In addition, to evaluate the performance of this new method has also been tested on patient CT images. From the measurements, the new method can reduce the noise to an average 64.85%, with a spatial resolution does not decrease significantly. Visually, the third set bar on the image phantom (the distance between the bar 1.0 mm) can still be distinguished, as well as on the original image. Meanwhile, if the image is only processed using Wiener filter, the second set bar (the distance between the bar 1.3 mm) are distinguishable. Testing this new method to patient image, its results in relatively the same. Thus, using this
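
    The combination described above, Wiener filtering everywhere except at detected edges, can be sketched in a few lines. This is an illustrative reconstruction of the published idea using standard scipy building blocks; the edge threshold is an assumed free parameter, not taken from the paper.

    ```python
    import numpy as np
    from scipy.signal import wiener
    from scipy.ndimage import prewitt

    def denoise_keep_edges(image, kernel_size=5, edge_frac=0.1):
        """Wiener-filter the image, then restore original pixels at edges
        detected with the Prewitt operator."""
        img = np.asarray(image, dtype=float)
        smoothed = wiener(img, mysize=kernel_size)
        grad = np.hypot(prewitt(img, axis=0), prewitt(img, axis=1))
        edges = grad > edge_frac * grad.max()
        # original pixels at edges, filtered pixels elsewhere
        return np.where(edges, img, smoothed)
    ```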

  5. Development of a phantom to validate high-dose-rate brachytherapy treatment planning systems with heterogeneous algorithms

    SciTech Connect

    Moura, Eduardo S.; Rostelato, Maria Elisa C. M.; Zeituni, Carlos A.

    2015-04-15

    Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VW), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from an HDR ¹⁹²Ir source: TLD-100™, Gafchromic® EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into the BrachyVision™ TPS software, which includes the grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup, and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine Task Group 43 formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points where measurements were performed. Results: Differences in the relative response as high as 11.5% from the homogeneous setup were found when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters. The

  6. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    SciTech Connect

    Egan, A; Laub, W

    2014-06-15

    Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions of very low density (<0.2 g/cm³). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: The error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. The algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking, this method can be developed into a clinical tool that may be able to help estimate AAA dose calculation errors and determine when it might be advisable to use Monte Carlo calculations.
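
    Of the three predictors, the low-density index is the most direct: it flags regions where AAA is known to overestimate dose. A minimal sketch under the stated <0.2 g/cm³ criterion is given below, using a hypothetical density grid; the FSI and HSI logic is more involved and is not reproduced here.

    ```python
    import numpy as np

    def low_density_index(density_g_cm3, threshold=0.2):
        """Flag voxels whose mass density falls below the threshold at which
        AAA is prone to overestimating dose; also return the flagged fraction."""
        mask = np.asarray(density_g_cm3) < threshold
        return mask, float(mask.mean())
    ```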

  7. Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers

    SciTech Connect

    Tseng, Hsin-Wu Kupinski, Matthew A.; Fan, Jiahua; Sainath, Paavana; Hsieh, Jiang

    2014-07-15

    Purpose: A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP), and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference-of-Gaussian (DDOG) channels. A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and the areas under localization ROC curves. Results: For signal known exactly (SKE) and location unknown/signal shape known tasks with circular signals of different sizes and contrasts, the authors' task-based measures showed that FBP-equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%–67% (head phantom) and 68%–82% (body phantom). For the location unknown/signal shape known study, the dose reduction ranges were 67%–75% for the head phantom and 67%–77% for the body phantom. These results suggest that IR images at lower dose settings can reach the same image quality as full dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the
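
    A channelized Hotelling observer of the kind used above can be sketched compactly: images are projected onto channel templates, a Hotelling template is formed from the channel outputs, and the AUC is estimated nonparametrically. This is a generic sketch (DDOG channel construction is omitted), not the authors' implementation.

    ```python
    import numpy as np

    def cho_auc(signal_imgs, noise_imgs, channels):
        """Channelized Hotelling observer AUC. signal_imgs/noise_imgs are
        image stacks; channels is a (n_pixels, n_channels) matrix."""
        vs = signal_imgs.reshape(len(signal_imgs), -1) @ channels
        vn = noise_imgs.reshape(len(noise_imgs), -1) @ channels
        s = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
        w = np.linalg.solve(s, vs.mean(axis=0) - vn.mean(axis=0))
        ts, tn = vs @ w, vn @ w
        # nonparametric AUC: fraction of signal scores above noise scores
        return float((ts[:, None] > tn[None, :]).mean())
    ```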

  8. Influence of radiation dose and reconstruction algorithm in MDCT assessment of airway wall thickness: A phantom study

    SciTech Connect

    Gomez-Cardona, Daniel; Nagle, Scott K.; Li, Ke; Chen, Guang-Hong; Robinson, Terry E.

    2015-10-15

    Purpose: Wall thickness (WT) is an airway feature of great interest for the assessment of morphological changes in the lung parenchyma. Multidetector computed tomography (MDCT) has recently been used to evaluate airway WT, but the potential risk of radiation-induced carcinogenesis—particularly in younger patients—might limit a wider use of this imaging method in clinical practice. The recent commercial implementation of the statistical model-based iterative reconstruction (MBIR) algorithm, instead of the conventional filtered back projection (FBP) algorithm, has enabled considerable radiation dose reduction in many other clinical applications of MDCT. The purpose of this work was to study the impact of radiation dose and MBIR on the MDCT assessment of airway WT. Methods: An airway phantom was scanned using a clinical MDCT system (Discovery CT750 HD, GE Healthcare) at 4 kV levels and 5 mAs levels. Both FBP and a commercial implementation of MBIR (Veo™, GE Healthcare) were used to reconstruct CT images of the airways. For each kV–mAs combination and each reconstruction algorithm, the contrast-to-noise ratio (CNR) of the airways was measured, and the WT of each airway was measured and compared with the nominal value; the relative bias and the angular standard deviation in the measured WT were calculated. For each airway and reconstruction algorithm, the overall performance of WT quantification across all 20 kV–mAs combinations was quantified by the sum of squares (SSQ) of the difference between the measured and nominal WT values. Finally, the particular kV–mAs combination and reconstruction algorithm that minimized radiation dose while still achieving a reference WT quantification accuracy level was chosen as the optimal acquisition and reconstruction setting. Results: The wall thicknesses of seven airways of different sizes were analyzed in the study. Compared with FBP, MBIR improved the CNR of the airways, particularly at low radiation dose

  9. Development of an algorithm to improve the accuracy of dose delivery in Gamma Knife radiosurgery

    NASA Astrophysics Data System (ADS)

    Cernica, George Dumitru

    2007-12-01

    Gamma Knife stereotactic radiosurgery has demonstrated decades of successful treatments. Despite its high spatial accuracy, the Gamma Knife's planning software, GammaPlan, uses a simple exponential as the TPR curve for all four collimator sizes, and a skull scaling device to acquire ruler measurements to interpolate a three-dimensional spline that models the patient's skull. The consequences of these approximations have not been previously investigated. The true TPR curves of the four collimators were measured by blocking 200 of the 201 sources with steel plugs. Additional attenuation was provided through the use of a 16 cm tungsten sphere, designed to enable beamlet measurements along one axis. TPR, PDD, and beamlet profiles were obtained using both an ion chamber and GafChromic EBT film for all collimators. Additionally, an in-house planning algorithm able to calculate the contour of the skull directly from an image set and implement the measured beamlet data in shot time calculations was developed. Clinical and theoretical Gamma Knife cases were imported into our algorithm. The TPR curves showed small deviations from a simple exponential curve, with average discrepancies under 1%, but with a maximum discrepancy of 2% found for the 18 mm collimator beamlet at shallow depths. The consequences for the PDDs of the beamlets were slight, with a maximum of 1.6% found with the 18 mm collimator beamlet. Beamlet profiles of the 4 mm, 8 mm, and 14 mm collimators showed some underestimates of the off-axis ratio near the shoulders (up to 10%). The toes of the profiles were underestimated for all collimators, with differences up to 7%. Shot times were affected by up to 1.6% due to TPR differences, but clinical cases showed deviations of no more than 0.5%. The beamlet profiles affected the dose calculations more significantly, with shot time calculations differing by as much as 0.8%. The skull scaling affected the shot time calculations the most significantly, with differences of up to 5

  10. Extracting Gene Networks for Low-Dose Radiation Using Graph Theoretical Algorithms

    PubMed Central

    Voy, Brynn H; Scharff, Jon A; Perkins, Andy D; Saxton, Arnold M; Borate, Bhavesh; Chesler, Elissa J; Branstetter, Lisa K; Langston, Michael A

    2006-01-01

    Genes with common functions often exhibit correlated expression levels, which can be used to identify sets of interacting genes from microarray data. Microarrays typically measure expression across genomic space, creating a massive matrix of co-expression that must be mined to extract only the most relevant gene interactions. We describe a graph theoretical approach to extracting co-expressed sets of genes, based on the computation of cliques. Unlike the results of traditional clustering algorithms, cliques are not disjoint and allow genes to be assigned to multiple sets of interacting partners, consistent with biological reality. A graph is created by thresholding the correlation matrix to include only the correlations most likely to signify functional relationships. Cliques computed from the graph correspond to sets of genes for which significant edges are present between all members of the set, representing potential members of common or interacting pathways. Clique membership can be used to infer function about poorly annotated genes, based on the known functions of better-annotated genes with which they share clique membership (i.e., “guilt-by-association”). We illustrate our method by applying it to microarray data collected from the spleens of mice exposed to low-dose ionizing radiation. Differential analysis is used to identify sets of genes whose interactions are impacted by radiation exposure. The correlation graph is also queried independently of clique to extract edges that are impacted by radiation. We present several examples of multiple gene interactions that are altered by radiation exposure and thus represent potential molecular pathways that mediate the radiation response. PMID:16854212
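
    The pipeline described above, thresholding the gene–gene correlation matrix, forming a graph, and enumerating cliques, can be sketched with networkx; the threshold value here is illustrative, and the paper's dedicated exact clique algorithms are replaced by networkx's generic maximal-clique enumeration (which is exponential in the worst case).

    ```python
    import numpy as np
    import networkx as nx

    def coexpression_cliques(expression, r_thresh=0.85, min_size=3):
        """expression: (n_genes, n_samples). Threshold the gene-gene
        correlation matrix, build a graph, and enumerate maximal cliques;
        genes may belong to several cliques, unlike disjoint clusters."""
        corr = np.corrcoef(expression)
        adj = (np.abs(corr) >= r_thresh).astype(int)
        np.fill_diagonal(adj, 0)
        graph = nx.from_numpy_array(adj)
        return [set(c) for c in nx.find_cliques(graph) if len(c) >= min_size]
    ```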

  11. Extracting gene networks for low-dose radiation using graph theoretical algorithms.

    PubMed

    Voy, Brynn H; Scharff, Jon A; Perkins, Andy D; Saxton, Arnold M; Borate, Bhavesh; Chesler, Elissa J; Branstetter, Lisa K; Langston, Michael A

    2006-07-21

    Genes with common functions often exhibit correlated expression levels, which can be used to identify sets of interacting genes from microarray data. Microarrays typically measure expression across genomic space, creating a massive matrix of co-expression that must be mined to extract only the most relevant gene interactions. We describe a graph theoretical approach to extracting co-expressed sets of genes, based on the computation of cliques. Unlike the results of traditional clustering algorithms, cliques are not disjoint and allow genes to be assigned to multiple sets of interacting partners, consistent with biological reality. A graph is created by thresholding the correlation matrix to include only the correlations most likely to signify functional relationships. Cliques computed from the graph correspond to sets of genes for which significant edges are present between all members of the set, representing potential members of common or interacting pathways. Clique membership can be used to infer function about poorly annotated genes, based on the known functions of better-annotated genes with which they share clique membership (i.e., "guilt-by-association"). We illustrate our method by applying it to microarray data collected from the spleens of mice exposed to low-dose ionizing radiation. Differential analysis is used to identify sets of genes whose interactions are impacted by radiation exposure. The correlation graph is also queried independently of clique to extract edges that are impacted by radiation. We present several examples of multiple gene interactions that are altered by radiation exposure and thus represent potential molecular pathways that mediate the radiation response.

  12. Critical Appraisal of Acuros XB and Anisotropic Analytic Algorithm Dose Calculation in Advanced Non-Small-Cell Lung Cancer Treatments

    SciTech Connect

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Cozzi, Luca

    2012-08-01

    Purpose: To assess the clinical impact of the Acuros XB algorithm (implemented in the Varian Eclipse treatment-planning system) in non-small-cell lung cancer (NSCLC) cases. Methods and Materials: A CT dataset of 10 patients presenting with advanced NSCLC was selected and contoured for planning target volume, lungs, heart, and spinal cord. Plans were created for 6-MV and 15-MV beams using three-dimensional conformal therapy, intensity-modulated therapy, and volumetric modulated arc therapy with RapidArc. Calculations were performed with Acuros XB and the Anisotropic Analytical Algorithm. To distinguish between differences coming from the different heterogeneity management and those coming from the algorithm and its implementation, all the plans were recalculated assigning Hounsfield Unit (HU) = 0 (Water) to the CT dataset. Results: Differences in dose distributions between the two algorithms calculated in Water were <0.5%. This suggests that the differences in the real CT dataset can be ascribed mainly to the different heterogeneity management, which is proven to be more accurate in the Acuros XB calculations. The planning target dose difference was stratified between the target in soft tissue, where the mean dose was found to be lower for Acuros XB, with a range of 0.4% ± 0.6% (intensity-modulated therapy, 6 MV) to 1.7% ± 0.2% (three-dimensional conformal therapy, 6 MV), and the target in lung tissue, where the mean dose was higher for 6 MV (from 0.2% ± 0.2% to 1.2% ± 0.5%) and lower for 15 MV (from 0.5% ± 0.5% to 2.0% ± 0.9%). Mean doses to organs at risk presented differences up to 3% of the mean structure dose in the worst case. No particular or systematic differences were found related to the various modalities. Calculation time ratios between Acuros XB and the Anisotropic Analytical Algorithm were 7 for three-dimensional conformal therapy, 5 for intensity-modulated therapy, and 0.2 for volumetric modulated arc therapy

  13. SU-E-T-456: Impact of Dose Calculation Algorithms On Biologically Optimized VMAT Plans for Esophageal Cancer

    SciTech Connect

    Thiyagarajan, Rajesh; Vikraman, S; Karrthick, KP; Ramu, M; Sambasivaselli, R; Senniandavar, V; Kataria, Tejinder; Nambiraj, N Arunai; Sigamani, Ashokkumar; Subbarao, Bargavan

    2015-06-15

    Purpose: To evaluate the impact of the dose calculation algorithm on the dose distribution of biologically optimized Volumetric Modulated Arc Therapy (VMAT) plans for esophageal cancer. Methods: Eighteen retrospectively treated patients with carcinoma of the esophagus were studied. VMAT plans were optimized using biological objectives in the Monaco (5.0) TPS for a 6 MV photon beam (Elekta Infinity). These plans were recalculated for final dose using the Monte Carlo (MC), Collapsed Cone Convolution (CCC) and Pencil Beam Convolution (PBC) algorithms from the Monaco and Oncentra Masterplan TPSs. A dose grid of 2 mm was used for all algorithms, and a 1% per-plan uncertainty was maintained for the MC calculation. The MC-based calculations were considered the reference for CCC and PBC. Dose volume histogram (DVH) indices (D95, D98, D50, etc.) of the target (PTV) and critical structures were compared to study the impact of all three algorithms. Results: Beam models were consistent with measured data. The mean differences relative to the MC calculation for D98, D95, D50 and D2 of the PTV were 0.37%, −0.21%, 1.51% and 1.18%, respectively, for CCC, and 3.28%, 2.75%, 3.61% and 3.08% for PBC. The heart D25 mean difference was 4.94% and 11.21% for CCC and PBC, respectively. The lung Dmean mean difference was 1.5% (CCC) and 4.1% (PBC). The spinal cord D2 mean difference was 2.35% (CCC) and 3.98% (PBC). Similar differences were observed for the liver and kidneys. The overall mean difference found for the target and critical structures was 0.71±1.52% and 2.71±3.10% for CCC, and 3.18±1.55% and 6.61±5.1% for PBC, respectively. Conclusion: We observed a significant overestimate of the dose distribution by CCC and PBC as compared to MC. The dose prediction of CCC is closer (<3%) to MC than that of PBC. This can be attributed to the poor performance of CCC and PBC in the inhomogeneous regions around the esophagus. CCC can be considered an alternative in the absence of an MC algorithm.
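
    The DVH indices compared above (D95, D98, D50, D2) all come from one operation: reading the minimum dose received by the hottest x% of a structure. A minimal sketch, assuming a flat array of per-voxel doses for the structure; this is the generic definition, not tied to any particular TPS.

    ```python
    import numpy as np

    def dose_at_volume(structure_doses, volume_pct):
        """D_x%: minimum dose received by the hottest x% of the voxels."""
        d = np.sort(np.asarray(structure_doses).ravel())[::-1]  # descending
        idx = max(int(np.ceil(volume_pct / 100.0 * d.size)) - 1, 0)
        return d[idx]
    ```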

  14. Feasibility of CBCT-based proton dose calculation using a histogram-matching algorithm in proton beam therapy.

    PubMed

    Arai, Kazuhiro; Kadoya, Noriyuki; Kato, Takahiro; Endo, Hiromitsu; Komori, Shinya; Abe, Yoshitomo; Nakamura, Tatsuya; Wada, Hitoshi; Kikuchi, Yasuhiro; Takai, Yoshihiro; Jingu, Keiichi

    2017-01-01

    The aim of this study was to confirm On-Board Imager cone-beam computed tomography (CBCT) using the histogram-matching algorithm as a useful method for proton dose calculation. We studied one head and neck phantom, one pelvic phantom, and ten patients with head and neck cancer treated using intensity-modulated radiation therapy (IMRT) and proton beam therapy. We modified Hounsfield unit (HU) values of CBCT and generated two modified CBCTs (mCBCT-RR, mCBCT-DIR) using the histogram-matching algorithm: modified CBCT with rigid registration (mCBCT-RR) and that with deformable image registration (mCBCT-DIR). Rigid and deformable image registration were applied to match the CBCT to planning CT. To evaluate the accuracy of the proton dose calculation, we compared dose differences in the dosimetric parameters (D2% and D98%) for clinical target volume (CTV) and planning target volume (PTV). We also evaluated the accuracy of the dosimetric parameters (Dmean and D2%) for some organs at risk, and compared the proton ranges (PR) between planning CT (reference) and CBCT or mCBCTs, and the gamma passing rates of CBCT and mCBCTs. For patients, the average dose and PR differences of mCBCTs were smaller than those of CBCT. Additionally, the average gamma passing rates of mCBCTs were larger than those of CBCT (e.g., 94.1±3.5% in mCBCT-DIR vs. 87.8±7.4% in CBCT). We evaluated the accuracy of the proton dose calculation in CBCT and mCBCTs for two phantoms and ten patients. Our results showed that HU modification using the histogram-matching algorithm could improve the accuracy of the proton dose calculation.
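
    Histogram matching of CBCT HU values to the planning CT, as used above, can be sketched with the standard CDF-mapping technique; this is the generic method, not the authors' specific implementation (which additionally applies rigid or deformable registration before matching).

    ```python
    import numpy as np

    def histogram_match(source, template):
        """Remap source (e.g., CBCT HU) values so their histogram matches
        the template's (e.g., planning CT)."""
        src, tmpl = np.asarray(source), np.asarray(template)
        s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True,
                                         return_counts=True)
        t_vals, t_cnt = np.unique(tmpl.ravel(), return_counts=True)
        s_cdf = np.cumsum(s_cnt) / src.size
        t_cdf = np.cumsum(t_cnt) / tmpl.size
        matched = np.interp(s_cdf, t_cdf, t_vals)  # invert the template CDF
        return matched[s_idx].reshape(src.shape)
    ```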

  15. Development of a novel individualized warfarin dose algorithm based on a population pharmacokinetic model with improved prediction accuracy for Chinese patients after heart valve replacement

    PubMed Central

    Zhu, Yu-bin; Hong, Xian-hua; Wei, Meng; Hu, Jing; Chen, Xin; Wang, Shu-kui; Zhu, Jun-rong; Yu, Feng; Sun, Jian-guo

    2017-01-01

    The gene-guided dosing strategy of warfarin generally leads to over-dosing in patients receiving doses lower than 2 mg/d, and only 50% of the individual variability in daily stable doses can be explained. In this study, we developed a novel population pharmacokinetic (PK) model-based warfarin dose algorithm for Han Chinese patients with valve replacement, to improve the dose prediction accuracy, especially in patients on low doses. The individual PK parameter, the apparent clearance of S- and R-warfarin (CLs), was obtained after establishing and validating the population PK model from 296 recruited patients with valve replacement. Then, the individual estimates of CLs, VKORC1 genotypes, the steady-state international normalized ratio (INR) values, and age were used to describe the maintenance doses by multiple linear regression for 144 steady-state patients. The newly established dosing algorithm was then validated in an independent group of 42 patients and was compared with other dosing algorithms for the accuracy and precision of prediction. The final regression model developed was as follows: Dose = −0.023×AGE + 1.834×VKORC1 + 0.952×INR + 2.156×CLs (the target INR value ranges from 1.8 to 2.5). The validation of the algorithm in another group of 42 patients showed that the proportion of individual variability explained (71.6%) was higher than with the gene-guided dosing models. The over-estimation rate in patients on low doses (<2 mg/d) was lower than with the other dosing methods. This novel dosing algorithm based on a population PK model improves the predictive performance of the maintenance dose of warfarin, especially for low dose (<2 mg/d) patients. PMID:28216623
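
    The final regression is simple enough to encode directly. A sketch follows, assuming AGE in years, VKORC1 coded numerically as in the paper (the coding scheme is not given in the abstract), the target INR, and the individual CLs estimate from the population PK model; it is valid only for the stated target INR range.

    ```python
    def warfarin_maintenance_dose(age, vkorc1_code, target_inr, cl_s):
        """Maintenance dose from the reported regression
        (target INR 1.8-2.5 per the abstract)."""
        return -0.023 * age + 1.834 * vkorc1_code + 0.952 * target_inr + 2.156 * cl_s
    ```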

  16. Study of the radiation dose reduction capability of a CT reconstruction algorithm: LCD performance assessment using mathematical model observers

    NASA Astrophysics Data System (ADS)

    Fan, Jiahua; Tseng, Hsin-Wu; Kupinski, Matthew; Cao, Guangzhi; Sainath, Paavana; Hsieh, Jiang

    2013-03-01

    Radiation dose to the patient has become a major concern today for computed tomography (CT) imaging in clinical practice. Various hardware and algorithm solutions have been designed to reduce dose. Among them, iterative reconstruction (IR) has been widely expected to be an effective dose reduction approach for CT. However, there is no clear understanding of the exact amount of dose saving an IR approach can offer for various clinical applications. We know that quantitative image quality assessment should be task-based. This work applied mathematical model observers to study the detectability performance of CT scan data reconstructed using an advanced IR approach as well as the conventional filtered back-projection (FBP) approach. The purpose of this work is to establish a practical and robust approach for CT IR detectability image quality evaluation and to assess the dose saving capability of the IR method under study. Low contrast (LC) objects embedded in head-size and body-size phantoms were imaged multiple times at different dose levels. Independent signal-present and signal-absent pairs were generated for training and testing in the model observer study. Receiver operating characteristic (ROC) curves for the location-known-exactly task and localization ROC (LROC) curves for the location-unknown task, as well as their corresponding area under the curve (AUC) values, were calculated. The results showed that an approximately threefold dose reduction was achieved using the IR method under study.

  17. SU-E-T-477: An Efficient Dose Correction Algorithm Accounting for Tissue Heterogeneities in LDR Brachytherapy

    SciTech Connect

    Mashouf, S; Lai, P; Karotki, A; Keller, B; Beachey, D; Pignol, J

    2014-06-01

    Purpose: Seed brachytherapy is currently used for adjuvant radiotherapy of early stage prostate and breast cancer patients. The current standard for calculation of the dose surrounding brachytherapy seeds is based on the American Association of Physicists in Medicine Task Group No. 43 (TG-43) formalism, which generates the dose in a homogeneous water medium. Recently, AAPM Task Group No. 186 emphasized the importance of accounting for tissue heterogeneities. This can be done using Monte Carlo (MC) methods, but it requires knowing the source structure and tissue atomic composition accurately. In this work we describe an efficient analytical dose inhomogeneity correction algorithm, implemented on the MIM Symphony treatment planning platform, to calculate dose distributions in heterogeneous media. Methods: An Inhomogeneity Correction Factor (ICF) is introduced as the ratio of the absorbed dose in tissue to that in a water medium. The ICF is a function of tissue properties and independent of the source structure. The ICF is extracted using CT images, and the absorbed dose in tissue can then be calculated by multiplying the dose as calculated by the TG-43 formalism by the ICF. To evaluate the methodology, we compared our results with Monte Carlo simulations as well as experiments in phantoms with known density and atomic compositions. Results: The dose distributions obtained by applying the ICF to the TG-43 protocol agreed very well with those of Monte Carlo simulations as well as experiments in all phantoms. In all cases, the mean relative error was reduced by at least 50% when the ICF correction was applied to the TG-43 protocol. Conclusion: We have developed a new analytical dose calculation method which enables personalized dose calculations in heterogeneous media. The advantages over stochastic methods are computational efficiency and the ease of integration into the clinical setting, as detailed source structure and tissue segmentation are not needed. University of Toronto, Natural Sciences and

  18. Optimizing clopidogrel dose response: a new clinical algorithm comprising CYP2C19 pharmacogenetics and drug interactions

    PubMed Central

    Saab, Yolande B; Zeenny, Rony; Ramadan, Wijdan H

    2015-01-01

    Purpose Response to clopidogrel varies widely, with nonresponse rates ranging from 4% to 30%. A reduced-function gene variant of CYP2C19 has been associated with lower drug metabolite levels, and hence diminished platelet inhibition. Drugs that alter CYP2C19 activity may also mimic genetic variants. The aim of the study is to investigate the cumulative effect of CYP2C19 gene polymorphisms and drug interactions that affect clopidogrel dosing, and to apply it in a new clinical-pharmacogenetic algorithm that can be used by clinicians in optimizing clopidogrel-based treatment. Method Clopidogrel dose optimization was analyzed based on two main parameters that affect the clopidogrel metabolite area under the curve: different CYP2C19 genotypes and concomitant drug intake. The adjusted clopidogrel dose was computed based on area under the curve ratios for different CYP2C19 genotypes when a drug interacting with CYP2C19 is added to clopidogrel treatment. A clinical-pharmacogenetic algorithm was developed based on whether clopidogrel shows 1) the expected effect as per indication, 2) little or no effect, or 3) clinical features that patients experience and that fit with clopidogrel adverse drug reactions. Results The study results show that all patients under clopidogrel treatment whose genotypes differ from *1/*1 and who are concomitantly taking other drugs metabolized by CYP2C19 require clopidogrel dose adjustment. To obtain a therapeutic effect and avoid adverse drug reactions, the therapeutic dose of 75 mg clopidogrel, for example, should be lowered to 6 mg or increased to 215 mg in patients with different genotypes. Conclusion The implementation of the new clopidogrel algorithm has the potential to maximize the benefit of clopidogrel pharmacological therapy. Clinicians would be able to personalize treatment to enhance efficacy and limit toxicity. PMID:26445541
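
    The adjustment described rests on scaling the standard 75 mg dose by the ratio of active-metabolite AUCs between the reference situation and the patient's genotype/drug combination. A hedged sketch of that scaling is below; the AUC values themselves come from the paper's analysis and are not reproduced here, and the exact direction and form of the ratio is an assumption on our part.

    ```python
    def adjusted_clopidogrel_dose(auc_reference, auc_patient, standard_dose_mg=75.0):
        """Scale the standard dose so the patient's expected active-metabolite
        exposure matches the reference exposure."""
        return standard_dose_mg * (auc_reference / auc_patient)
    ```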

  19. Prospective Evaluation of Prior Image Constrained Compressed Sensing (PICCS) Algorithm in Abdominal CT: A comparison of reduced dose with standard dose imaging

    PubMed Central

    Lubner, Meghan G.; Pickhardt, Perry J.; Kim, David H.; Tang, Jie; Munoz del Rio, Alejandro; Chen, Guang-Hong

    2014-01-01

    Purpose To prospectively study CT dose reduction using the "prior image constrained compressed sensing" (PICCS) reconstruction technique. Methods Immediately following routine standard dose (SD) abdominal MDCT, 50 patients (mean age, 57.7 years; mean BMI, 28.8) underwent a second reduced-dose (RD) scan (targeted dose reduction, 70%–90%). DLP, CTDIvol and SSDE were compared. Several reconstruction algorithms (FBP, ASIR, and PICCS) were applied to the RD series. SD images with FBP served as the reference standard. Two blinded readers evaluated each series for subjective image quality and focal lesion detection. Results Mean DLP, CTDIvol, and SSDE for the RD series were 140.3 mGy·cm (median 79.4), 3.7 mGy (median 1.8), and 4.2 mGy (median 2.3), compared with 493.7 mGy·cm (median 345.8), 12.9 mGy (median 7.9), and 14.6 mGy (median 10.1) for the SD series, respectively. Mean effective patient diameter was 30.1 cm (median 30), which translates to a mean SSDE reduction of 72% (p<0.001). The RD-PICCS image quality score was 2.8±0.5, improved over RD-FBP (1.7±0.7) and RD-ASIR (1.9±0.8) (p<0.001), but lower than SD (3.5±0.5) (p<0.001). Readers detected 81% (184/228) of focal lesions on the RD-PICCS series, versus 67% (153/228) and 65% (149/228) for RD-FBP and RD-ASIR, respectively. Mean image noise was significantly reduced on the RD-PICCS series (13.9 HU) compared with RD-FBP (57.2) and RD-ASIR (44.1) (p<0.001). Conclusion PICCS allows for marked dose reduction at abdominal CT with improved image quality and diagnostic performance over reduced-dose FBP and ASIR. Further study is needed to determine indication-specific dose reduction levels that preserve acceptable diagnostic accuracy relative to higher-dose protocols. PMID:24943136

  20. Comparison of dose calculation algorithms in phantoms with lung equivalent heterogeneities under conditions of lateral electronic disequilibrium.

    PubMed

    Carrasco, P; Jornet, N; Duch, M A; Weber, L; Ginjaume, M; Eudaldo, T; Jurado, D; Ruiz, A; Ribas, M

    2004-10-01

    An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung equivalent heterogeneity, by means of several detectors, and compared against the dose values predicted by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane-parallel and cylindrical ionization chambers, and beam profiles with films. Additionally, Monte Carlo simulations by means of the PENELOPE code were performed. Four different field sizes (10 × 10, 5 × 5, 2 × 2, and 1 × 1 cm²) and two lung equivalent materials (CIRS, relative electron density ρ = 0.195, and St. Bartholomew Hospital, London, ρ = 0.244–0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system, and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that agreed with the measured values within a 2% average inside all media were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose on the beam axis was found for the EqTAR algorithm inside the CIRS lung equivalent material in a 2 × 2 cm² 18 MV x-ray beam. In these conditions, the average and maximum differences against the TLD measurements were 32% and 39%, respectively. In the water equivalent part of the phantom every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces, where differences up to 24% were found for 2 × 2 cm² 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo simulations, yielding minimal

  1. Comparison of build-up region doses in oblique tangential 6 MV photon beams calculated by AAA and CCC algorithms in breast Rando phantom

    NASA Astrophysics Data System (ADS)

    Masunun, P.; Tangboonduangjit, P.; Dumrongkijudom, N.

    2016-03-01

    The purpose of this study is to compare the build-up region doses on the breast Rando phantom surface with bolus covering, the doses inside the breast Rando phantom, and the doses in the lung, a heterogeneous region, as calculated by two algorithms. The AAA in the Eclipse TPS and the collapsed cone convolution (CCC) algorithm in the Pinnacle treatment planning system were used to plan a tangential field technique with a 6 MV photon beam and a total dose of 200 cGy in the breast Rando phantom covered with bolus (5 mm and 10 mm). TLDs were calibrated with Cobalt-60 and used to measure the doses during irradiation. The treatment planning results show that the doses in the build-up region and in the breast phantom closely matched between the two algorithms, with less than 2% difference. However, overestimates of the dose in the lung (L2) were found with AAA, with differences of 13.78% and 6.06% at 5 mm and 10 mm bolus thickness, respectively, compared with the CCC algorithm. The TLD measurements show underestimates in the build-up region and in the breast phantom, but the doses in the lung (L2) were overestimated when compared with the doses in the two plans at both bolus thicknesses.

  2. Difference in dose-volumetric data between the analytical anisotropic algorithm, the dose-to-medium, and the dose-to-water reporting modes of the Acuros XB for lung stereotactic body radiation therapy.

    PubMed

    Mampuya, Wambaka A; Nakamura, Mitsuhiro; Hirose, Yoshinori; Kitsuda, Kenji; Ishigaki, Takashi; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-09-08

    The purpose of this study was to evaluate the difference in dose-volumetric data between the analytical anisotropic algorithm (AAA) and the two dose reporting modes of the Acuros XB, namely, the dose to water (AXB_Dw) and the dose to medium (AXB_Dm), in lung stereotactic body radiotherapy (SBRT). Thirty-eight plans were generated using the AXB_Dm in the Eclipse Treatment Planning System (TPS) and then recalculated with the AXB_Dw and AAA, using an identical beam setup. A dose of 50 Gy in 4 fractions was prescribed to the isocenter and to the planning target volume (PTV) D95%. The isocenter was always inside the PTV. The following dose-volumetric parameters were evaluated: D2%, D50%, D95%, and D98% for the internal target volume (ITV) and the PTV. Two-tailed paired Student's t-tests determined statistical significance. Although for most of the parameters evaluated the mean differences observed between the AAA, AXB_Dm, and AXB_Dw were statistically significant (p < 0.05), the absolute differences were rather small, in general less than 5% points. The maximum mean difference was observed in the ITV D50% between the AXB_Dm and the AAA and was 1.7% points under the isocenter prescription and 3.3% points under the D95% prescription. AXB_Dm produced higher values than AXB_Dw, with differences ranging from 0.4 to 1.1% points under the isocenter prescription and 0.0 to 0.7% points under the PTV D95% prescription. The differences observed under the PTV D95% prescription were larger than those observed under the isocenter prescription between AXB_Dm and AAA, AXB_Dm and AXB_Dw, and AXB_Dw and AAA. Although statistically significant, the mean differences between the three algorithms are within 3.3% points.

  3. An algorithm for kilovoltage x-ray dose calculations with applications in kV-CBCT scans and 2D planar projected radiographs

    NASA Astrophysics Data System (ADS)

    Pawlowski, Jason M.; Ding, George X.

    2014-04-01

    A new model-based dose calculation algorithm is presented for kilovoltage x-rays and is tested for the cases of calculating the radiation dose from kilovoltage cone-beam CT (kV-CBCT) and 2D planar projected radiographs. This algorithm calculates the radiation dose to water-like media as the sum of primary and scattered dose components. The scatter dose is calculated by convolution of a newly introduced, empirically parameterized scatter dose kernel with the primary photon fluence. Several approximations are introduced to increase the scatter dose calculation efficiency: (1) the photon energy spectrum is approximated as monoenergetic; (2) density inhomogeneities are accounted for by implementing a global distance scaling factor in the scatter kernel; (3) kernel tilting is ignored. These approximations allow for efficient calculation of the scatter dose convolution with the fast Fourier transform. Monte Carlo simulations were used to obtain the model parameters. The accuracy of using this model-based algorithm was validated by comparing with the Monte Carlo method for calculating dose distributions for real patients resulting from radiotherapy image guidance procedures including volumetric kV-CBCT scans and 2D planar projected radiographs. For all patients studied, mean dose-to-water errors for kV-CBCT are within 0.3% with a maximum standard deviation error of 4.1%. Using a medium-dependent correction method to account for the effects of photoabsorption in bone on the dose distribution, mean dose-to-medium errors for kV-CBCT are within 3.6% for bone and 2.4% for soft tissues. This algorithm offers acceptable accuracy and has the potential to extend the applicability of model-based dose calculation algorithms from megavoltage to kilovoltage photon beams.
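
    The efficiency of the scatter model above comes from evaluating the kernel convolution with FFTs, which the monoenergetic and kernel-invariance approximations make possible. A minimal sketch, assuming the primary fluence and the (centered) kernel are sampled on the same grid; in practice the arrays would need zero-padding to avoid circular-convolution edge effects.

    ```python
    import numpy as np

    def scatter_dose(primary_fluence, scatter_kernel):
        """Scatter dose as the FFT convolution of the primary photon fluence
        with a spatially invariant scatter kernel (kernel centered on grid)."""
        f = np.fft.rfftn(primary_fluence)
        k = np.fft.rfftn(np.fft.ifftshift(scatter_kernel))
        return np.fft.irfftn(f * k, s=primary_fluence.shape)
    ```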

  4. SU-E-T-37: A GPU-Based Pencil Beam Algorithm for Dose Calculations in Proton Radiation Therapy

    SciTech Connect

    Kalantzis, G; Leventouri, T; Tachibana, H; Shang, C

    2015-06-15

    Purpose: Recent developments in radiation therapy have been focused on applications of charged particles, especially protons. Over the years several dose calculation methods have been proposed in proton therapy. A common characteristic of all these methods is their extensive computational burden. In the current study we present, for the first time to the best of our knowledge, a GPU-based PBA for proton dose calculations in Matlab. Methods: In the current study we employed an analytical expression for the proton depth dose distribution. The central-axis term is taken from the broad-beam central-axis depth dose in water, modified by an inverse square correction, while the distribution of the off-axis term was considered Gaussian. The serial code was implemented in MATLAB and was launched on a desktop with a quad core Intel Xeon X5550 at 2.67 GHz with 8 GB of RAM. For the parallelization on the GPU, the Parallel Computing Toolbox was employed and the code was launched on a GTX 770 with Kepler architecture. The performance comparison was established on the speedup factors. Results: The performance of the GPU code was evaluated for three different energies: low (50 MeV), medium (100 MeV) and high (150 MeV). Four square fields were selected for each energy, and the dose calculations were performed with both the serial and parallel codes for a homogeneous water phantom with size 300×300×300 mm³. The resolution of the PBs was set to 1.0 mm. The maximum speedup of ∼127 was achieved for the highest energy and the largest field size. Conclusion: A GPU-based PB algorithm for proton dose calculations in Matlab was presented. A maximum speedup of ∼127 was achieved. Future directions of the current work include the extension of our method to dose calculation in heterogeneous phantoms.
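
    The dose model described factors into a central-axis depth dose with an inverse-square correction and a Gaussian off-axis term. A sketch under those assumptions follows; depth_dose stands in as a hypothetical callable for the measured broad-beam central-axis data, and the parameter names are illustrative.

    ```python
    import numpy as np

    def pencil_beam_dose(depth_dose, z, r, sigma, ssd=1000.0):
        """Dose from one proton pencil beam at depth z (mm) and off-axis
        distance r (mm): central-axis term x inverse-square x Gaussian."""
        inv_square = (ssd / (ssd + z)) ** 2
        off_axis = np.exp(-r ** 2 / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
        return depth_dose(z) * inv_square * off_axis
    ```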

  5. Applications of nonlocal means algorithm in low-dose X-ray CT image processing and reconstruction: A review.

    PubMed

    Zhang, Hao; Zeng, Dong; Zhang, Hua; Wang, Jing; Liang, Zhengrong; Ma, Jianhua

    2017-03-01

    Low-dose X-ray computed tomography (LDCT) imaging is highly recommended for use in the clinic because of growing concerns over excessive radiation exposure. However, the CT images reconstructed by the conventional filtered back-projection (FBP) method from low-dose acquisitions may be severely degraded with noise and streak artifacts due to excessive X-ray quantum noise, or with view-aliasing artifacts due to insufficient angular sampling. In 2005, the nonlocal means (NLM) algorithm was introduced as a non-iterative edge-preserving filter to denoise natural images corrupted by additive Gaussian noise, and showed superior performance. It has since been adapted and applied to many other image types and various inverse problems. This paper specifically reviews the applications of the NLM algorithm in LDCT image processing and reconstruction, and explicitly demonstrates its improving effects on the reconstructed CT image quality from low-dose acquisitions. The effectiveness of these applications on LDCT and their relative performance are described in detail.
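
    For reference, the core of the NLM filter reviewed above is a patch-similarity weighted average. A deliberately simple (and slow) 2D sketch is given below; practical LDCT implementations add the search-window optimizations, noise-adaptive smoothing parameters, and sinogram- or iteration-embedded variants discussed in the review.

    ```python
    import numpy as np

    def nlm_denoise(img, patch=3, search=7, h=10.0):
        """Minimal nonlocal-means filter for a 2D image."""
        pr, sr = patch // 2, search // 2
        pad = pr + sr
        padded = np.pad(np.asarray(img, dtype=float), pad, mode='reflect')
        out = np.zeros(img.shape, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                ci, cj = i + pad, j + pad
                ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
                num, den = 0.0, 0.0
                for di in range(-sr, sr + 1):
                    for dj in range(-sr, sr + 1):
                        nb = padded[ci + di - pr:ci + di + pr + 1,
                                    cj + dj - pr:cj + dj + pr + 1]
                        # weight by patch similarity, not spatial proximity
                        w = np.exp(-np.sum((ref - nb) ** 2) / (h * h))
                        num += w * padded[ci + di, cj + dj]
                        den += w
                out[i, j] = num / den
        return out
    ```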

  6. Monte Carlo simulation of ruthenium eye plaques with GEANT4: influence of multiple scattering algorithms, the spectrum and the geometry on depth dose profiles

    NASA Astrophysics Data System (ADS)

    Sommer, H.; Ebenau, M.; Spaan, B.; Eichmann, M.

    2017-03-01

    Previous studies show remarkable differences in the simulation of electron depth dose profiles of ruthenium eye plaques. We examined the influence of the scoring and simulation geometry, the source spectrum and the multiple scattering algorithm on the depth dose profile using GEANT4. The simulated absolute dose deposition agrees with absolute dose data from the manufacturer within the measurement uncertainty. Variations in the simulation geometry as well as the source spectrum have only a small influence on the depth dose profiles. However, the multiple scattering algorithms have the largest influence on the depth dose profiles. They deposit up to 20% less dose compared to the single scattering implementation. We recommend researchers who are interested in simulating low- to medium-energy electrons to examine their simulation under the influence of different multiple scattering settings. Since the simulation and scoring geometry as well as the exact physics settings are best described by the source code of the application, we made the code publicly available.

  7. Monte Carlo simulation of ruthenium eye plaques with GEANT4: influence of multiple scattering algorithms, the spectrum and the geometry on depth dose profiles.

    PubMed

    Sommer, H; Ebenau, M; Spaan, B; Eichmann, M

    2017-03-07

    Previous studies show remarkable differences in the simulation of electron depth dose profiles of ruthenium eye plaques. We examined the influence of the scoring and simulation geometry, the source spectrum and the multiple scattering algorithm on the depth dose profile using GEANT4. The simulated absolute dose deposition agrees with absolute dose data from the manufacturer within the measurement uncertainty. Variations in the simulation geometry as well as the source spectrum have only a small influence on the depth dose profiles. However, the multiple scattering algorithms have the largest influence on the depth dose profiles. They deposit up to 20% less dose compared to the single scattering implementation. We recommend researchers who are interested in simulating low- to medium-energy electrons to examine their simulation under the influence of different multiple scattering settings. Since the simulation and scoring geometry as well as the exact physics settings are best described by the source code of the application, we made the code publicly available.

  8. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into the liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry rotation time of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction, with little or no measurable effect on the PET image. For the four ultra-low dose

  9. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    PubMed

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-07

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into the liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry rotation time of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction, with little or no measurable effect on the PET image. For the four ultra-low dose levels
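
    One of the options evaluated above, estimating the missing projections of a sparse-view scan by interpolation across view angle, can be sketched directly on the sinogram. This is a generic illustration, not the authors' code: detector channels are interpolated independently, and the angular wrap-around is handled with numpy's periodic interpolation.

    ```python
    import numpy as np

    def interpolate_sparse_sinogram(sparse_sino, sparse_angles, full_angles):
        """Fill in missing views by linear interpolation over gantry angle.
        sparse_sino: (n_sparse_views, n_detector_channels); angles in radians."""
        full = np.empty((len(full_angles), sparse_sino.shape[1]))
        for ch in range(sparse_sino.shape[1]):
            full[:, ch] = np.interp(full_angles, sparse_angles,
                                    sparse_sino[:, ch], period=2.0 * np.pi)
        return full
    ```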

  10. SU-E-T-427: Feasibility Study for Evaluation of IMRT Dose Distribution Using Geant4-Based Automated Algorithms

    SciTech Connect

    Choi, H; Shin, W; Testa, M; Min, C; Kim, J

    2015-06-15

    Purpose: For intensity-modulated radiation therapy (IMRT) treatment planning validation using Monte Carlo (MC) simulations, a precise and automated procedure is necessary to evaluate the patient dose distribution. The aim of this study is to develop an automated algorithm for IMRT simulations using DICOM files and to evaluate the patient dose based on 4D simulation using the Geant4 MC toolkit. Methods: The head of a clinical linac (Varian Clinac 2300 IX) was modeled in Geant4 along with particular components such as the flattening filter and the multi-leaf collimator (MLC). Patient information and the positions of the MLC were imported from the DICOM-RT interface. For each position of the MLC, a step-and-shoot technique was adopted. PDDs and lateral profiles were simulated in a water phantom (50×50×40 cm³) and compared to measurement data. We used a lung phantom, and the MC dose calculations were compared to the clinical treatment plan used at Seoul National University Hospital. Results: In order to reproduce the measurement data, we tuned three free parameters: the mean and standard deviation of the primary electron beam energy and the beam spot size. These parameters for 6 MV were found to be 5.6 MeV, 0.2378 MeV and 1 mm FWHM, respectively. The average dose difference between measurements and simulations was less than 2% for PDDs and radial profiles. The lung phantom study showed fairly good agreement between the MC and planning doses despite some unavoidable statistical fluctuation. Conclusion: The current feasibility study using the lung phantom shows the potential for IMRT dose validation using 4D MC simulations with the Geant4 toolkit. This research was supported by the Korea Institute of Nuclear Safety and by Development of Measurement Standards for Medical Radiation funded by the Korea Research Institute of Standards and Science. (KRISS-2015-15011032)

  11. SU-E-T-252: Developing a Pencil Beam Dose Calculation Algorithm for CyberKnife System

    SciTech Connect

    Liang, B; Liu, B; Zhou, F; Xu, S; Wu, Q

    2015-06-15

    Purpose: Currently there are two dose calculation algorithms available in the CyberKnife planning system: ray-tracing and Monte Carlo; the former is not accurate and the latter is time-consuming for the irregular fields shaped by the recently introduced MLC. The purpose of this study is to develop a fast and accurate pencil beam dose calculation algorithm which can handle irregular fields. Methods: A pencil beam dose calculation algorithm widely used in linac systems was modified. The algorithm models both primary (short range) and scatter (long range) components with a single input parameter: TPR20/10. The TPR20/10 value was first estimated to derive an initial set of pencil beam model parameters (PBMPs). The agreement between predicted and measured TPRs for all cones was evaluated using the root mean square of the difference (RMS_TPR), which was then minimized by adjusting the PBMPs. The PBMPs were further tuned to minimize the OCR RMS (RMS_OCR) by focusing on the outfield region. Finally, an arbitrary intensity profile was optimized by minimizing the RMS_OCR difference in the infield region. To test model validity, the PBMPs were obtained by fitting to only a subset of cones (4) and applied to all cones (12) for evaluation. Results: With RMS values normalized to dmax and all cones combined, the average RMS_TPR in the build-up and descending regions is 2.3% and 0.4%, respectively. The RMS_OCR in the infield, penumbra and outfield regions is 1.5%, 7.8% and 0.6%, respectively. The average DTA in the penumbra region is 0.5 mm. No trend was found in TPR or OCR agreement among cones or depths. Conclusion: We have developed a pencil beam algorithm for the CyberKnife system. The prediction agrees well with commissioning data. Only a subset of measurements is needed to derive the model. Further improvements are needed for the TPR build-up region and the OCR penumbra. Experimental validation on MLC-shaped irregular fields needs to be performed. This work was partially supported by the National
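
    The commissioning loop described, adjusting pencil-beam model parameters to minimize the RMS difference against measured TPRs, is an ordinary least-squares fit. A sketch with scipy follows; tpr_model is a hypothetical stand-in for the paper's primary-plus-scatter model, which the abstract does not specify.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def fit_model_params(tpr_model, depths, measured_tpr, initial_params):
        """Minimize RMS(predicted - measured) over the model parameters."""
        residuals = lambda p: tpr_model(p, depths) - measured_tpr
        result = least_squares(residuals, initial_params)
        rms = np.sqrt(np.mean(result.fun ** 2))
        return result.x, rms
    ```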

  12. Enhancements to commissioning techniques and quality assurance of brachytherapy treatment planning systems that use model-based dose calculation algorithms.

    PubMed

    Rivard, Mark J; Beaulieu, Luc; Mourtada, Firas

    2010-06-01

    The current standard for brachytherapy dose calculations is based on the AAPM TG-43 formalism. Simplifications used in the TG-43 formalism have been challenged by many publications over the past decade. With the continuous increase in computing power, approaches based on fundamental physics processes or physics models such as the linear Boltzmann transport equation are now applicable in a clinical setting. Thus, model-based dose calculation algorithms (MBDCAs) have been introduced to address TG-43 limitations for brachytherapy. The MBDCA approach results in a paradigm shift, which will require a concerted effort to integrate these algorithms properly into the radiation therapy community. MBDCAs will improve treatment planning relative to the implementation of the traditional TG-43 formalism by accounting for individualized, patient-specific radiation scatter conditions and the radiological effect of material heterogeneities differing from water. A snapshot of the current status of MBDCAs and of the AAPM Task Group reports related to QA recommendations for brachytherapy treatment planning is presented. Some simplified Monte Carlo simulation results are also presented to delineate the effects MBDCAs are called to account for and to facilitate the discussion of suggestions for (i) new QA standards to augment current societal recommendations, (ii) consideration of dose specification such as dose to medium in medium, collisional kerma to medium in medium, or collisional kerma to water in medium, and (iii) the infrastructure needed to uniformly introduce these new algorithms. Suggestions in this Vision 20/20 article may serve as a basis for developing future standards to be recommended by professional societies such as the AAPM, ESTRO, and ABS toward providing consistent clinical implementation throughout the brachytherapy community and rigorous quality management of MBDCA-based treatment planning systems.
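
    For context, the water-based formalism that MBDCAs move beyond computes the dose rate around a line source as (standard 2D form, per the TG-43U1 convention):

        \dot{D}(r,\theta) = S_K \, \Lambda \, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)} \, g_L(r) \, F(r,\theta)

    where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the geometry function, g_L the radial dose function, F the 2D anisotropy function, and (r_0, θ_0) = (1 cm, 90°). Every factor is tabulated for water, which is exactly the homogeneity assumption MBDCAs relax.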

  13. Performance evaluation of iterative reconstruction algorithms for achieving CT radiation dose reduction - a phantom study.

    PubMed

    Dodge, Cristina T; Tamm, Eric P; Cody, Dianna D; Liu, Xinming; Jensen, Corey T; Wei, Wei; Kundra, Vikas; Rong, X John

    2016-03-08

    The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR), over a range of typical to low-dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat-equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back-projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low-contrast detectability were evaluated from noise and contrast-to-noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five-fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high-contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial resolution for MBIR
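
    The dose-reduction potential in studies like this is quantified through ROI statistics such as the CNR. A minimal sketch of that measurement, assuming the common definition of contrast over background noise (conventions vary, and the pixel arrays below are synthetic placeholders for Catphan ROIs):

      import numpy as np

      def cnr(roi_insert, roi_background):
          # Contrast-to-noise ratio between a low-contrast insert and background.
          return abs(roi_insert.mean() - roi_background.mean()) / roi_background.std()

      rng = np.random.default_rng(0)
      insert = rng.normal(60.0, 12.0, size=(20, 20))      # HU, hypothetical insert ROI
      background = rng.normal(40.0, 12.0, size=(40, 40))  # HU, background ROI
      print(round(cnr(insert, background), 2))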

  14. A simplified analytical dose calculation algorithm accounting for tissue heterogeneity for low-energy brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Mashouf, Shahram; Lechtman, Eli; Beaulieu, Luc; Verhaegen, Frank; Keller, Brian M.; Ravi, Ananth; Pignol, Jean-Philippe

    2013-09-01

    The American Association of Physicists in Medicine Task Group No. 43 (AAPM TG-43) formalism is the standard for seed brachytherapy dose calculation. However, for breast seed implants, Monte Carlo simulations reveal large errors due to tissue heterogeneity. Since TG-43 includes several factors to account for source geometry, anisotropy and strength, we propose an additional correction factor, called the inhomogeneity correction factor (ICF), accounting for tissue heterogeneity in Pd-103 brachytherapy. This correction factor is calculated as a function of the media linear attenuation coefficient and mass energy absorption coefficient, and it is independent of the source internal structure. Ultimately, the dose in heterogeneous media can be calculated as the product of the dose in water, as calculated by the TG-43 protocol, and the ICF. To validate the ICF methodology, the dose absorbed in spherical phantoms with large tissue heterogeneities was compared using the TG-43 formalism corrected for heterogeneity versus Monte Carlo simulations. The agreement between Monte Carlo simulations and the ICF method remained within 5% in soft tissues up to several centimeters from a Pd-103 source. Compared to Monte Carlo, the ICF method can easily be integrated into a clinical treatment planning system, and it does not require the detailed internal structure of the source or the photon phase-space.
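
    In equation form, the proposed factorization is

        D_{\mathrm{het}}(\mathbf{r}) = D^{\mathrm{water}}_{\mathrm{TG\text{-}43}}(\mathbf{r}) \times \mathrm{ICF}(\mathbf{r})

    where, per the abstract, ICF is a function of the medium's linear attenuation coefficient and mass energy absorption coefficient; the exact functional form is not reproduced here.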

  15. A simplified analytical dose calculation algorithm accounting for tissue heterogeneity for low-energy brachytherapy sources.

    PubMed

    Mashouf, Shahram; Lechtman, Eli; Beaulieu, Luc; Verhaegen, Frank; Keller, Brian M; Ravi, Ananth; Pignol, Jean-Philippe

    2013-09-21

    The American Association of Physicists in Medicine Task Group No. 43 (AAPM TG-43) formalism is the standard for seed brachytherapy dose calculation. However, for breast seed implants, Monte Carlo simulations reveal large errors due to tissue heterogeneity. Since TG-43 includes several factors to account for source geometry, anisotropy and strength, we propose an additional correction factor, called the inhomogeneity correction factor (ICF), accounting for tissue heterogeneity in Pd-103 brachytherapy. This correction factor is calculated as a function of the media linear attenuation coefficient and mass energy absorption coefficient, and it is independent of the source internal structure. Ultimately, the dose in heterogeneous media can be calculated as the product of the dose in water, as calculated by the TG-43 protocol, and the ICF. To validate the ICF methodology, the dose absorbed in spherical phantoms with large tissue heterogeneities was compared using the TG-43 formalism corrected for heterogeneity versus Monte Carlo simulations. The agreement between Monte Carlo simulations and the ICF method remained within 5% in soft tissues up to several centimeters from a Pd-103 source. Compared to Monte Carlo, the ICF method can easily be integrated into a clinical treatment planning system, and it does not require the detailed internal structure of the source or the photon phase-space.

  16. A dose calculation algorithm with correction for proton-nucleus interactions in non-water materials for proton radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Inaniwa, T.; Kanematsu, N.; Sato, S.; Kohno, R.

    2016-01-01

    In treatment planning for proton radiotherapy, the dose measured in water is applied to the patient dose calculation with density scaling by the stopping power ratio ρS. Since body tissues are chemically different from water, this approximation may cause dose calculation errors, especially due to differences in nuclear interactions. We proposed and validated an algorithm for correcting these errors. The dose in water is decomposed into three constituents according to the physical interactions of protons in water: the dose from primary protons continuously slowing down by electromagnetic interactions, the dose from protons scattered by elastic and/or inelastic interactions, and the dose resulting from nonelastic interactions. The proportions of the three dose constituents differ between body tissues and water. We determined correction factors for the proportions of the dose constituents with Monte Carlo simulations in various standard body tissues, and formulated them as functions of ρS for patient dose calculation. The influence of nuclear interactions on dose was assessed by comparing the Monte Carlo simulated dose and the uncorrected dose in common phantom materials. The influence around the Bragg peak amounted to -6% for polytetrafluoroethylene and 0.3% for polyethylene. The validity of the correction method was confirmed by comparing the simulated and corrected doses in the materials. The deviation was below 0.8% for all materials. The accuracy of the correction factors derived with Monte Carlo simulations was separately verified through irradiation experiments with a 235 MeV proton beam using common phantom materials. The corrected doses agreed with the measurements within 0.4% for all materials except graphite. The influence on tumor dose was assessed in a prostate case. The dose reduction in the tumor was below 0.5%. Our results verify that this algorithm is practical and accurate for proton radiotherapy treatment planning, and
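
    The correction described here amounts to re-weighting the three dose constituents by tissue-dependent factors expressed as functions of ρS. A sketch of that final step, with hypothetical linear fits standing in for the published parameterization:

      import numpy as np

      # Re-weight the three water-dose constituents with correction factors that
      # are functions of the stopping power ratio rho_s. The coefficients below
      # are placeholders, not the fitted values from the paper.
      def corrected_dose(d_em, d_scatter, d_nonelastic, rho_s, coeffs):
          f_em, f_sc, f_ne = (np.polyval(c, rho_s) for c in coeffs)
          return f_em * d_em + f_sc * d_scatter + f_ne * d_nonelastic

      coeffs = [(0.02, 0.98), (0.10, 0.90), (-0.30, 1.25)]   # linear in rho_s
      print(corrected_dose(1.60, 0.25, 0.15, rho_s=1.05, coeffs=coeffs))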

  17. SU-E-T-481: Dosimetric Comparison of Acuros XB and Anisotropic Analytic Algorithm with Commercial Monte Carlo Based Dose Calculation Algorithm for Stereotactic Body Radiation Therapy of Lung Cancer

    SciTech Connect

    Cao, M; Tenn, S; Lee, C; Yang, Y; Lamb, J; Agazaryan, N; Lee, P; Low, D

    2014-06-01

    Purpose: To evaluate the performance of three commercially available treatment planning systems for stereotactic body radiation therapy (SBRT) of lung cancer using the following algorithms: a Boltzmann transport equation based algorithm (Acuros XB, AXB), a convolution based algorithm, the Anisotropic Analytic Algorithm (AAA), and a Monte Carlo based algorithm (XVMC). Methods: A total of 10 patients with early stage non-small cell peripheral lung cancer were included. The initial clinical plans were generated using the XVMC based treatment planning system with a prescription of 54 Gy in 3 fractions following the RTOG 0613 protocol. The plans were recalculated with the same beam parameters and monitor units using the AAA and AXB algorithms. A calculation grid size of 2 mm was used for all algorithms. The dose distribution, conformity, and dosimetric parameters for the targets and organs at risk (OAR) were compared between the algorithms. Results: The average PTV volume was 19.6 mL (range 4.2–47.2 mL). The volume of the PTV covered by the prescribed dose (PTV-V100) was 93.97±2.00%, 95.07±2.07% and 95.10±2.97% for the XVMC, AXB and AAA algorithms, respectively. There was no significant difference in high dose conformity index; however, XVMC predicted slightly higher values (p=0.04) for the ratio of the 50% prescription isodose volume to the PTV (R50%). The percentage volume of total lung receiving doses >20 Gy (LungV20Gy) was 4.03±2.26%, 3.86±2.22% and 3.85±2.21% for the XVMC, AXB and AAA algorithms. Examination of dose volume histograms (DVH) revealed small differences in targets and OARs for most patients. However, the AAA algorithm was found to predict considerably higher PTV coverage compared with the AXB and XVMC algorithms in two cases. The dose difference was found to be located primarily in the periphery of the target. Conclusion: For clinical SBRT lung treatment planning, the dosimetric differences between the three commercially available algorithms are generally small except at the target periphery. XVMC
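
    As used above, the intermediate-dose spillage metric has the simple definition

        R_{50\%} = V_{50\%\,\mathrm{Rx}} / V_{\mathrm{PTV}}

    i.e., the volume enclosed by the 50% prescription isodose surface divided by the PTV volume, the conformity metric tabulated in RTOG SBRT protocols.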

  18. SU-E-T-356: Accuracy of Eclipse Electron Macro Monte Carlo Dose Algorithm for Use in Bolus Electron Conformal Therapy

    SciTech Connect

    Carver, R; Popple, R; Benhabib, S; Antolak, J; Sprunger, C; Hogstrom, K

    2014-06-01

    Purpose: To evaluate the accuracy of electron dose distributions calculated by the Varian Eclipse electron Monte Carlo (eMC) algorithm for use with recently commercially available bolus electron conformal therapy (ECT). Methods: eMC-calculated electron dose distributions for bolus ECT have been compared to those previously measured for cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV CT anatomy for each site. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The bolus ECT treatment plans were imported into the Eclipse treatment planning system and calculated using the maximum allowable histories (2×10⁹), resulting in a statistical error of <0.2%. Smoothing was not used for these calculations. Differences between eMC-calculated and measured dose distributions were evaluated in terms of absolute dose difference as well as distance to agreement (DTA). Results: Results from the eMC for the retromolar trigone phantom showed 89% (41/46) of dose points within 3% dose difference or 3 mm DTA. There was an average dose difference of −0.12% with a standard deviation of 2.56%. Results for the nose phantom showed 95% (54/57) of dose points within 3% dose difference or 3 mm DTA. There was an average dose difference of 1.12% with a standard deviation of 3.03%. Dose calculation times for the retromolar trigone and nose treatment plans were 15 min and 22 min, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a Varian Eclipse framework agent server (FAS). Results of this study were consistent with those previously reported for the accuracy of the eMC electron dose algorithm and for the .decimal, Inc. pencil beam redefinition algorithm used to plan the bolus. Conclusion: These results show that the accuracy of the Eclipse eMC algorithm is suitable for clinical implementation of bolus ECT.

  19. TH-E-BRE-11: Adaptive-Beamlet Based Finite Size Pencil Beam (AB-FSPB) Dose Calculation Algorithm for Independent Verification of IMRT and VMAT

    SciTech Connect

    Park, C; Arhjoul, L; Yan, G; Lu, B; Li, J; Liu, C

    2014-06-15

    Purpose: In current IMRT and VMAT settings, the use of a sophisticated dose calculation procedure is inevitable in order to account for the complex treatment fields created by MLCs. As a consequence, independent volumetric dose verification is time-consuming, which affects the efficiency of the clinical workflow. In this study, the authors present an efficient pencil-beam-based dose calculation algorithm that minimizes computation while preserving accuracy. Methods: The computational time of the Finite Size Pencil Beam (FSPB) algorithm is proportional to the number of infinitesimal, identical beamlets that constitute the arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modelled such that the beamlets used to represent an arbitrary field shape no longer need to be infinitesimal or identical. Consequently, it is possible to represent an arbitrary field shape with a combination of a minimal number of beamlets of different sizes. Results: Comparing FSPB with AB-FSPB, the complexity of the algorithm has been reduced significantly. For a 25 × 25 cm² square field, 1 beamlet of 25 × 25 cm² was sufficient to calculate dose in AB-FSPB, whereas in conventional FSPB a minimum of 2500 beamlets of 0.5 × 0.5 cm² size were needed to calculate a dose comparable to the result computed from the treatment planning system (TPS). The algorithm was also found to be GPU compatible, maximizing its computational speed. In calculating the 3D dose of an IMRT plan (∼30 control points) and a VMAT plan (∼90 control points) with a grid size of 2.0 mm (200 × 200 × 200), the dose could be computed within 3-5 and 10-15 seconds, respectively. Conclusion: The authors have developed an efficient pencil-beam-type dose calculation algorithm called AB-FSPB. Its fast computation along with GPU compatibility has shown performance better than conventional FSPB. This fully enables the implementation of AB-FSPB in the clinical environment for independent

  20. Feasibility of a fast inverse dose optimization algorithm for IMRT via matrix inversion without negative beamlet intensities

    SciTech Connect

    Goldman, S.P.; Chen, J.Z.; Battista, J.J.

    2005-09-15

    A fast optimization algorithm is very important for inverse planning of intensity modulated radiation therapy (IMRT), and for adaptive radiotherapy of the future. Conventional numerical search algorithms such as the conjugate gradient search, with positive beam weight constraints, generally require numerous iterations and may produce suboptimal dose results due to trapping in local minima. A direct solution of the inverse problem using conventional quadratic objective functions without positive beam constraints is more efficient but will result in unrealistic negative beam weights. We present here a direct solution of the inverse problem that does not yield unphysical negative beam weights. The objective function for the optimization of a large number of beamlets is reformulated such that the optimization problem is reduced to a linear set of equations. The optimal set of intensities is found through a matrix inversion, and negative beamlet intensities are avoided without the need for externally imposed ad-hoc constraints. The method has been demonstrated with a test phantom and a few clinical radiotherapy cases, using primary dose calculations. We achieve highly conformal primary dose distributions with very rapid optimization times. Typical optimization times for a single anatomical slice (two-dimensional, head and neck), using a LAPACK matrix inversion routine on a single-processor desktop computer, are: 0.03 s for 500 beamlets; 0.28 s for 1000 beamlets; 3.1 s for 2000 beamlets; and 12 s for 3000 beamlets. Clinical implementation will require the additional time of a one-time precomputation of scattered radiation for all beamlets, but will not impact the optimization speed. In conclusion, the new method provides a fast and robust technique to find a global minimum that yields excellent results for the inverse planning of IMRT.
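
    For contrast with the authors' reformulation, the generic direct solve that such methods build on can be written in a few lines: a regularized quadratic objective reduces to one linear system. Note that, unlike the paper's method, this naive version can return negative beamlet weights that would then need ad-hoc clipping (the dose matrix and prescription below are synthetic):

      import numpy as np

      # minimize ||D w - d||^2 + lam ||w||^2  ->  (D^T D + lam I) w = D^T d
      rng = np.random.default_rng(1)
      n_vox, n_beamlets = 200, 50
      D = rng.random((n_vox, n_beamlets))   # dose-deposition matrix (stand-in)
      d = np.full(n_vox, 2.0)               # prescribed voxel doses, Gy
      lam = 0.1
      w = np.linalg.solve(D.T @ D + lam * np.eye(n_beamlets), D.T @ d)
      print(w.min(), w.max())               # negative entries are possible here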

  1. Feasibility of a fast inverse dose optimization algorithm for IMRT via matrix inversion without negative beamlet intensities.

    PubMed

    Goldman, S P; Chen, J Z; Battista, J J

    2005-09-01

    A fast optimization algorithm is very important for inverse planning of intensity modulated radiation therapy (IMRT), and for adaptive radiotherapy of the future. Conventional numerical search algorithms such as the conjugate gradient search, with positive beam weight constraints, generally require numerous iterations and may produce suboptimal dose results due to trapping in local minima. A direct solution of the inverse problem using conventional quadratic objective functions without positive beam constraints is more efficient but will result in unrealistic negative beam weights. We present here a direct solution of the inverse problem that does not yield unphysical negative beam weights. The objective function for the optimization of a large number of beamlets is reformulated such that the optimization problem is reduced to a linear set of equations. The optimal set of intensities is found through a matrix inversion, and negative beamlet intensities are avoided without the need for externally imposed ad-hoc constraints. The method has been demonstrated with a test phantom and a few clinical radiotherapy cases, using primary dose calculations. We achieve highly conformal primary dose distributions with very rapid optimization times. Typical optimization times for a single anatomical slice (two-dimensional, head and neck), using a LAPACK matrix inversion routine on a single-processor desktop computer, are: 0.03 s for 500 beamlets; 0.28 s for 1000 beamlets; 3.1 s for 2000 beamlets; and 12 s for 3000 beamlets. Clinical implementation will require the additional time of a one-time precomputation of scattered radiation for all beamlets, but will not impact the optimization speed. In conclusion, the new method provides a fast and robust technique to find a global minimum that yields excellent results for the inverse planning of IMRT.

  2. Difference in dose-volumetric data between the analytical anisotropic algorithm, the dose-to-medium, and the dose-to-water reporting modes of the Acuros XB for lung stereotactic body radiation therapy.

    PubMed

    Mampuya, Wambaka A; Nakamura, Mitsuhiro; Hirose, Yoshinori; Ishigaki, Takashi; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-09-01

    The purpose of this study was to evaluate the difference in dose-volumetric data between the analytical anisotropic algorithm (AAA) and the two dose reporting modes of the Acuros XB, namely, the dose to water (AXB-Dw) and the dose to medium (AXB-Dm), in lung stereotactic body radiotherapy (SBRT). Thirty-eight plans were generated using the AXB-Dm in the Eclipse Treatment Planning System (TPS) and then recalculated with the AXB-Dw and AAA, using identical beam setups. A dose of 50 Gy in 4 fractions was prescribed to the isocenter and to the planning target volume (PTV) D95%. The isocenter was always inside the PTV. The following dose-volumetric parameters were evaluated: D2%, D50%, D95%, and D98% for the internal target volume (ITV) and the PTV. Two-tailed paired Student's t-tests determined the statistical significance. Although for most of the parameters evaluated the mean differences observed between the AAA, AXB-Dm and AXB-Dw were statistically significant (p<0.05), the absolute differences were rather small, in general less than 5% points. The maximum mean difference was observed in the ITV D50% between the AXB-Dm and the AAA, and was 1.7% points under the isocenter prescription and 3.3% points under the D95% prescription. AXB-Dm produced higher values than AXB-Dw, with differences ranging from 0.4 to 1.1% points under the isocenter prescription and 0.0 to 0.7% points under the PTV D95% prescription. The differences observed under the PTV D95% prescription were larger than those observed under the isocenter prescription between AXB-Dm and AAA, AXB-Dm and AXB-Dw, and AXB-Dw and AAA. Although statistically significant, the mean differences between the three algorithms are within 3.3% points. PACS number(s): 87.55.x, 87.55.D-, 87.55.dk.

  3. Potential of a Pharmacogenetic-Guided Algorithm to Predict Optimal Warfarin Dosing in a High-Risk Hispanic Patient

    PubMed Central

    Hernandez-Suarez, Dagmar F.; Claudio-Campos, Karla; Mirabal-Arroyo, Javier E.; Torres-Hernández, Bianca A.; López-Candales, Angel; Melin, Kyle; Duconge, Jorge

    2016-01-01

    Deep abdominal vein thrombosis is extremely rare among thrombotic events secondary to the use of contraceptives. A case to illustrate the clinical utility of ethno-specific pharmacogenetic testing in warfarin management of a Hispanic patient is reported. A 37-year-old Hispanic Puerto Rican, non-gravid female with past medical history of abnormal uterine bleeding on hormonal contraceptive therapy was evaluated for abdominal pain. Physical exam was remarkable for unspecific diffuse abdominal tenderness, and general initial laboratory results—including coagulation parameters—were unremarkable. A contrast-enhanced computed tomography showed a massive thrombosis of the main portal, splenic, and superior mesenteric veins. On admission the patient was started on oral anticoagulation therapy with warfarin at 5 mg/day and low-molecular-weight heparin. The prediction of an effective warfarin dose of 7.5 mg/day, estimated by using a recently developed pharmacogenetic-guided algorithm for Caribbean Hispanics, coincided with the actual patient’s warfarin dose to reach the international normalized ratio target. We speculate that the slow rise in patient’s international normalized ratio observed on the initiation of warfarin therapy, the resulting high risk for thromboembolic events, and the required warfarin dose of 7.5 mg/day are attributable in some part to the presence of the NQO1*2 (g.559C>T, p.P187S) polymorphism, which seems to be significantly associated with resistance to warfarin in Hispanics. By adding genotyping results of this novel variant, the predictive model can inform clinicians better about the optimal warfarin dose in Caribbean Hispanics. The results highlight the potential for pharmacogenetic testing of warfarin to improve patient care. PMID:28210634

  4. Calculation algorithm for determination of dose versus LET using recombination method

    NASA Astrophysics Data System (ADS)

    Dobrzyńska, Magdalena

    2015-09-01

    Biological effectiveness of any type of radiation can be related to the absorbed dose versus linear energy transfer (LET) associated with the particular radiation field. In complex radiation fields containing neutrons, especially in fields of high-energy particles or in stray radiation fields, the radiation quality factor can be determined using detectors whose response depends on LET. Recombination chambers, which are high-pressure, tissue-equivalent ionization chambers operating under conditions of initial recombination of ions, form a class of such detectors. The Recombination Microdosimetric Method (RMM) is based on analysis of the shape of the current-voltage characteristic (saturation curve) of a recombination chamber. The ion collection process in the chamber is described by a theoretical formula that contains a number of coefficients which depend on LET. The coefficients are calculated by fitting the shape of the theoretical curve to the experimental data. The purpose of the present project was to develop a program for determining the radiation quality factor, based on the calculation of dose distribution versus LET using the RMM.

  5. Quantitative assessment of the accuracy of dose calculation using pencil beam and Monte Carlo algorithms and requirements for clinical quality assurance

    SciTech Connect

    Ali, Imad; Ahmad, Salahuddin

    2013-10-01

    To compare the doses calculated using the BrainLAB pencil beam (PB) and Monte Carlo (MC) algorithms for tumors located in various sites including the lung and evaluate quality assurance procedures required for the verification of the accuracy of dose calculation. The dose-calculation accuracy of PB and MC was also assessed quantitatively with measurement using ionization chamber and Gafchromic films placed in solid water and heterogeneous phantoms. The dose was calculated using PB convolution and MC algorithms in the iPlan treatment planning system from BrainLAB. The dose calculation was performed on the patient's computed tomography images with lesions in various treatment sites including 5 lungs, 5 prostates, 4 brains, 2 head and necks, and 2 paraspinal tissues. A combination of conventional, conformal, and intensity-modulated radiation therapy plans was used in dose calculation. The leaf sequence from intensity-modulated radiation therapy plans or beam shapes from conformal plans and monitor units and other planning parameters calculated by the PB were identical for calculating dose with MC. Heterogeneity correction was considered in both PB and MC dose calculations. Dose-volume parameters such as V95 (volume covered by 95% of prescription dose), dose distributions, and gamma analysis were used to evaluate the calculated dose by PB and MC. The measured doses by ionization chamber and EBT GAFCHROMIC film in solid water and heterogeneous phantoms were used to quantitatively assess the accuracy of dose calculated by PB and MC. The dose-volume histograms and dose distributions calculated by PB and MC in the brain, prostate, paraspinal, and head and neck were in good agreement with one another (within 5%) and provided acceptable planning target volume coverage. However, dose distributions of the patients with lung cancer had large discrepancies. For a plan optimized with PB, the dose coverage was shown as clinically acceptable, whereas in reality, the MC showed a

  6. Quantitative assessment of the accuracy of dose calculation using pencil beam and Monte Carlo algorithms and requirements for clinical quality assurance.

    PubMed

    Ali, Imad; Ahmad, Salahuddin

    2013-01-01

    To compare the doses calculated using the BrainLAB pencil beam (PB) and Monte Carlo (MC) algorithms for tumors located in various sites including the lung and evaluate quality assurance procedures required for the verification of the accuracy of dose calculation. The dose-calculation accuracy of PB and MC was also assessed quantitatively with measurement using ionization chamber and Gafchromic films placed in solid water and heterogeneous phantoms. The dose was calculated using PB convolution and MC algorithms in the iPlan treatment planning system from BrainLAB. The dose calculation was performed on the patient's computed tomography images with lesions in various treatment sites including 5 lungs, 5 prostates, 4 brains, 2 head and necks, and 2 paraspinal tissues. A combination of conventional, conformal, and intensity-modulated radiation therapy plans was used in dose calculation. The leaf sequence from intensity-modulated radiation therapy plans or beam shapes from conformal plans and monitor units and other planning parameters calculated by the PB were identical for calculating dose with MC. Heterogeneity correction was considered in both PB and MC dose calculations. Dose-volume parameters such as V95 (volume covered by 95% of prescription dose), dose distributions, and gamma analysis were used to evaluate the calculated dose by PB and MC. The measured doses by ionization chamber and EBT GAFCHROMIC film in solid water and heterogeneous phantoms were used to quantitatively assess the accuracy of dose calculated by PB and MC. The dose-volume histograms and dose distributions calculated by PB and MC in the brain, prostate, paraspinal, and head and neck were in good agreement with one another (within 5%) and provided acceptable planning target volume coverage. However, dose distributions of the patients with lung cancer had large discrepancies. For a plan optimized with PB, the dose coverage was shown as clinically acceptable, whereas in reality, the MC showed a

  7. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    SciTech Connect

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-15

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. Since the current gold standard, phase-correlated Feldkamp reconstruction (PCF), performs poorly for low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For the lower dose levels that were simulated for the real mouse data sets, the

  8. SU-E-T-339: Dosimetric Verification of Acuros XB Dose Calculation Algorithm On An Air Cavity for 6-MV Flattening Filter-Free Beam

    SciTech Connect

    Kang, S; Suh, T; Chung, J

    2015-06-15

    Purpose: To verify the accuracy of the Acuros XB (AXB) dose calculation algorithm in an air cavity for a single radiation field using a 6-MV flattening filter-free (FFF) beam. Methods: A rectangular slab phantom containing an air cavity was made for this study. CT images of the phantom for dose calculation were scanned with and without film at the measurement depths (4.5, 5.5, 6.5 and 7.5 cm). The central axis doses (CADs) and the off-axis doses (OADs) were measured by film and calculated with the Analytical Anisotropic Algorithm (AAA) and AXB for field sizes ranging from 2 × 2 to 5 × 5 cm² of 6-MV FFF beams. The calculations were labeled AXB-w and AAA-w when the film was included in the phantom, and AXB-w/o and AAA-w/o when it was not. The calculated OADs for both algorithms were compared with the measured OADs, and difference values were determined using the root mean square error (RMSE) and gamma evaluation. Results: The percentage differences (%Diffs) between the measured and calculated CADs showed the best agreement for AXB-w. Comparing the %Diffs with and without film, the %Diffs with film were smaller for both algorithms. The %Diffs for both algorithms decreased with increasing field size and increased with depth. The RMSEs of the CAD for AXB-w were within 10.32% for both the inner profile and the penumbra, while the corresponding values for AAA-w reached 96.50%. Conclusion: This study demonstrated that dose calculation with AXB within an air cavity is more accurate than with AAA when compared to the measured dose. Furthermore, we found that AXB-w was superior to AXB-w/o in this region when compared against the measurements.

  9. Accuracy of one algorithm used to modify a planned DVH with data from actual dose delivery.

    PubMed

    Ma, Tianjun; Podgorsak, Matthew B; Kumaraswamy, Lalith K

    2016-09-08

    Detection and accurate quantification of treatment delivery errors is important in radiation therapy. This study aims to evaluate the accuracy of DVH-based QA in quantifying delivery errors. Eighteen previously treated VMAT plans (prostate, H&N, and brain) were randomly chosen for this study. Conventional IMRT delivery QA was done with the ArcCHECK diode detector for error-free plans and for plans with the following modifications: 1) induced monitor unit differences of up to ±3.0%, 2) control point deletion (3, 5, and 8 control points were deleted for each arc), and 3) gantry angle shift (2° uniform shift clockwise and counterclockwise). 2D and 3D distance-to-agreement (DTA) analyses were performed for all plans with the SNC Patient software and the 3DVH software, respectively. Subsequently, the accuracy of the reconstructed DVH curves and DVH parameters in the 3DVH software was analyzed for all selected cases using the plans in the Eclipse treatment planning system as the standard. 3D DTA analysis for error-induced plans generally gave high pass rates, whereas the 2D evaluation seemed to be more sensitive in detecting delivery errors. The average differences in DVH parameters between each pair of Eclipse recalculation and 3DVH prediction were within 2% for all three types of error-induced treatment plans. This illustrates that 3DVH accurately quantifies delivery errors in terms of the actual dose delivered to the patients. 2D DTA analysis should be routinely used for clinical evaluation. Any concerns or dose discrepancies should be further analyzed through DVH-based QA for clinically relevant results and confirmation of conventional passing-rate-based QA.

  10. Accuracy of one algorithm used to modify a planned DVH with data from actual dose delivery.

    PubMed

    Ma, Tianjun; Podgorsak, Matthew B; Kumaraswamy, Lalith

    2016-09-01

    Detection and accurate quantification of treatment delivery errors is important in radiation therapy. This study aims to evaluate the accuracy of DVH-based QA in quantifying delivery errors. Eighteen previously treated VMAT plans (prostate, H&N, and brain) were randomly chosen for this study. Conventional IMRT delivery QA was done with the ArcCHECK diode detector for error-free plans and for plans with the following modifications: 1) induced monitor unit differences of up to ±3.0%, 2) control point deletion (3, 5, and 8 control points were deleted for each arc), and 3) gantry angle shift (2° uniform shift clockwise and counterclockwise). 2D and 3D distance-to-agreement (DTA) analyses were performed for all plans with the SNC Patient software and the 3DVH software, respectively. Subsequently, the accuracy of the reconstructed DVH curves and DVH parameters in the 3DVH software was analyzed for all selected cases using the plans in the Eclipse treatment planning system as the standard. 3D DTA analysis for error-induced plans generally gave high pass rates, whereas the 2D evaluation seemed to be more sensitive in detecting delivery errors. The average differences in DVH parameters between each pair of Eclipse recalculation and 3DVH prediction were within 2% for all three types of error-induced treatment plans. This illustrates that 3DVH accurately quantifies delivery errors in terms of the actual dose delivered to the patients. 2D DTA analysis should be routinely used for clinical evaluation. Any concerns or dose discrepancies should be further analyzed through DVH-based QA for clinically relevant results and confirmation of conventional passing-rate-based QA. PACS number(s): 87.56.Fc, 87.55.Qr, 87.55.dk, 87.55.km.

  11. High-density dental implants and radiotherapy planning: evaluation of effects on dose distribution using pencil beam convolution algorithm and Monte Carlo method.

    PubMed

    Çatli, Serap

    2015-09-01

    The high atomic number and density of dental implants lead to major problems in providing an accurate dose distribution in radiotherapy, and in contouring tumors and organs, because of the artifacts they cause in head and neck cases. The limits and deficiencies of the algorithms used in treatment planning systems can lead to large errors in dose calculation, and this may adversely affect the patient's treatment. In the present study, four commercial dental implants were used: pure titanium, titanium alloy (Ti-6Al-4V), amalgam, and crown. The effects of dental implants on dose distribution were determined with two methods: the pencil beam convolution (PBC) algorithm and a Monte Carlo code for a 6 MV photon beam. The central axis depth doses were calculated on the phantom for a source-skin distance (SSD) of 100 cm and a 10×10 cm² field using both algorithms. The results of the Monte Carlo method and the Eclipse TPS were compared to each other and to those previously reported. In the present study, dose increases in tissue at a distance of 2 mm in front of the dental implants were seen due to the backscatter of electrons for dental implants at 6 MV using the Monte Carlo method. The Eclipse treatment planning system (TPS) could not precisely account for the backscatter radiation caused by the dental prostheses. The TPS underestimated the backscatter dose and overestimated the dose beyond the dental implants. The large errors found for the TPS in this study are due to the limits and deficiencies of the algorithms. The accuracy of the PBC algorithm of the Eclipse TPS was evaluated in comparison to Monte Carlo calculations in consideration of the recommendations of the American Association of Physicists in Medicine Radiation Therapy Committee Task Group 65. From the comparisons of the TPS and Monte Carlo calculations, it is verified that Monte Carlo simulation is a good approach to derive the dose distribution in heterogeneous media. PACS numbers: 87.55.K.

  12. High-density dental implants and radiotherapy planning: evaluation of effects on dose distribution using pencil beam convolution algorithm and Monte Carlo method.

    PubMed

    Çatli, Serap

    2015-09-08

    The high atomic number and density of dental implants lead to major problems in providing an accurate dose distribution in radiotherapy, and in contouring tumors and organs, because of the artifacts they cause in head and neck cases. The limits and deficiencies of the algorithms used in treatment planning systems can lead to large errors in dose calculation, and this may adversely affect the patient's treatment. In the present study, four commercial dental implants were used: pure titanium, titanium alloy (Ti-6Al-4V), amalgam, and crown. The effects of dental implants on dose distribution were determined with two methods: the pencil beam convolution (PBC) algorithm and a Monte Carlo code for a 6 MV photon beam. The central axis depth doses were calculated on the phantom for a source-skin distance (SSD) of 100 cm and a 10 × 10 cm² field using both algorithms. The results of the Monte Carlo method and the Eclipse TPS were compared to each other and to those previously reported. In the present study, dose increases in tissue at a distance of 2 mm in front of the dental implants were seen due to the backscatter of electrons for dental implants at 6 MV using the Monte Carlo method. The Eclipse treatment planning system (TPS) could not precisely account for the backscatter radiation caused by the dental prostheses. The TPS underestimated the backscatter dose and overestimated the dose beyond the dental implants. The large errors found for the TPS in this study are due to the limits and deficiencies of the algorithms. The accuracy of the PBC algorithm of the Eclipse TPS was evaluated in comparison to Monte Carlo calculations in consideration of the recommendations of the American Association of Physicists in Medicine Radiation Therapy Committee Task Group 65. From the comparisons of the TPS and Monte Carlo calculations, it is verified that Monte Carlo simulation is a good approach to derive the dose distribution in heterogeneous media.

  13. A New Drug Combinatory Effect Prediction Algorithm on the Cancer Cell Based on Gene Expression and Dose-Response Curve.

    PubMed

    Goswami, C Pankaj; Cheng, L; Alexander, P S; Singal, A; Li, L

    2015-02-01

    Gene expression data before and after treatment with an individual drug and the IC20 of dose-response data were utilized to predict two drugs' interaction effects on a diffuse large B-cell lymphoma (DLBCL) cancer cell. A novel drug interaction scoring algorithm was developed to account for either synergistic or antagonistic effects between drug combinations. Different core gene selection schemes were investigated, which included the whole gene set, the drug-sensitive gene set, the drug-sensitive minus drug-resistant gene set, and the known drug target gene set. The prediction scores were compared with the observed drug interaction data at 6, 12, and 24 hours with a probability concordance (PC) index. The test results show that the concordance between the observed and predicted drug interaction rankings reaches a PC index of 0.605. The scoring reliability and efficiency were further confirmed in five drug interaction studies published in the GEO database.

  14. Comparison of dosimetric and radiobiological parameters on plans for prostate stereotactic body radiotherapy using an endorectal balloon for different dose-calculation algorithms and delivery-beam modes

    NASA Astrophysics Data System (ADS)

    Kang, Sang-Won; Suh, Tae-Suk; Chung, Jin-Beom; Eom, Keun-Yong; Song, Changhoon; Kim, In-Ah; Kim, Jae-Sung; Lee, Jeong-Woo; Cho, Woong

    2017-02-01

    The purpose of this study was to evaluate the impact of dosimetric and radiobiological parameters on treatment plans generated using different dose-calculation algorithms and delivery-beam modes for prostate stereotactic body radiation therapy using an endorectal balloon. For 20 patients with prostate cancer, stereotactic body radiation therapy (SBRT) plans were generated using a 10-MV photon beam in flattening filter (FF) and flattening-filter-free (FFF) modes. The total treatment dose prescribed was 42.7 Gy in 7 fractions to cover at least 95% of the planning target volume (PTV) with 95% of the prescribed dose. The dose computation was initially performed using the anisotropic analytical algorithm (AAA) in the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) and was then recalculated using Acuros XB (AXB, V. 11.0.34) with the same monitor units and multileaf collimator files. The dosimetric and radiobiological parameters for the PTV and organs at risk (OARs) were analyzed from the dose-volume histogram. An obvious difference in dosimetric parameters between the AAA and the AXB plans was observed in the PTV and rectum. Doses to the PTV, excluding the maximum dose, were always higher in the AAA plans than in the AXB plans. However, doses to the other OARs were similar for both algorithms. In addition, no difference was observed in the dosimetric parameters for different delivery-beam modes when using the same algorithm to generate plans. Consistent with the dosimetric parameters, the radiobiological parameters for the two algorithms' plans showed an apparent difference in the PTV and the rectum. The average tumor control probability of the AAA plans was higher than that of the AXB plans. The average normal tissue complication probability (NTCP) for the rectum was lower in the AXB plans than in the AAA plans. The AAA and the AXB plans yielded very similar NTCPs for the other OARs. In plans using the same algorithms, the NTCPs for delivery
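
    The rectal NTCP values compared here come from a radiobiological model applied to the DVH. The abstract does not state which model was used; a sketch with the widely used Lyman-Kutcher-Burman (LKB) model and illustrative rectal parameters:

      import numpy as np
      from scipy.stats import norm

      # LKB NTCP from a differential DVH: gEUD -> probit. The parameters are
      # illustrative rectal values, not those used in the study.
      def lkb_ntcp(doses_gy, vol_fracs, td50=76.9, m=0.13, n=0.09):
          geud = np.sum(vol_fracs * doses_gy ** (1.0 / n)) ** n
          return norm.cdf((geud - td50) / (m * td50))

      doses = np.array([20.0, 40.0, 60.0, 70.0])   # dose bins, Gy
      vols = np.array([0.4, 0.3, 0.2, 0.1])        # fractional volumes (sum to 1)
      print(round(lkb_ntcp(doses, vols), 4))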

  15. Radiation dose reduction using a neck detection algorithm for single spiral brain and cervical spine CT acquisition in the trauma setting.

    PubMed

    Ardley, Nicholas D; Lau, Ken K; Buchan, Kevin

    2013-12-01

    Cervical spine injuries occur in 4-8% of adults with head trauma. A dual acquisition technique has traditionally been used for CT scanning of the brain and cervical spine. The purpose of this study was to determine the efficacy of radiation dose reduction by using a single acquisition technique that incorporated both anatomical regions with a dedicated neck detection algorithm. Thirty trauma patients referred for brain and cervical spine CT were included and were scanned with the single acquisition technique. The radiation doses from the single CT acquisition technique with the neck detection algorithm, which allowed appropriate independent dose administration for the brain and cervical spine regions, were recorded. Comparison was made both to the doses calculated from a simulation of the traditional dual acquisitions with matching parameters, and to the doses of the retrospective dual acquisition legacy technique with the same sample size. The mean simulated dose for the traditional dual acquisition technique was 3.99 mSv, comparable to the average dose of 4.2 mSv from 30 previous patients who had CT of the brain and cervical spine as dual acquisitions. The mean dose from the single acquisition technique was 3.35 mSv, resulting in a 16% overall dose reduction. The images from the single acquisition technique were of excellent diagnostic quality. The new single acquisition CT technique incorporating the neck detection algorithm for the brain and cervical spine significantly reduces the overall radiation dose by eliminating the unavoidable overlap between the two anatomical regions that occurs with the traditional dual acquisition technique.

  16. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions.

    PubMed

    Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F

    2016-10-07

    Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By means of convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. In order to reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
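
    The forward step of this approach (depth-dose profile to prompt gamma-ray profile via a Gaussian-powerlaw filter kernel) is straightforward to prototype; the evolutionary algorithm then inverts it. A sketch with an entirely illustrative depth-dose curve and kernel parameters:

      import numpy as np

      z = np.linspace(0.0, 20.0, 400)   # depth, cm
      dose = np.exp(-(z - 12.0) ** 2 / 0.5) + 0.3 * (z < 12.0)  # crude Bragg curve

      dz = z[1] - z[0]
      k = np.arange(-50, 51) * dz
      gauss = np.exp(-k ** 2 / (2 * 0.4 ** 2))
      powerlaw = 1.0 / (1.0 + np.abs(k)) ** 1.5
      kernel = np.convolve(gauss, powerlaw, mode="same")
      kernel /= kernel.sum()

      # Convolving the depth-dose profile with the combined kernel yields a
      # stand-in prompt gamma-ray depth profile.
      prompt_gamma = np.convolve(dose, kernel, mode="same")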

  17. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions

    NASA Astrophysics Data System (ADS)

    Schumann, A.; Priegnitz, M.; Schoene, S.; Enghardt, W.; Rohling, H.; Fiedler, F.

    2016-10-01

    Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By means of convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. In order to reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.

  18. Percentage depth dose calculation accuracy of model based algorithms in high energy photon small fields through heterogeneous media and comparison with plastic scintillator dosimetry.

    PubMed

    Alagar, Ananda Giri Babu; Kadirampatti Mani, Ganesh; Karunakaran, Kaviarasu

    2016-01-08

    Small fields (smaller than 4 × 4 cm²) are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation in both small fields and heterogeneous media is prone to larger discrepancies, the algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report aims at evaluating the accuracy of four model-based algorithms: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS XiO, and Acuros XB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse, tested against measurement. Measurements were made using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons with square field sizes ranging from 1 × 1 to 4 × 4 cm². Each heterogeneity was introduced individually at two different depths from the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup was measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the deviation of the whole CADD curve from the measurement, was calculated. It was found that for air and lung heterogeneities, for both 6 and 15 MV, all algorithms show maximum deviation for the 1 × 1 cm² field size, and the deviation gradually decreases as field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller for 15 MV irrespective of setup. In all heterogeneity setups, the 1 × 1 cm² field showed maximum deviation, except in the 6 MV bone setup. For all algorithms, irrespective of energy and field size, the dose deviation is higher when a heterogeneity lies nearer to Dmax than when the same heterogeneity lies farther from it. Also, all algorithms show maximum deviation in lower-density materials compared to high-density materials.
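
    The curve-level figure of merit used above takes a couple of lines; a sketch, assuming normalization to the maximum of the measured curve (the abstract does not state the normalization convention):

      import numpy as np

      # Percentage normalized RMS deviation between a TPS-calculated and a
      # measured central axis depth-dose curve.
      def nrmsd_percent(calc, meas):
          return 100.0 * np.sqrt(np.mean((calc - meas) ** 2)) / meas.max()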

  19. Application and experience of a two-dosimeter algorithm for better estimation of effective dose during maintenance periods at Korea nuclear power plants.

    PubMed

    Kim, Hee Geun; Kong, Tae Young

    2009-01-01

    The application of a two-dosimeter approach and its algorithm, and a test of its use in an inhomogeneous high radiation field, are described. The goal was to develop an improved method for estimating the effective dose during maintenance periods at Korean nuclear power plants (NPPs). The application was evaluated using two-dosimeter results for each algorithm, obtained in inhomogeneous high radiation fields during maintenance periods at Korean NPPs.

  20. TU-G-204-09: The Effects of Reduced- Dose Lung Cancer Screening CT On Lung Nodule Detection Using a CAD Algorithm

    SciTech Connect

    Young, S; Lo, P; Kim, G; Hsu, W; Hoffman, J; Brown, M; McNitt-Gray, M

    2015-06-15

    Purpose: While lung cancer screening CT is already performed at low doses, the purpose of this study was to investigate the effects of further dose reduction on the performance of a CAD nodule-detection algorithm. Methods: We selected 50 cases from our local database of National Lung Screening Trial (NLST) patients for which we had both the image series and the raw CT data from the original scans. All scans were acquired with fixed mAs (25 for standard-sized patients, 40 for large patients) on a 64-slice scanner (Sensation 64, Siemens Healthcare). All images were reconstructed with 1-mm slice thickness, B50 kernel. Ten of the cases had at least one nodule reported on the NLST reader forms. Based on a previously published technique, we added noise to the raw data to simulate reduced-dose versions of each case at 50% and 25% of the original NLST dose (i.e., approximately 1.0 and 0.5 mGy CTDIvol). For each case at each dose level, the CAD detection algorithm was run and nodules greater than 4 mm in diameter were reported. These CAD results were compared to “truth”, defined as the approximate nodule centroids from the NLST reports. Subject-level mean sensitivities and false-positive rates were calculated for each dose level. Results: The mean sensitivities of the CAD algorithm were 35% at the original dose, 20% at 50% dose, and 42.5% at 25% dose. The false-positive rates, in decreasing-dose order, were 3.7, 2.9, and 10 per case. In certain cases, particularly in larger patients, there were severe photon-starvation artifacts, especially in the apical region due to the highly attenuating shoulders. Conclusion: The detection task was challenging for the CAD algorithm at all dose levels, including the original NLST dose. However, the false-positive rate at 25% dose approximately tripled, suggesting a loss of CAD robustness somewhere between 0.5 and 1.0 mGy. NCI grant U01 CA181156 (Quantitative Imaging Network); Tobacco Related Disease Research Project grant 22RT-0131.

  1. A very fast iterative algorithm for TV-regularized image reconstruction with applications to low-dose and few-view CT

    NASA Astrophysics Data System (ADS)

    Kudo, Hiroyuki; Yamazaki, Fukashi; Nemoto, Takuya; Takaki, Keita

    2016-10-01

    This paper concerns iterative reconstruction for low-dose and few-view CT by minimizing a data-fidelity term regularized with the total variation (TV) penalty. We propose a very fast iterative algorithm to solve this problem. The algorithm derivation is outlined as follows. First, the original minimization problem is reformulated into a saddle point (primal-dual) problem using Lagrangian duality, to which we apply a first-order primal-dual iterative method. Second, we precondition the iteration formula using the ramp filter of the filtered backprojection (FBP) reconstruction algorithm in such a way that the problem solution is not altered. The resulting algorithm resembles the structure of the so-called iterative FBP algorithm, and it converges to the exact minimizer of the cost function very quickly.
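
    The abstract does not spell out the cost function; a standard form of the TV-regularized reconstruction problem it describes is

        \min_{x \ge 0} \; \tfrac{1}{2}\,\| A x - b \|_2^2 + \lambda\,\mathrm{TV}(x)

    where A is the projection (system) matrix, b the measured sinogram, and λ the regularization weight. The Lagrangian-duality step then rewrites this as a saddle-point problem over x and a dual variable for the TV term, to which first-order primal-dual updates apply.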

  2. SU-E-T-520: Four-Dimensional Dose Calculation Algorithm Considering Variations in Dose Distribution Induced by Sinusoidal One-Dimensional Motion Patterns

    SciTech Connect

    Taguenang, J; Algan, O; Ahmad, S; Ali, I

    2014-06-01

    Purpose: To quantitatively investigate, by measurement and modeling, the variations in dose distributions induced by motion. A four-dimensional (4D) motion model of dose distributions that accounts for different motion parameters was developed. Methods: Variations in dose distributions induced by sinusoidal phantom motion were measured using a multiple-diode-array detector (MapCheck2). MapCheck2 was mounted on a mobile platform that moves with adjustable, calibrated motion patterns in the superior-inferior direction. Various plans, including open and intensity-modulated fields, were used to irradiate MapCheck2. A motion model was developed to predict the spatial and temporal variations in the dose distributions and their dependence on the motion parameters using a pencil-beam spread-out superposition function; the model superposes pencil beams weighted by a probability function extracted from the motion trajectory. The model was verified against dose distributions measured with MapCheck2. Results: Dose distributions varied considerably with motion: in the region between the isocenter and the 50% isodose line, dose decreased as the motion amplitude increased, whereas beyond the 50% isodose line dose increased with motion amplitude. When the range of motion (ROM, twice the amplitude) was smaller than the field length, neither the central-axis dose nor the 50% isodose line changed with motion amplitude, remaining equal to the stationary-phantom values. As the ROM became larger than the field length, the dose level decreased at the central axis and at the 50% isodose line. Motion frequency and phase did not affect dose distributions delivered over times longer than a few motion cycles; however, they played an important role for doses delivered at high dose rates within one motion cycle. Conclusion: A 4D dose motion model was developed to predict and correct variations in dose distributions induced by one-dimensional sinusoidal motion.
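
    A minimal sketch of the superposition idea described above: the static dose profile is averaged over one cycle of sinusoidal motion, which is equivalent to weighting shifted profiles by the motion probability function. The toy profile, grid spacing, and amplitude are assumptions, not the study's measured data.

```python
import numpy as np

def motion_blurred_dose(static_dose, dx_mm, amplitude_mm, n_phases=360):
    """Average a 1-D static dose profile over one sinusoidal motion cycle.

    Equivalent to convolving with the time-weighted position probability
    of sinusoidal motion. np.roll wraps at the array ends, so the profile
    is assumed to be zero well inside the boundaries.
    """
    blurred = np.zeros_like(static_dose, dtype=float)
    for phase in np.linspace(0.0, 2.0 * np.pi, n_phases, endpoint=False):
        shift = int(round(amplitude_mm * np.sin(phase) / dx_mm))
        blurred += np.roll(static_dose, shift)
    return blurred / n_phases

# Toy open-field profile on a 1 mm grid: 100 mm field on a 200 mm axis
dose = np.zeros(201); dose[50:151] = 1.0
blurred = motion_blurred_dose(dose, dx_mm=1.0, amplitude_mm=10.0)  # ROM = 20 mm
```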

  3. SU-E-T-626: Accuracy of Dose Calculation Algorithms in MultiPlan Treatment Planning System in Presence of Heterogeneities

    SciTech Connect

    Moignier, C; Huet, C; Barraux, V; Loiseau, C; Sebe-Mercier, K; Batalla, A; Makovicka, L

    2014-06-15

    Purpose: Advanced stereotactic radiotherapy (SRT) treatments require accurate dose calculation for treatment planning, especially for treatment sites involving heterogeneous patient anatomy. The purpose of this study was to evaluate the accuracy of the dose calculation algorithms implemented in the MultiPlan treatment planning system (TPS), Raytracing and Monte Carlo (MC), in the presence of heterogeneities. Methods: First, the LINAC of a CyberKnife radiotherapy facility was modeled with the PENELOPE MC code. A protocol for the measurement of dose distributions with EBT3 films was established and validated through comparison of experimental dose distributions against distributions calculated with the MultiPlan Raytracing and MC algorithms, as well as with the PENELOPE MC model, for treatments planned on the homogeneous Easycube phantom. Finally, bone and lung inserts were used to set up a heterogeneous Easycube phantom. Treatment plans with the 10, 7.5, or 5 mm field sizes were generated in the MultiPlan TPS for different tumor localizations (in the lung and at the lung/bone/soft-tissue interface). Experimental dose distributions were compared to the PENELOPE MC and MultiPlan calculations using the gamma index method. Results: For the experiment in the homogeneous phantom, 100% of the points passed the 3%/3 mm tolerance criteria. These criteria include the global error of the method (CT-scan resolution, EBT3 dosimetry, LINAC positioning, etc.) and were used afterwards to estimate the accuracy of the MultiPlan algorithms in heterogeneous media. Comparison of the dose distributions obtained in the heterogeneous phantom is in progress. Conclusion: This work has led to the development of numerical and experimental dosimetric tools for small-beam dosimetry. The Raytracing and MC algorithms implemented in the MultiPlan TPS were evaluated in heterogeneous media.
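
    The gamma index method used for these comparisons can be sketched in a few lines. This hypothetical 1-D implementation uses global dose normalization and a brute-force search; production tools typically interpolate and operate in 2-D or 3-D.

```python
import numpy as np

def gamma_1d(dose_eval, dose_ref, dx_mm, dose_tol=0.03, dist_mm=3.0):
    """Global 1-D gamma index (e.g. 3%/3 mm).

    dose_tol is relative to the reference maximum. Returns gamma per
    reference point; a point passes when gamma <= 1.
    """
    x = np.arange(len(dose_ref)) * dx_mm
    dd = dose_tol * dose_ref.max()
    gammas = np.empty(len(dose_ref))
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dist_mm) ** 2
        dose2 = ((dose_eval - di) / dd) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# Passing rate for a toy pair of profiles (2% scaled copy)
ref = np.exp(-np.linspace(-3, 3, 121) ** 2)
g = gamma_1d(1.02 * ref, ref, dx_mm=1.0)
print(f"pass rate: {100 * np.mean(g <= 1.0):.1f}%")
```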

  4. SU-E-T-397: Evaluation of Planned Dose Distributions by Monte Carlo (0.5%) and Ray Tracing Algorithm for the Spinal Tumors with CyberKnife

    SciTech Connect

    Cho, H; Brindle, J; Hepel, J

    2015-06-15

    Purpose: To analyze and evaluate dose distributions calculated with the Ray Tracing (RT) and Monte Carlo (MC, 0.5% uncertainty) algorithms for the spinal cord as a critical structure, the gross target volume, and the planning target volume. Methods: Twenty-four spinal tumor patients were treated with stereotactic body radiotherapy (SBRT) by CyberKnife in 2013 and 2014. The MC algorithm with 0.5% uncertainty was used to recalculate the dose distributions of the patients' treatment plans using the same beams, beam directions, and monitor units (MUs). Results: The prescription doses were uniformly larger for the MC plans than for the RT plans except in one case. Dose differences of up to a factor of 1.19 for the 0.25 cc threshold volume and 1.14 for the 1.2 cc threshold volume were observed for the spinal cord. Conclusion: The MC-recalculated dose distributions were larger than the original RT calculations for these spinal tumor cases. Based on the accuracy of the MC calculations, more radiation dose might be delivered to the tumor targets and spinal cord with the increased prescription dose.

  5. Genotype-guided versus standard vitamin K antagonist dosing algorithms in patients initiating anticoagulation. A systematic review and meta-analysis.

    PubMed

    Belley-Cote, Emilie P; Hanif, Hasib; D'Aragon, Frederick; Eikelboom, John W; Anderson, Jeffrey L; Borgman, Mark; Jonas, Daniel E; Kimmel, Stephen E; Manolopoulos, Vangelis G; Baranova, Ekaterina; Maitland-van der Zee, Anke H; Pirmohamed, Munir; Whitlock, Richard P

    2015-10-01

    Variability in vitamin K antagonist (VKA) dosing is partially explained by genetic polymorphisms. We performed a meta-analysis to determine whether genotype-guided VKA dosing algorithms decrease a composite of death, thromboembolic events and major bleeding (primary outcome) and improve time in therapeutic range (TTR). We searched MEDLINE, EMBASE, CENTRAL, trial registries and conference proceedings for randomised trials comparing genotype-guided and standard (non-genotype-guided) VKA dosing algorithms in adults initiating anticoagulation. Data were pooled using a random effects model. Of the 12 included studies (3,217 patients), six reported all components of the primary outcome of mortality, thromboembolic events and major bleeding (2,223 patients, 87 events). Our meta-analysis found no significant difference between groups for the primary outcome (relative risk 0.85, 95% confidence interval [CI] 0.54-1.34; heterogeneity χ²=4.46, p=0.35, I²=10%). Based on 10 studies (2,767 patients), TTR was significantly higher in the genotype-guided group (mean difference [MD] 4.31%; 95% CI 0.35, 8.26; heterogeneity χ²=43.31, p<0.001, I²=79%). Pre-specified exploratory analyses demonstrated that TTR was significantly higher when genotype-guided dosing was compared with fixed VKA dosing (6 trials, 997 patients: MD 8.41%; 95% CI 3.50, 13.31; heterogeneity χ²=15.18, p=0.01, I²=67%) but not when compared with clinical algorithm-guided dosing (4 trials, 1,770 patients: MD -0.29%; 95% CI -2.48, 1.90; heterogeneity χ²=1.53, p=0.68, I²=0%; p for interaction=0.002). In conclusion, genotype-guided VKA dosing algorithms were not found to decrease a composite of death, thromboembolism and major bleeding compared with standard algorithms, but they did improve TTR. The TTR improvement was observed in comparison with fixed VKA dosing algorithms, but not with clinical algorithms.
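
    For context, a minimal sketch of the random-effects pooling (DerSimonian-Laird) used in such meta-analyses; the effect sizes in the usage example are hypothetical, not the trial data summarized above.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling of per-study effects (DerSimonian-Laird).

    `effects` are per-study estimates (e.g. mean TTR differences),
    `variances` their squared standard errors. Returns the pooled
    estimate, a 95% CI, and the I^2 heterogeneity statistic.
    """
    y = np.asarray(effects, float); v = np.asarray(variances, float)
    w = 1.0 / v                                  # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = (max(0.0, (q - df) / q) * 100.0) if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical TTR mean differences (%) and variances from 4 trials
md, ci, i2 = dersimonian_laird([8.0, 2.5, 5.5, 1.0], [4.0, 2.2, 6.5, 3.0])
print(f"MD {md:.2f}% (95% CI {ci[0]:.2f} to {ci[1]:.2f}); I^2 = {i2:.0f}%")
```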

  6. Meta-analysis of randomized controlled trials reveals an improved clinical outcome of using genotype plus clinical algorithm for warfarin dosing.

    PubMed

    Liao, Zhenqi; Feng, Shaoguang; Ling, Peng; Zhang, Guoqing

    2015-02-01

    Previous studies have raised interest in using CYP2C9 and VKORC1 genotyping to guide warfarin dosing. However, there is a lack of solid evidence that a genotype-plus-clinical algorithm provides better clinical outcomes than a clinical algorithm alone, and the results of recently reported clinical trials are contradictory and need to be systematically evaluated. In this study, we assessed whether a genotype-plus-clinical warfarin algorithm is superior to a clinical algorithm alone through a meta-analysis of randomized controlled trials (RCTs). All relevant studies from PubMed and reference lists from Jan 1, 1995 to Jan 13, 2014 were extracted and screened. Eligible studies were randomized trials that compared clinical-plus-pharmacogenetic algorithms to clinical algorithms alone in adult (≥18 years) patients with conditions requiring warfarin use. We used fixed-effect models to calculate mean differences or risk ratios (RRs) with 95% CIs from the extracted data. Statistical heterogeneity was calculated using I². The percentage of time within the therapeutic INR range was the primary clinical outcome. The initial search strategy identified 50 citations, of which 7 trials were eligible. These seven trials included 1,910 participants: 960 patients who received genotype-plus-clinical warfarin dosing and 950 who received clinical dosing only. We found that the percentage of time within the therapeutic INR range was improved in the genotype-guided group compared with the standard group in the RCTs with a fixed initial standard dose (95% CI 0.09-0.40; I² = 47.8%). However, in the studies using non-fixed initial doses, the genotype-guided group failed to show a statistically significant improvement over the standard group. No significant difference was observed in the incidence of adverse events (RR 0.94, 95% CI 0.84-1.04; I² = 0%, p

  7. Incorporating an Exercise Detection, Grading, and Hormone Dosing Algorithm Into the Artificial Pancreas Using Accelerometry and Heart Rate.

    PubMed

    Jacobs, Peter G; Resalat, Navid; El Youssef, Joseph; Reddy, Ravi; Branigan, Deborah; Preiser, Nicholas; Condon, John; Castle, Jessica

    2015-10-05

    In this article, we present several important contributions necessary for enabling an artificial endocrine pancreas (AP) system to better respond to exercise events. First, we show how exercise can be automatically detected using body-worn accelerometer and heart rate sensors. During a 22-hour overnight inpatient study, 13 subjects with type 1 diabetes wearing a Zephyr accelerometer and heart rate monitor underwent 45 minutes of mild aerobic treadmill exercise while controlling their glucose levels using sensor-augmented pump therapy. We used the accelerometer and heart rate as inputs into a validated regression model. Using this model, we were able to detect the exercise event with a sensitivity of 97.2% and a specificity of 99.5%. Second, from this same study, we show how patients' glucose declined during the exercise event and we present results from in silico modeling that demonstrate how including an exercise model in the glucoregulatory model improves the estimation of the drop in glucose during exercise. Last, we present an exercise dosing adjustment algorithm and describe parameter tuning and performance using an in silico glucoregulatory model during an exercise event.

  8. Incorporating an Exercise Detection, Grading, and Hormone Dosing Algorithm Into the Artificial Pancreas Using Accelerometry and Heart Rate

    PubMed Central

    Jacobs, Peter G.; Resalat, Navid; El Youssef, Joseph; Reddy, Ravi; Branigan, Deborah; Preiser, Nicholas; Condon, John; Castle, Jessica

    2015-01-01

    In this article, we present several important contributions necessary for enabling an artificial endocrine pancreas (AP) system to better respond to exercise events. First, we show how exercise can be automatically detected using body-worn accelerometer and heart rate sensors. During a 22-hour overnight inpatient study, 13 subjects with type 1 diabetes wearing a Zephyr accelerometer and heart rate monitor underwent 45 minutes of mild aerobic treadmill exercise while controlling their glucose levels using sensor-augmented pump therapy. We used the accelerometer and heart rate as inputs into a validated regression model. Using this model, we were able to detect the exercise event with a sensitivity of 97.2% and a specificity of 99.5%. Second, from this same study, we show how patients’ glucose declined during the exercise event and we present results from in silico modeling that demonstrate how including an exercise model in the glucoregulatory model improves the estimation of the drop in glucose during exercise. Last, we present an exercise dosing adjustment algorithm and describe parameter tuning and performance using an in silico glucoregulatory model during an exercise event. PMID:26438720
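
    A toy sketch of threshold-based exercise detection from accelerometry and heart rate. The logistic form, feature scaling, weights, and threshold below are all illustrative assumptions, not the validated regression model used in the study.

```python
import numpy as np

def detect_exercise(accel_counts, heart_rate, hr_rest,
                    w_accel=0.8, w_hr=0.6, bias=-4.0, threshold=0.5):
    """Toy logistic-regression exercise detector (weights are made up).

    Combines an activity-count feature with fractional heart-rate
    elevation over rest and thresholds the resulting probability.
    """
    hr_elev = (heart_rate - hr_rest) / hr_rest        # fractional HR elevation
    z = bias + w_accel * accel_counts + w_hr * 10.0 * hr_elev
    p = 1.0 / (1.0 + np.exp(-z))                      # probability of exercise
    return p >= threshold, p

# One minute of simulated rest vs. treadmill walking
print(detect_exercise(accel_counts=0.5, heart_rate=72, hr_rest=70))
print(detect_exercise(accel_counts=6.0, heart_rate=110, hr_rest=70))
```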

  9. SU-E-I-05: A Correction Algorithm for Kilovoltage Cone-Beam Computed Tomography Dose Calculations in Cervical Cancer Patients

    SciTech Connect

    Zhang, J; Zhang, W; Lu, J

    2015-06-15

    Purpose: To investigate the accuracy and feasibility of dose calculation on kilovoltage cone-beam computed tomography images in cervical cancer radiotherapy using a correction algorithm. Methods: The Hounsfield unit (HU) to electron density (HU-density) curve was obtained for both the planning CT (pCT) and the kilovoltage cone-beam CT (CBCT) using a CIRS-062 calibration phantom. The pCT and kV-CBCT images have different HU values, so directly using the CBCT HU-density curve for dose calculation on CBCT images may introduce deviations in the dose distribution; it is therefore necessary to normalize the HU values between pCT and CBCT. A HU correction algorithm was applied to the CBCT images (cCBCT). Fifteen intensity-modulated radiation therapy (IMRT) plans for cervical cancer were chosen, and the plans were transferred to the pCT and cCBCT data sets without any changes for dose calculation. Phantom and patient studies were carried out. Dose differences and dose distributions were compared between the cCBCT and pCT plans. Results: The HU numbers of the CBCT were measured several times, and the maximum change was less than 2%. Compared with pCT, both CBCT and cCBCT showed discrepancies: the dose differences on the CBCT and cCBCT images were 2.48%±0.65% (range: 1.3%∼3.8%) and 0.48%±0.21% (range: 0.1%∼0.82%) in the phantom study, respectively. For dose calculation on patient images, the dose differences were 2.25%±0.43% (range: 1.4%∼3.4%) and 0.63%±0.35% (range: 0.13%∼0.97%), respectively. For the dose distributions, the passing rate of cCBCT was higher than that of CBCT. Conclusion: Dose calculation on CBCT images is feasible for cervical cancer radiotherapy, and the correction algorithm offers acceptable accuracy. It will become a useful tool for adaptive radiation therapy.
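
    A minimal sketch of the kind of HU normalization described: map CBCT HU onto the pCT HU scale with a piecewise-linear calibration built from paired phantom-insert readings, so the pCT HU-density curve can be reused. The insert values below are hypothetical placeholders, not measured calibration data.

```python
import numpy as np

# Hypothetical paired readings of the same calibration inserts in the
# planning CT (pCT) and in CBCT, sorted by CBCT HU.
HU_CBCT = np.array([-950.0, -480.0, -60.0, 0.0, 250.0, 900.0])
HU_PCT  = np.array([-1000.0, -520.0, -50.0, 0.0, 220.0, 800.0])

def correct_cbct_hu(cbct_image):
    """Map CBCT HU onto the pCT HU scale (a sketch of the 'cCBCT'
    correction), after which the pCT HU-density curve applies."""
    return np.interp(cbct_image, HU_CBCT, HU_PCT)

ccbct = correct_cbct_hu(np.array([[-900.0, -100.0], [30.0, 500.0]]))
```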

  10. Dosimetric verification and clinical evaluation of a new commercially available Monte Carlo-based dose algorithm for application in stereotactic body radiation therapy (SBRT) treatment planning

    NASA Astrophysics Data System (ADS)

    Fragoso, Margarida; Wen, Ning; Kumar, Sanath; Liu, Dezhi; Ryu, Samuel; Movsas, Benjamin; Munther, Ajlouni; Chetty, Indrin J.

    2010-08-01

    Modern cancer treatment techniques, such as intensity-modulated radiation therapy (IMRT) and stereotactic body radiation therapy (SBRT), have greatly increased the demand for more accurate treatment planning (structure definition, dose calculation, etc.) and dose delivery. The ability to use fast and accurate Monte Carlo (MC)-based dose calculations within a commercial treatment planning system (TPS) in the clinical setting is now becoming more of a reality. This study describes the dosimetric verification and initial clinical evaluation of a new commercial MC-based photon beam dose calculation algorithm within the iPlan v.4.1 TPS (BrainLAB AG, Feldkirchen, Germany). Experimental verification of the MC photon beam model was performed with film and ionization chambers in water phantoms and in heterogeneous solid-water slabs containing bone- and lung-equivalent materials for a 6 MV photon beam from a Novalis (BrainLAB) linear accelerator (linac) with a micro-multileaf collimator (m3 MLC). The agreement between calculated and measured dose distributions in the water phantom verification tests was, on average, within 2%/1 mm (high dose/high gradient) and within ±4%/2 mm in the heterogeneous slab geometries. Example treatment plans in the lung show significant differences between the MC and one-dimensional pencil beam (PB) algorithms within iPlan, especially for small lesions, where electronic disequilibrium effects are emphasized. Other user-specific features of the iPlan system, such as options to select dose to water or dose to medium, and the mean variance level, were also investigated. Timing results for typical lung treatment plans show the total computation time (including processing and I/O) to be less than 10 min for 1-2% mean variance (running on a single PC with 8 Intel Xeon X5355 CPUs, 2.66 GHz). Overall, the iPlan MC algorithm is demonstrated to be an accurate and efficient dose algorithm, incorporating robust tools for MC

  11. SU-E-J-109: Evaluation of Deformable Accumulated Parotid Doses Using Different Registration Algorithms in Adaptive Head and Neck Radiotherapy

    SciTech Connect

    Xu, S; Liu, B

    2015-06-15

    Purpose: Three deformable image registration (DIR) algorithms were used to perform deformable dose accumulation for head-and-neck tomotherapy treatments, and the differences in the accumulated doses were evaluated. Methods: Daily MVCT data for 10 patients with pathologically proven nasopharyngeal cancers were analyzed. The data were acquired using tomotherapy (TomoTherapy, Accuray) at the PLA General Hospital. The prescription dose to the primary target was 70 Gy in 33 fractions. Three DIR methods (B-spline, Diffeomorphic Demons and MIMvista) were used to propagate parotid structures from the planning CTs to the daily CTs and to accumulate the fractionated dose on the planning CTs. The mean accumulated parotid doses were quantitatively compared, and the uncertainties of the propagated parotid contours were evaluated using the Dice similarity index (DSI). Results: The planned mean dose of the ipsilateral parotids (32.42±3.13 Gy) was slightly higher than that of the contralateral parotids (31.38±3.19 Gy) in the 10 patients. The differences between the accumulated mean doses of the ipsilateral parotids with the B-spline, Demons and MIMvista algorithms (36.40±5.78 Gy, 34.08±6.72 Gy and 33.72±2.63 Gy) were statistically significant (B-spline vs Demons, p<0.0001; B-spline vs MIMvista, p=0.002). The corresponding differences for the contralateral parotids (34.08±4.82 Gy, 32.42±4.80 Gy and 33.92±4.65 Gy) were significant for B-spline vs Demons (p=0.009) but not for B-spline vs MIMvista (p=0.074). For the DSI analysis, the scores of the B-spline, Demons and MIMvista DIRs were 0.90, 0.89 and 0.76. Conclusion: Shrinkage of the parotid volumes results in increased dose to the parotid glands in adaptive head-and-neck radiotherapy. The accumulated parotid doses differ significantly between DIR algorithms applied between kVCT and MVCT. Therefore, a volume-based criterion (i.e. DSI) as a quantitative evaluation of
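
    The Dice similarity index (DSI) used above to score the propagated contours is straightforward to compute; a minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity index of two binary masks:
    DSI = 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two toy parotid masks overlapping by half
m1 = np.zeros((10, 10), bool); m1[2:6, 2:6] = True
m2 = np.zeros((10, 10), bool); m2[4:8, 2:6] = True
print(dice(m1, m2))  # 0.5
```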

  12. Inverse determination of the penalty parameter in penalized weighted least-squares algorithm for noise reduction of low-dose CBCT

    SciTech Connect

    Wang, Jing; Guan, Huaiqun; Solberg, Timothy

    2011-07-15

    Purpose: A statistical projection restoration algorithm based on the penalized weighted least-squares (PWLS) criterion can substantially improve the image quality of low-dose CBCT images. The performance of PWLS is largely dependent on the choice of the penalty parameter. Previously, the penalty parameter was chosen empirically by trial and error. In this work, the authors developed an inverse technique to calculate the penalty parameter in PWLS for noise suppression of low-dose CBCT in image guided radiotherapy (IGRT). Methods: In IGRT, a daily CBCT is acquired for the same patient during a treatment course. In this work, the authors acquired the CBCT with a high-mAs protocol for the first session and then a lower-mAs protocol for the subsequent sessions. The high-mAs projections served as the goal (ideal) toward which the low-mAs projections were to be smoothed by minimizing the PWLS objective function. The penalty parameter was determined through an inverse calculation of the derivative of the objective function incorporating both the high- and low-mAs projections. The parameter obtained can then be used in PWLS to smooth the noise in low-dose projections. CBCT projections for a CatPhan 600 and an anthropomorphic head phantom, as well as for a brain patient, were used to evaluate the performance of the proposed technique. Results: The penalty parameter in PWLS was obtained for each CBCT projection using the proposed strategy. The noise in the low-dose CBCT images reconstructed from the smoothed projections was greatly suppressed. Image quality in PWLS-processed low-dose CBCT was comparable to that of the corresponding high-dose CBCT. Conclusions: A technique was proposed to estimate the penalty parameter for the PWLS algorithm. It provides an objective and efficient way to obtain the penalty parameter for image restoration algorithms that require predefined smoothing parameters.
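
    A sketch of the PWLS idea itself: a noise-weighted data-fidelity term plus a roughness penalty scaled by the penalty parameter beta, minimized here by plain gradient descent on a 1-D projection. The inverse estimation of beta from paired high/low-mAs projections is the paper's contribution and is not reproduced here.

```python
import numpy as np

def pwls_smooth(proj, variance, beta, n_iter=100, step=0.2):
    """Gradient descent on the 1-D PWLS objective
    (y - x)' W (y - x) + beta * sum_i (x_{i+1} - x_i)^2,
    with W = diag(1/variance). `step` must be small relative to the
    weights and beta; it is fixed here for brevity.
    """
    x = proj.astype(float).copy()
    w = 1.0 / variance
    for _ in range(n_iter):
        grad_fid = 2.0 * w * (x - proj)      # data-fidelity gradient
        d = np.diff(x)
        grad_pen = np.zeros_like(x)          # roughness-penalty gradient
        grad_pen[:-1] -= 2.0 * d
        grad_pen[1:] += 2.0 * d
        x -= step * (grad_fid + beta * grad_pen)
    return x

# Smooth a noisy toy projection with unit variance everywhere
noisy = np.sin(np.linspace(0, np.pi, 100)) + 0.1 * np.random.randn(100)
smoothed = pwls_smooth(noisy, variance=np.ones(100), beta=5.0)
```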

  13. PIVET rFSH dosing algorithms for individualized controlled ovarian stimulation enables optimized pregnancy productivity rates and avoidance of ovarian hyperstimulation syndrome.

    PubMed

    Yovich, John L; Alsbjerg, Birgit; Conceicao, Jason L; Hinchliffe, Peter M; Keane, Kevin N

    2016-01-01

    The first PIVET algorithm for individualized recombinant follicle stimulating hormone (rFSH) dosing in in vitro fertilization, reported in 2012, was based on age and antral follicle count grading with adjustments for anti-Müllerian hormone level, body mass index, day-2 FSH, and smoking history. In 2007, it was enabled by the introduction of a metered rFSH pen allowing small dosage increments of ~8.3 IU per click. In 2011, a second rFSH pen was introduced allowing more precise dosages of 12.5 IU per click, and both pens with their individual algorithms have been applied continuously at our clinic. The objective of this observational study was to validate the PIVET algorithms pertaining to the two rFSH pens with the aim of collecting ≤15 oocytes and minimizing the risk of ovarian hyperstimulation syndrome. The data set included 2,822 in vitro fertilization stimulations over a 6-year period until April 2014 applying either of the two individualized dosing algorithms and corresponding pens. The main outcome measures were the mean number of oocytes retrieved and the resultant embryos designated for transfer or cryopreservation, which permitted calculation of oocyte and embryo utilization rates. Ensuing pregnancies were tracked until live births, and live birth productivity rates embracing fresh and frozen transfers were calculated. Overall, the results showed that the mean oocyte number was 10.0 for all women <40 years, with 24% requiring rFSH dosages <150 IU. Applying both specific algorithms in our clinic meant that the starting dose was not altered for 79.1% of patients and for 30.1% of those receiving the very lowest rFSH dosages (≤75 IU). Only 0.3% of patients were diagnosed with severe ovarian hyperstimulation syndrome, all deemed avoidable due to definable breaches from the protocols. The live birth productivity rate exceeded 50% for women <35 years and was 33.2% for the group aged 35-39 years. Routine use of both algorithms led to only 11.6% of women generating >15 oocytes

  14. PIVET rFSH dosing algorithms for individualized controlled ovarian stimulation enables optimized pregnancy productivity rates and avoidance of ovarian hyperstimulation syndrome

    PubMed Central

    Yovich, John L; Alsbjerg, Birgit; Conceicao, Jason L; Hinchliffe, Peter M; Keane, Kevin N

    2016-01-01

    The first PIVET algorithm for individualized recombinant follicle stimulating hormone (rFSH) dosing in in vitro fertilization, reported in 2012, was based on age and antral follicle count grading with adjustments for anti-Müllerian hormone level, body mass index, day-2 FSH, and smoking history. In 2007, it was enabled by the introduction of a metered rFSH pen allowing small dosage increments of ~8.3 IU per click. In 2011, a second rFSH pen was introduced allowing more precise dosages of 12.5 IU per click, and both pens with their individual algorithms have been applied continuously at our clinic. The objective of this observational study was to validate the PIVET algorithms pertaining to the two rFSH pens with the aim of collecting ≤15 oocytes and minimizing the risk of ovarian hyperstimulation syndrome. The data set included 2,822 in vitro fertilization stimulations over a 6-year period until April 2014 applying either of the two individualized dosing algorithms and corresponding pens. The main outcome measures were the mean number of oocytes retrieved and the resultant embryos designated for transfer or cryopreservation, which permitted calculation of oocyte and embryo utilization rates. Ensuing pregnancies were tracked until live births, and live birth productivity rates embracing fresh and frozen transfers were calculated. Overall, the results showed that the mean oocyte number was 10.0 for all women <40 years, with 24% requiring rFSH dosages <150 IU. Applying both specific algorithms in our clinic meant that the starting dose was not altered for 79.1% of patients and for 30.1% of those receiving the very lowest rFSH dosages (≤75 IU). Only 0.3% of patients were diagnosed with severe ovarian hyperstimulation syndrome, all deemed avoidable due to definable breaches from the protocols. The live birth productivity rate exceeded 50% for women <35 years and was 33.2% for the group aged 35–39 years. Routine use of both algorithms led to only 11.6% of women generating >15 oocytes

  15. SU-C-207-05: A Comparative Study of Noise-Reduction Algorithms for Low-Dose Cone-Beam Computed Tomography

    SciTech Connect

    Mukherjee, S; Yao, W

    2015-06-15

    Purpose: To study different noise-reduction algorithms and to improve the image quality of low-dose cone-beam CT for patient positioning in radiation therapy. Methods: In low-dose cone-beam CT, the reconstructed image is contaminated with excessive quantum noise. In this study, three well-developed noise-reduction algorithms, namely (a) the penalized weighted least-squares (PWLS) method, (b) the split-Bregman total variation (TV) method, and (c) the compressed sensing (CS) method, were studied and applied to images of a computer-simulated "Shepp-Logan" phantom and a physical CATPHAN phantom. Up to 20% additive Gaussian noise was added to the Shepp-Logan phantom. The CATPHAN phantom was scanned by a Varian OBI system with 100 kVp, 4 ms and 20 mA. To compare the performance of these algorithms, the peak signal-to-noise ratio (PSNR) of the denoised images was computed. Results: The algorithms showed potential to reduce the noise level in low-dose CBCT images. For the Shepp-Logan phantom, PSNR improvements of 2 dB, 3.1 dB and 4 dB were observed using PWLS, TV and CS, respectively; for the CATPHAN phantom, the improvements were 1.2 dB, 1.8 dB and 2.1 dB, respectively. Conclusion: The penalized weighted least-squares, total variation and compressed sensing methods were studied and compared for reducing the noise in a simulated phantom and a physical phantom scanned with low-dose CBCT. The techniques have shown promising results for noise reduction in terms of PSNR improvement. However, reducing the noise without compromising the smoothness and resolution of the image needs more extensive research.
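
    PSNR, the figure of merit used above, in a few lines; the reported improvements are differences of two such values (denoised vs. noisy, each against the ground-truth phantom):

```python
import numpy as np

def psnr(image, reference):
    """Peak signal-to-noise ratio (dB) of `image` against `reference`."""
    img = np.asarray(image, float)
    ref = np.asarray(reference, float)
    mse = np.mean((img - ref) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(ref.max() ** 2 / mse)

# improvement = psnr(denoised, phantom) - psnr(noisy, phantom)
```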

  16. Assessment of a 2D electronic portal imaging devices-based dosimetry algorithm for pretreatment and in-vivo midplane dose verification

    PubMed Central

    Jomehzadeh, Ali; Shokrani, Parvaneh; Mohammadi, Mohammad; Amouheidari, Alireza

    2016-01-01

    Background: The use of electronic portal imaging devices (EPIDs) is a method for the dosimetric verification of radiotherapy plans, both pretreatment and in vivo. The aim of this study was to test a 2D EPID-based dosimetry algorithm for dose verification of plans inside homogeneous and anthropomorphic phantoms, as well as in vivo. Materials and Methods: Dose distributions were reconstructed from EPID images using a 2D EPID dosimetry algorithm inside a homogeneous slab phantom for a simple 10 × 10 cm² box technique, inside an anthropomorphic (Alderson) phantom for 3D conformal (prostate, head-and-neck, and lung) and intensity-modulated radiation therapy (IMRT) prostate plans, and in patients (one fraction in vivo) for 3D conformal plans (prostate, head-and-neck and lung). Results: The planned and EPID dose difference at the isocenter was, on average, 1.7% for pretreatment verification and less than 3% for all in vivo plans, except for head-and-neck, which was 3.6%. The mean γ values for a seven-field prostate IMRT plan delivered to the Alderson phantom varied from 0.28 to 0.65. For the 3D conformal plans applied to the Alderson phantom, all γ1% values were within the tolerance level for all plans and in both anteroposterior and posteroanterior (AP-PA) beams. Conclusion: The 2D EPID-based dosimetry algorithm provides an accurate method to verify the dose of a simple 10 × 10 cm² field, in two dimensions, inside a homogeneous slab phantom and for an IMRT prostate plan, as well as for 3D conformal plans (prostate, head-and-neck, and lung) applied to an anthropomorphic phantom and in vivo. However, further investigation to improve the 2D EPID dosimetry algorithm for the head-and-neck case is necessary. PMID:28028511

  17. Study of 201 Non-Small Cell Lung Cancer Patients Given Stereotactic Ablative Radiation Therapy Shows Local Control Dependence on Dose Calculation Algorithm

    SciTech Connect

    Latifi, Kujtim; Oliver, Jasmine; Baker, Ryan; Dilling, Thomas J.; Stevens, Craig W.; Kim, Jongphil; Yue, Binglin; DeMarco, MaryLou; Zhang, Geoffrey G.; Moros, Eduardo G.; Feygelman, Vladimir

    2014-04-01

    Purpose: Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been previously directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Methods and Materials: Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Results: Twenty-five patients planned with the PB algorithm and 4 patients planned with the CCC algorithm to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99(GITV) = 7.4 Gy, ΔD99(PTV) = 10.4 Gy, ΔV90(GITV) = 13.7%, ΔV90(PTV) = 37.6%, ΔD95(PTV) = 9.8 Gy, and ΔD(ISO) = 3.4 Gy, where GITV = gross internal tumor volume. Conclusions: Local control rates in patients planned to the same nominal dose with the PB and CCC algorithms were statistically significantly different. Possible alternative

  18. Selection of the most appropriate two-dosemeter algorithm for estimating effective dose equivalent during maintenance periods in Korean nuclear power plants.

    PubMed

    Kim, Hee Geun; Kong, Tae Young

    2010-07-01

    The application of a two-dosemeter system with its algorithm, as well as a test of its use in an inhomogeneous high-radiation field, is described in this study. The goal was to improve the method for estimating the effective dose equivalent during maintenance periods at Korean Nuclear Power Plants (NPPs). The use of this method in Korean and international NPPs, including NPPs in the USA and Canada, was also investigated. The algorithms of the American National Standards Institute, Lakshmanan, the National Council on Radiation Protection and Measurements (NCRP), the Electric Power Research Institute and Kim were extensively analysed as two-dosemeter algorithms. Their possible application to NPPs was evaluated using data for each algorithm from two-dosemeter results obtained in an inhomogeneous high-radiation field during maintenance periods at Korean NPPs. The NCRP algorithm (55:50) was selected as the optimal two-dosemeter algorithm for Korean NPPs, taking into account the field test results and the convenience of wearing two dosemeters.
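
    Reading the "55:50" label in the abstract as front and back weighting factors, the selected NCRP two-dosemeter algorithm reduces to a one-line weighted sum; a sketch under that assumption:

```python
def effective_dose_equivalent_ncrp(h_front_msv, h_back_msv):
    """NCRP two-dosemeter algorithm as labeled '55:50' in the abstract:
    a 0.55/0.50 weighted sum of the front- and back-dosemeter readings.
    The weights here are inferred from that label, not from the report."""
    return 0.55 * h_front_msv + 0.50 * h_back_msv

print(effective_dose_equivalent_ncrp(2.0, 1.0))  # 1.6 mSv
```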

  19. Bladder dose accumulation based on a biomechanical deformable image registration algorithm in volumetric modulated arc therapy for prostate cancer

    NASA Astrophysics Data System (ADS)

    Andersen, E. S.; Muren, L. P.; Sørensen, T. S.; Noe, K. Ø.; Thor, M.; Petersen, J. B.; Høyer, M.; Bentzen, L.; Tanderup, K.

    2012-11-01

    Variations in bladder position, shape and volume cause uncertainties in the doses delivered to this organ during a course of radiotherapy for pelvic tumors. The purpose of this study was to evaluate the potential of dose accumulation based on repeat imaging and deformable image registration (DIR) to improve the accuracy of bladder dose assessment. For each of nine prostate cancer patients, the initial treatment plan was re-calculated on eight to nine repeat computed tomography (CT) scans. The planned bladder dose-volume histogram (DVH) parameters were compared to corresponding parameters derived from DIR-based accumulations as well as DVH summation based on dose re-calculations. It was found that the deviations between the DIR-based accumulations and the planned treatment were substantial, ranging from -0.5 to 2.3 Gy for D2% and from -9.4 to 13.5 Gy for Dmean, whereas the deviations between DIR-based accumulations and DVH summation were small and well within 1 Gy. For the investigated treatment scenario, DIR-based bladder dose accumulation did not result in substantial improvement of dose estimation as compared to the straightforward DVH summation. Large variations were found in individual patients between the doses from the initial treatment plan and the accumulated bladder doses. Hence, the use of repeat imaging has a potential for improved accuracy in treatment dose reporting.
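
    The DVH parameters compared above reduce to simple statistics over the voxel doses of a distribution; a minimal sketch:

```python
import numpy as np

def d_percent(dose_voxels, percent):
    """Dx%: the minimum dose received by the hottest x% of the organ
    volume, e.g. D2% is the 98th percentile of the voxel doses."""
    return np.percentile(dose_voxels, 100.0 - percent)

doses = np.random.gamma(shape=8.0, scale=5.0, size=10_000)  # toy voxel doses (Gy)
print(d_percent(doses, 2))   # D2%
print(doses.mean())          # Dmean
```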

  20. Validation of calculation algorithms for organ doses in CT by measurements on a 5 year old paediatric phantom

    NASA Astrophysics Data System (ADS)

    Dabin, Jérémie; Mencarelli, Alessandra; McMillan, Dayton; Romanyukha, Anna; Struelens, Lara; Lee, Choonsik

    2016-06-01

    Many organ dose calculation tools for computed tomography (CT) scans rely on two assumptions: (1) organ doses estimated for one CT scanner can be converted into organ doses for another CT scanner using the ratio of the Computed Tomography Dose Index (CTDI) between the two scanners; and (2) helical scans can be approximated as the summation of axial slices covering the same scan range. The current study aims to experimentally validate these two assumptions. We performed organ dose measurements in a 5 year-old physical anthropomorphic phantom for five different CT scanners from four manufacturers. Absorbed doses to 22 organs were measured using thermoluminescent dosimeters for head-to-torso scans. We then compared the measured organ doses with the values calculated by the National Cancer Institute dosimetry system for CT (NCICT) computer program, developed at the National Cancer Institute. Whereas the measured organ doses showed significant variability (coefficient of variation (CoV) up to 53% at 80 kV) across scanner models, the CoV of organ doses normalised to CTDIvol decreased substantially (12% CoV on average at 80 kV). For most organs, the difference between measured and simulated organ doses was within ±20%, except for the bone marrow, breasts and ovaries. The discrepancies were further explained by additional Monte Carlo calculations of organ doses using a voxel phantom developed from CT images of the physical phantom. The results demonstrate that organ doses calculated for one CT scanner can be used to assess organ doses from other CT scanners with 20% uncertainty (k = 1), for the scan settings considered in the study.
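
    Assumption (1), which the study validates, is a one-line scaling rule; as a sketch:

```python
def convert_organ_dose(dose_scanner_a_mgy, ctdi_a_mgy, ctdi_b_mgy):
    """Scale an organ dose estimated for scanner A to scanner B by the
    ratio of CTDIvol values; per the study, this holds to roughly 20%
    (k = 1) for the scan settings considered."""
    return dose_scanner_a_mgy * (ctdi_b_mgy / ctdi_a_mgy)

# e.g. 4.0 mGy liver dose on scanner A (CTDIvol 6.0) mapped to scanner B (CTDIvol 7.5)
print(convert_organ_dose(4.0, 6.0, 7.5))  # 5.0 mGy
```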

  1. SU-E-T-203: Comparison of a Commercial MRI-Linear Accelerator Based Monte Carlo Dose Calculation Algorithm and Geant4

    SciTech Connect

    Ahmad, S; Sarfehnia, A; Paudel, M; Sahgal, A; Keller, B; Hissoiny, S

    2015-06-15

    Purpose: An MRI-linear accelerator is currently being developed by the vendor Elekta™. The treatment planning system that will be used to model dose for this unit uses a Monte Carlo dose calculation algorithm, GPUMCD, that allows a magnetic field to be applied. We tested this radiation transport code against the independent Monte Carlo toolkit Geant4 (v.4.10.01), both with and without the magnetic field applied. Methods: The setup comprised a 6 MeV mono-energetic photon beam emerging from a point source and impinging on a homogeneous water phantom at 100 cm SSD. The comparisons were drawn from percentage depth doses (PDDs) for three field sizes (1.5 × 1.5 cm², 5 × 5 cm², 10 × 10 cm²) and dose profiles at various depths. A 1.5 T magnetic field was applied perpendicular to the beam direction. The transport thresholds were kept the same for both codes. Results: All normalized PDDs and profiles agreed within ±1%. In the presence of the magnetic field, the PDDs rise more quickly, reducing the depth of maximum dose. Near the beam exit point in the phantom, a hot spot is created due to the electron return effect; this effect is more pronounced for the larger field sizes. Profiles parallel to the external field show no effect, whereas profiles perpendicular to the applied magnetic field are shifted in the direction of the Lorentz force exerted on the secondary electrons. These profiles are not symmetric, which indicates a lateral build-up of dose. Conclusion: There is good general agreement between the PDDs and profiles calculated by the two algorithms thus far. We are proceeding toward clinically relevant comparisons in a heterogeneous phantom with polyenergetic beams. Funding for this work has been provided by Elekta.

  2. Electron dose distributions caused by the contact-type metallic eye shield: Studies using Monte Carlo and pencil beam algorithms

    SciTech Connect

    Kang, Sei-Kwon; Yoon, Jai-Woong; Hwang, Taejin; Park, Soah; Cheong, Kwang-Ho; Han, Tae Jin; Kim, Haeyoung; Lee, Me-Yeon; Kim, Kyoung Ju; Bae, Hoonsik

    2015-10-01

    A metallic contact eye shield has sometimes been used for eyelid treatment, but its dose distribution has never been reported for a patient case. This study aimed to show the shield-incorporated, CT-based dose distribution using the Pinnacle system and Monte Carlo (MC) calculation for 3 patient cases. For an artifact-free CT scan, an acrylic shield machined to the same size as the tungsten shield was used. For the MC calculation, BEAMnrc and DOSXYZnrc were used for the 6-MeV electron beam of the Varian 21EX, with material information for the tungsten, stainless steel, and aluminum components of the eye shield. The same plan was generated on the Pinnacle system and the two were compared. The use of the acrylic shield produced clear CT images, enabling delineation of the regions of interest, and allowed CT-based dose calculation for the metallic shield. Both the MC and Pinnacle systems showed a similar dose distribution downstream of the eye shield, reflecting the blocking effect of the metallic eye shield. The major difference between the MC and Pinnacle results was the target eyelid dose upstream of the shield: the Pinnacle system underestimated the dose by 19 to 28% for the maximum dose and by 11 to 18% for the mean dose. The pattern of dose difference between the MC and Pinnacle systems was similar to that in a previous phantom study. In conclusion, the metallic eye shield was successfully incorporated into CT-based planning, and accurate dose calculation requires MC simulation.

  3. SU-E-I-06: A Dose Calculation Algorithm for KV Diagnostic Imaging Beams by Empirical Modeling

    SciTech Connect

    Chacko, M; Aldoohan, S; Sonnad, J; Ahmad, S; Ali, I

    2015-06-15

    Purpose: To develop an accurate three-dimensional (3D) empirical dose calculation model for kV diagnostic beams for different radiographic and CT imaging techniques. Methods: Dose was modeled using photon attenuation measured with depth-dose (DD) curves, scatter radiation from the source and medium, and off-axis ratio (OAR) profiles. Measurements were performed using a single diode in water and a diode-array detector (MapCHECK2) with the kV on-board imagers (OBI) integrated with Varian TrueBeam and Trilogy linacs. The dose parameters were measured for three energies (80, 100, and 125 kVp), with and without bowtie filters, using field sizes of 1×1 to 40×40 cm² and depths of 0-20 cm in a water tank. Results: The measured DD decreased with depth in water because of photon attenuation, while it increased with field size due to increased scatter radiation from the medium. DD curves varied with energy and filtration, increasing with higher energies and with beam hardening from the half-fan and full-fan bowtie filters. Scatter radiation factors increased with field size and higher energies. The OAR was within 3% for beam profiles in the flat dose regions. The heel effect of this kV OBI system was within 6% of the central-axis value at different depths. The bowtie filters attenuated the measured off-axis dose by as much as 80% at the edges of large beams. The model dose predictions were verified against doses measured with a single-point diode, an ionization chamber, or two-dimensional diode-array detectors inserted in solid-water phantoms. Conclusion: This empirical model enables fast and accurate 3D dose calculation in water, within 5% in regions near charged-particle equilibrium outside the buildup region and penumbra. It accurately accounts for the scatter radiation contribution in water, which is superior to the air-kerma or CTDI measurements usually used for dose calculation of diagnostic imaging beams. Considering heterogeneity corrections in this model will enable patient specific dose
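
    A minimal sketch of the factor-based empirical model described: a reference dose scaled by interpolated depth-dose, field-size scatter, and off-axis factors. The table values below are illustrative placeholders, not the measured data from the study.

```python
import numpy as np

# Hypothetical measured tables for one technique (e.g. 100 kVp, no bowtie):
DEPTHS_CM = np.array([0.0, 2.0, 5.0, 10.0, 20.0])
DD = np.array([1.00, 0.62, 0.34, 0.15, 0.04])   # relative depth dose
FIELDS_CM = np.array([1.0, 5.0, 10.0, 20.0, 40.0])
SF = np.array([0.85, 0.95, 1.00, 1.08, 1.15])   # relative scatter factor

def kv_dose(dose_ref_cgy, depth_cm, field_cm, oar=1.0):
    """Empirical dose sketch: reference dose times interpolated
    depth-dose and field-size scatter factors and an off-axis ratio."""
    dd = np.interp(depth_cm, DEPTHS_CM, DD)
    sf = np.interp(field_cm, FIELDS_CM, SF)
    return dose_ref_cgy * dd * sf * oar

print(kv_dose(1.0, depth_cm=3.0, field_cm=15.0))
```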

  4. Combination of a low tube voltage technique with the hybrid iterative reconstruction (iDose) algorithm at coronary CT angiography

    PubMed Central

    Funama, Yoshinori; Taguchi, Katsuyuki; Utsunomiya, Daisuke; Oda, Seitaro; Yanaga, Yumi; Yamashita, Yasuyuki; Awai, Kazuo

    2011-01-01

    We compared the performance of low tube voltage combined with hybrid iterative reconstruction (iDose) against standard and low tube voltage with filtered backprojection (FBP), using phantoms at CT coronary angiography (CTCA). At CTCA, the combination of low tube voltage with iDose resulted in significant image-quality improvements compared with low tube voltage with FBP. Image quality was the same or better despite a 76% reduction in radiation dose compared with standard tube voltage with FBP. PMID:21765305

  5. SU-E-I-82: Improving CT Image Quality for Radiation Therapy Using Iterative Reconstruction Algorithms and Slightly Increasing Imaging Doses

    SciTech Connect

    Noid, G; Chen, G; Tai, A; Li, X

    2014-06-01

    Purpose: Iterative reconstruction (IR) algorithms are developed to improve CT image quality (IQ) by reducing noise without diminishing spatial resolution or contrast. For CT in radiation therapy (RT), slightly increasing the imaging dose to improve IQ may be justified if it can substantially enhance structure delineation. The purpose of this study was to investigate and quantify the IQ enhancement resulting from increased imaging doses and the use of IR algorithms. Methods: CT images were acquired for phantoms, built to evaluate IQ metrics including spatial resolution, contrast and noise, with a variety of imaging protocols using a CT scanner (Definition AS Open, Siemens) installed inside a Linac room. Representative patients were scanned once the protocols were optimized. Both phantom and patient scans were reconstructed using the Sinogram Affirmed Iterative Reconstruction (SAFIRE) and Filtered Back Projection (FBP) methods. The IQ metrics of the obtained CTs were compared. Results: The IR techniques are demonstrated to preserve spatial resolution, as measured by the point spread function, and to reduce noise in comparison with traditional FBP. Driven by the reduction in noise, the contrast-to-noise ratio is doubled by adopting the highest SAFIRE strength. As expected, increasing the imaging dose reduces noise for both SAFIRE and FBP reconstructions. The contrast-to-noise ratio increases from 3 to 5 when the dose is increased by a factor of 4. Similar IQ improvements were observed in the CTs of selected patients with pancreas and prostate cancers. Conclusion: The IR techniques produce a measurable enhancement of CT IQ by reducing noise. Increasing the imaging dose further reduces noise independently of the IR techniques. The improved CT enables more accurate delineation of tumors and/or organs at risk during RT planning and delivery guidance.
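
    Contrast-to-noise ratio, the metric driving the reported improvement, in a few lines:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: contrast between an object ROI and the
    background, normalized by the background noise."""
    roi = np.asarray(roi, float)
    bg = np.asarray(background, float)
    return abs(roi.mean() - bg.mean()) / bg.std()
```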

  6. On the use of Gafchromic EBT3 films for validating a commercial electron Monte Carlo dose calculation algorithm

    NASA Astrophysics Data System (ADS)

    Chan, EuJin; Lydon, Jenny; Kron, Tomas

    2015-03-01

    This study aims to investigate the effects of oblique incidence, small field size and inhomogeneous media on the electron dose distribution, and to compare calculated (Elekta/CMS XiO) and measured results. All comparisons were done in terms of absolute dose. A new measuring method was developed for high resolution, absolute dose measurement of non-standard beams using Gafchromic® EBT3 film. A portable U-shaped holder was designed and constructed to hold EBT3 films vertically in a reproducible setup submerged in a water phantom. The experimental film method was verified with ionisation chamber measurements and agreed to within 2% or 1 mm. Agreement between XiO electron Monte Carlo (eMC) and EBT3 was within 2% or 2 mm for most standard fields and 3% or 3 mm for the non-standard fields. Larger differences were seen in the build-up region where XiO eMC overestimates dose by up to 10% for obliquely incident fields and underestimates the dose for small circular fields by up to 5% when compared to measurement. Calculations with inhomogeneous media mimicking ribs, lung and skull tissue placed at the side of the film in water agreed with measurement to within 3% or 3 mm. Gafchromic film in water proved to be a convenient high spatial resolution method to verify dose distributions from electrons in non-standard conditions including irradiation in inhomogeneous media.

  7. Quantification of dose differences between two versions of Acuros XB algorithm compared to Monte Carlo simulations--the effect on clinical patient treatment planning.

    PubMed

    Ojala, Jarkko Juhani; Kapanen, Mika

    2015-11-08

    A commercialized implementation of a linear Boltzmann transport equation solver, the Acuros XB algorithm (AXB), represents the most advanced class (type 'c') of photon radiotherapy dose calculation algorithms. The purpose of the study was to quantify the effects of the modifications implemented in the more recent version 11 of the AXB (AXB11), compared with the first commercial implementation, version 10 (AXB10), in various anatomical regions in clinical treatment planning. Both versions of the AXB were part of Varian's Eclipse clinical treatment planning system, and treatment plans for 10 patients were created using intensity-modulated radiotherapy (IMRT) and volumetric-modulated arc radiotherapy (VMAT). The plans were first created with the AXB10 and then recalculated with the AXB11 and with full Monte Carlo (MC) simulations. Taking the full MC simulations as reference, a DVH analysis for the gross tumor and planning target volumes (GTV and PTV) and organs at risk was performed, and 3D gamma agreement index (GAI) values within a 15% isodose region and for the PTV were determined. Although differences up to 12% were seen in the DVH analysis between the MC simulations and the AXB, no general conclusion can be drawn from this study that the modifications made in the AXB11 imply that the dose calculation accuracy of the AXB10 is inferior in clinical patient treatment planning. The only clear improvement of the AXB11 over the AXB10 is the dose calculation accuracy in air cavities. In general, no large deviations are present in the DVH analysis results between the two versions of the algorithm, and the results of the 3D gamma analysis do not favor one or the other. Thus it may be concluded that the results of the comprehensive studies assessing the accuracy of the AXB10 may be extended to the AXB11.

  8. Development of an algorithm for feed-forward chlorine dosing of lettuce wash operations and correlation of chlorine profile with Escherichia coli O157:H7 inactivation.

    PubMed

    Zhou, Bin; Luo, Yaguang; Nou, Xiangwu; Millner, Patricia

    2014-04-01

    The dynamic interactions of chlorine and organic matter during a simulated fresh-cut produce wash process and the consequences for Escherichia coli O157:H7 inactivation were investigated. An algorithm for a chlorine feed-forward dosing scheme to maintain a stable chlorine level was further developed and validated. Organic loads with chemical oxygen demand of 300 to 800 mg/liter were modeled using iceberg lettuce. Sodium hypochlorite (NaOCl) was added to the simulated wash solution incrementally. The solution pH, free and total chlorine, and oxidation-reduction potential were monitored, and the chlorination breakpoint and chloramine humps were determined. The results indicated that the E. coli O157:H7 inactivation curve mirrored that of the free chlorine during the chlorine replenishment process: a slight reduction in E. coli O157:H7 was observed as the combined chlorine hump was approached, while the E. coli O157:H7 cell populations declined sharply after chlorination passed the chlorine hump and decreased to below the detection limit (<0.75 most probable number per ml) after the chlorination breakpoint was reached. While the amounts of NaOCl required for reaching the chloramine humps and chlorination breakpoints depended on the organic loads, there was a linear correlation between NaOCl input and free chlorine in the wash solution once NaOCl dosing passed the chlorination breakpoint, regardless of organic load. The data obtained were further exploited to develop a NaOCl dosing algorithm for maintaining a stable chlorine concentration in the presence of an increasing organic load. The validation test results indicated that free chlorine could be maintained at target levels using such an algorithm, while the pH and oxidation-reduction potential were also stably maintained.
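
    A sketch of the feed-forward dosing rule the abstract derives: dose enough NaOCl to reach the chlorination breakpoint for the current organic load, then add the extra input implied by the post-breakpoint linear NaOCl/free-chlorine relationship. Both coefficients below are hypothetical and would be fitted from titration data such as that described in the study.

```python
def naocl_dose_mg_per_l(cod_mg_per_l, target_fc_mg_per_l,
                        breakpoint_per_cod=0.9, slope=0.8):
    """Feed-forward NaOCl dose for a measured organic load.

    breakpoint_per_cod: NaOCl demand to reach breakpoint per unit COD
                        (hypothetical coefficient).
    slope: mg/L of free chlorine gained per mg/L NaOCl past breakpoint
           (hypothetical coefficient for the linear regime).
    """
    breakpoint_demand = breakpoint_per_cod * cod_mg_per_l
    return breakpoint_demand + target_fc_mg_per_l / slope

# e.g. wash water at COD 500 mg/L with a 10 mg/L free-chlorine target
print(naocl_dose_mg_per_l(500.0, 10.0))
```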

  9. CTC-ask: a new algorithm for conversion of CT numbers to tissue parameters for Monte Carlo dose calculations applying DICOM RS knowledge.

    PubMed

    Ottosson, Rickard O; Behrens, Claus F

    2011-11-21

    One of the building blocks in Monte Carlo (MC) treatment planning is to convert patient CT data to MC compatible phantoms, consisting of density and media matrices. The resulting dose distribution is highly influenced by the accuracy of the conversion. Two major contributing factors are precise conversion of CT number to density and proper differentiation between air and lung. Existing tools do not address this issue specifically. Moreover, their density conversion may depend on the number of media used. Differentiation between air and lung is an important task in MC treatment planning and misassignment may lead to local dose errors on the order of 10%. A novel algorithm, CTC-ask, is presented in this study. It enables locally confined constraints for the media assignment and is independent of the number of media used for the conversion of CT number to density. MC compatible phantoms were generated for two clinical cases using a CT-conversion scheme implemented in both CTC-ask and the DICOM-RT toolbox. Full MC dose calculation was subsequently conducted and the resulting dose distributions were compared. The DICOM-RT toolbox inaccurately assigned lung in 9.9% and 12.2% of the voxels located outside of the lungs for the two cases studied, respectively. This was completely avoided by CTC-ask. CTC-ask is able to reduce anatomically irrational media assignment. The CTC-ask source code can be made available upon request to the authors.

  10. CTC-ask: a new algorithm for conversion of CT numbers to tissue parameters for Monte Carlo dose calculations applying DICOM RS knowledge

    NASA Astrophysics Data System (ADS)

    Ottosson, Rickard O.; Behrens, Claus F.

    2011-11-01

    One of the building blocks in Monte Carlo (MC) treatment planning is to convert patient CT data to MC compatible phantoms, consisting of density and media matrices. The resulting dose distribution is highly influenced by the accuracy of the conversion. Two major contributing factors are precise conversion of CT number to density and proper differentiation between air and lung. Existing tools do not address this issue specifically. Moreover, their density conversion may depend on the number of media used. Differentiation between air and lung is an important task in MC treatment planning and misassignment may lead to local dose errors on the order of 10%. A novel algorithm, CTC-ask, is presented in this study. It enables locally confined constraints for the media assignment and is independent of the number of media used for the conversion of CT number to density. MC compatible phantoms were generated for two clinical cases using a CT-conversion scheme implemented in both CTC-ask and the DICOM-RT toolbox. Full MC dose calculation was subsequently conducted and the resulting dose distributions were compared. The DICOM-RT toolbox inaccurately assigned lung in 9.9% and 12.2% of the voxels located outside of the lungs for the two cases studied, respectively. This was completely avoided by CTC-ask. CTC-ask is able to reduce anatomically irrational media assignment. The CTC-ask source code can be made available upon request to the authors.
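
    A compact sketch of the two CTC-ask ideas, density conversion independent of the media list and structure-confined media assignment (here, air vs. lung disambiguated with a lungs contour instead of HU alone). The calibration points and HU thresholds are hypothetical, not the algorithm's actual values.

```python
import numpy as np

# Hypothetical piecewise-linear HU-to-density calibration
HU_PTS = np.array([-1000.0, -700.0, 0.0, 1000.0, 3000.0])
RHO_PTS = np.array([0.001, 0.30, 1.00, 1.55, 2.80])  # g/cm^3

def ct_to_phantom(ct, lung_mask):
    """Build MC-compatible density and media matrices from CT numbers.

    Density comes from the calibration curve regardless of how many
    media are defined; low-HU voxels become lung only inside the lungs
    contour, avoiding anatomically irrational air/lung assignment.
    """
    density = np.interp(ct, HU_PTS, RHO_PTS)
    media = np.full(ct.shape, "tissue", dtype=object)
    media[ct < -300] = "air"                 # low-HU voxels default to air...
    media[(ct < -300) & lung_mask] = "lung"  # ...unless inside the lungs contour
    media[ct > 250] = "bone"
    return density, media

ct = np.array([[-900.0, -400.0], [40.0, 600.0]])
lungs = np.array([[False, True], [False, False]])
rho, med = ct_to_phantom(ct, lungs)
```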

  11. Comparison of Planned Dose Distributions Calculated by Monte Carlo and Ray-Trace Algorithms for the Treatment of Lung Tumors With CyberKnife: A Preliminary Study in 33 Patients

    SciTech Connect

    Wilcox, Ellen E.; Daskalov, George M.; Lincoln, Holly; Shumway, Richard C.; Kaplan, Bruce M.; Colasanto, Joseph M.

    2010-05-01

    Purpose: To compare dose distributions calculated using the Monte Carlo algorithm (MC) and Ray-Trace algorithm (effective path length method, EPL) for CyberKnife treatments of lung tumors. Materials and Methods: An acceptable treatment plan is created using Multiplan 2.1 and MC dose calculation. Dose is prescribed to the isodose line encompassing 95% of the planning target volume (PTV) and this is the plan clinically delivered. For comparison, the Ray-Trace algorithm with heterogeneity correction (EPL) is used to recalculate the dose distribution for this plan using the same beams, beam directions, and monitor units (MUs). Results: The maximum doses calculated by the EPL to target PTV are uniformly larger than the MC plans by up to a factor of 1.63. Up to a factor of four larger maximum dose differences are observed for the critical structures in the chest. More beams traversing larger distances through low density lung are associated with larger differences, consistent with the fact that the EPL overestimates doses in low-density structures and this effect is more pronounced as collimator size decreases. Conclusions: We establish that changing the treatment plan calculation algorithm from EPL to MC can produce large differences in target and critical organs' dose coverage. The observed discrepancies are larger for plans using smaller collimator sizes and have strong dependency on the anatomical relationship of target-critical structures.

  12. Development and Evaluation of a New Air Exchange Rate Algorithm for the Stochastic Human Exposure and Dose Simulation Model

    EPA Science Inventory

    between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure and Dose Simulation (SHEDS) model is a population exposure model that uses a pro...

  13. Algorithm for x-ray beam hardening and scatter correction in low-dose cone-beam CT: phantom studies

    NASA Astrophysics Data System (ADS)

    Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing

    2016-03-01

    X-ray scatter, along with beam hardening, poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in image artifacts, contrast reduction, and loss of CT number accuracy; meanwhile, the x-ray radiation dose is also a concern. Many scatter and beam hardening correction methods have been developed, but independently of each other, and they are rarely combined with low-dose CT reconstruction. In this paper, we combine scatter suppression with beam hardening correction for sparse-view CT reconstruction to improve CT image quality and reduce CT radiation. First, scatter was measured, estimated, and removed using measurement-based methods, assuming that the signal in the lead blocker shadow is attributable only to x-ray scatter. Second, beam hardening was modeled by estimating an equivalent attenuation coefficient at the effective energy, which was integrated into the forward projector of the algebraic reconstruction technique (ART). Finally, compressed sensing (CS) iterative reconstruction was carried out for sparse-view CT reconstruction to reduce the CT radiation. Preliminary Monte Carlo simulated experiments indicate that with only about 25% of the conventional dose, our method reduces the magnitude of the cupping artifact by a factor of 6.1, increases the contrast by a factor of 1.4 and the CNR by a factor of 15. The proposed method can provide good reconstructed images from a few view projections, with effective suppression of artifacts caused by scatter and beam hardening, as well as a reduced radiation dose. With this proposed framework and modeling, it may provide a new way for low-dose CT imaging.
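
    A sketch of the measurement-based scatter estimate in the first step, under the stated assumption that signal detected in the lead-blocker shadow is pure scatter; the detector layout and blocker positions are hypothetical.

```python
import numpy as np

def estimate_scatter(projection, blocker_rows):
    """Interpolate a smooth scatter field over the whole detector from
    scatter-only samples read in the blocker shadows."""
    rows = np.arange(projection.shape[0])
    scatter = np.empty_like(projection, dtype=float)
    for col in range(projection.shape[1]):
        scatter[:, col] = np.interp(rows, blocker_rows,
                                    projection[blocker_rows, col])
    return scatter

def scatter_corrected(projection, blocker_rows):
    primary = projection - estimate_scatter(projection, blocker_rows)
    return np.clip(primary, 1e-6, None)   # keep the log transform well defined
```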

  14. Assessment of the dose reduction potential of a model-based iterative reconstruction algorithm using a task-based performance metrology

    SciTech Connect

    Samei, Ehsan; Richard, Samuel

    2015-01-15

    indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology based on ACR phantom measurements extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited by statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality by different amounts for different tasks.

  15. Effect of Radiation Dose Reduction and Reconstruction Algorithm on Image Noise, Contrast, Resolution, and Detectability of Subtle Hypoattenuating Liver Lesions at Multidetector CT: Filtered Back Projection versus a Commercial Model-based Iterative Reconstruction Algorithm.

    PubMed

    Solomon, Justin; Marin, Daniele; Roy Choudhury, Kingshuk; Patel, Bhavik; Samei, Ehsan

    2017-02-07

    Purpose To determine the effect of radiation dose and iterative reconstruction (IR) on noise, contrast, resolution, and observer-based detectability of subtle hypoattenuating liver lesions and to estimate the dose reduction potential of the IR algorithm in question. Materials and Methods This prospective, single-center, HIPAA-compliant study was approved by the institutional review board. A dual-source computed tomography (CT) system was used to reconstruct CT projection data from 21 patients into six radiation dose levels (12.5%, 25%, 37.5%, 50%, 75%, and 100%) on the basis of two CT acquisitions. A series of virtual liver lesions (five per patient, 105 total, lesion-to-liver prereconstruction contrast of -15 HU, 12-mm diameter) were inserted into the raw CT projection data and images were reconstructed with filtered back projection (FBP) (B31f kernel) and sinogram-affirmed IR (SAFIRE) (I31f-5 kernel). Image noise (pixel standard deviation), lesion contrast (after reconstruction), lesion boundary sharpness (average normalized gradient at lesion boundary), and contrast-to-noise ratio (CNR) were compared. Next, a two-alternative forced choice perception experiment was performed (16 readers [six radiologists, 10 medical physicists]). A linear mixed-effects statistical model was used to compare detection accuracy between FBP and SAFIRE and to estimate the radiation dose reduction potential of SAFIRE. Results Compared with FBP, SAFIRE reduced noise by a mean of 53% ± 5, lesion contrast by 12% ± 4, and lesion sharpness by 13% ± 10 but increased CNR by 89% ± 19. Detection accuracy was 2% higher on average with SAFIRE than with FBP (P = .03), which translated into an estimated radiation dose reduction potential (±95% confidence interval) of 16% ± 13. Conclusion SAFIRE increases detectability at a given radiation dose (approximately 2% increase in detection accuracy) and allows for imaging at reduced radiation dose (16% ± 13), while maintaining low
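
    A small sketch of the image metrics compared above, computed on illustrative ROI masks; noise is the pixel standard deviation, contrast is measured after reconstruction, and boundary sharpness is approximated by the mean gradient magnitude at the lesion edge.

```python
import numpy as np

def lesion_metrics(img, lesion_mask, background_mask, edge_mask):
    img = img.astype(float)
    noise = img[background_mask].std()                 # pixel standard deviation
    contrast = img[lesion_mask].mean() - img[background_mask].mean()
    gy, gx = np.gradient(img)
    sharpness = np.hypot(gx, gy)[edge_mask].mean()     # boundary gradient
    return {"noise": noise, "contrast": contrast,
            "cnr": abs(contrast) / noise, "sharpness": sharpness}
```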

  16. Low kilovoltage peak (kVp) with an adaptive statistical iterative reconstruction algorithm in computed tomography urography: evaluation of image quality and radiation dose

    PubMed Central

    Zhou, Zhiguo; Chen, Haixi; Wei, Wei; Zhou, Shanghui; Xu, Jingbo; Wang, Xifu; Wang, Qingguo; Zhang, Guixiang; Zhang, Zhuoli; Zheng, Linfeng

    2016-01-01

    Purpose: The purpose of this study was to evaluate the image quality and radiation dose in computed tomography urography (CTU) images acquired with a low kilovoltage peak (kVp) in combination with an adaptive statistical iterative reconstruction (ASiR) algorithm. Methods: A total of 45 subjects (18 women, 27 men) who underwent CTU with kV assist software for automatic selection of the optimal kVp were included and divided into two groups (A and B) based on the kVp and image reconstruction algorithm: group A consisted of patients who underwent CTU at 80 or 100 kVp and whose images were reconstructed with the 50% ASiR algorithm (n=32); group B consisted of patients who underwent CTU at 120 kVp and whose images were reconstructed with the filtered back projection (FBP) algorithm (n=13). The images were separately reconstructed with volume rendering (VR) and maximum intensity projection (MIP). Finally, the image quality was evaluated using an image score, CT attenuation, image noise, the contrast-to-noise ratio (CNR) of the renal pelvis-to-abdominal visceral fat and the signal-to-noise ratio (SNR) of the renal pelvis. The radiation dose was assessed using volume CT dose index (CTDIvol), dose-length product (DLP) and effective dose (ED). Results: For groups A and B, the subjective image scores for the VR reconstruction images were 3.9±0.4 and 3.8±0.4, respectively, while those for the MIP reconstruction images were 3.8±0.4 and 3.6±0.6, respectively. No significant difference was found (p>0.05) between the two groups’ image scores for either the VR or MIP reconstruction images. Additionally, the inter-reviewer image scores did not significantly differ (p>0.05). The mean attenuation of the bilateral renal pelvis in group A was significantly higher than that in group B (271.4±57.6 vs. 221.8±35.3 HU, p<0.05), whereas the image noise in group A was significantly lower than that in group B (7.9±2.1 vs. 10.5±2.3 HU, p<0.05). The CNR and SNR in group A were
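
    For context on the dose indices named above: CTDIvol and DLP are reported by the scanner, and effective dose is commonly approximated as a body-region conversion coefficient times DLP. A hedged sketch; the coefficient below is a typical adult abdomen/pelvis value, an assumption rather than a number from this study.

```python
def effective_dose_msv(dlp_mgy_cm, k=0.015):
    """ED ~ k * DLP, with k in mSv/(mGy*cm) for the scanned body region."""
    return k * dlp_mgy_cm

# e.g. effective_dose_msv(800.0) -> 12.0 mSv
```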

  17. A 3D superposition pencil beam dose calculation algorithm for a 60Co therapy unit and its verification by MC simulation

    NASA Astrophysics Data System (ADS)

    Koncek, O.; Krivonoska, J.

    2014-11-01

    The MCNP Monte Carlo code was used to simulate the collimating system of the 60Co therapy unit to calculate the primary and scattered photon fluences as well as the electron contamination incident to the isocentric plane as the functions of the irradiation field size. Furthermore, a Monte Carlo simulation for the polyenergetic Pencil Beam Kernels (PBKs) generation was performed using the calculated photon and electron spectra. The PBK was analytically fitted to speed up the dose calculation using the convolution technique in the homogeneous media. The quality of the PBK fit was verified by comparing the calculated and simulated 60Co broad beam profiles and depth dose curves in a homogeneous water medium. The inhomogeneity correction coefficients were derived from the PBK simulation of an inhomogeneous slab phantom consisting of various materials. The inhomogeneity calculation model is based on the changes in the PBK radial displacement and on the change of the forward and backward electron scattering. The inhomogeneity correction is derived from the electron density values gained from a complete 3D CT array and considers different electron densities through which the pencil beam is propagated as well as the electron density values located between the interaction point and the point of dose deposition. Important aspects and details of the algorithm implementation are also described in this study.
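
    The convolution step described above can be sketched as follows: in a homogeneous medium, the dose in a plane is the 2D convolution of the incident fluence with the pencil beam kernel, computed here via FFT. The Gaussian kernel below is a toy stand-in for the fitted polyenergetic 60Co PBK.

```python
import numpy as np

def convolve_dose(fluence, pbk):
    """Periodic FFT convolution (pad in practice to avoid wrap-around)."""
    F = np.fft.rfft2(fluence)
    K = np.fft.rfft2(np.fft.ifftshift(pbk))   # kernel centered at the origin
    return np.fft.irfft2(F * K, s=fluence.shape)

x = np.linspace(-10.0, 10.0, 128)
X, Y = np.meshgrid(x, x)
pbk = np.exp(-(X**2 + Y**2) / 2.0)
pbk /= pbk.sum()                              # toy radially symmetric kernel
fluence = ((np.abs(X) < 5) & (np.abs(Y) < 5)).astype(float)   # 10 x 10 field
dose_plane = convolve_dose(fluence, pbk)
```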

  18. The impact of uncertainties in the CT conversion algorithm when predicting proton beam ranges in patients from dose and PET-activity distributions.

    PubMed

    España, Samuel; Paganetti, Harald

    2010-12-21

    The advantages of a finite range of proton beams can only be partly exploited in radiation therapy unless the range can be predicted in patient anatomy with <2 mm accuracy (for non-moving targets). Monte Carlo dose calculation aims at 1-2 mm accuracy in dose prediction, and proton-induced PET imaging aims at ∼2 mm accuracy in range verification. The latter is done using Monte Carlo predicted PET images. Monte Carlo methods are based on CT images to describe patient anatomy. The dose calculation algorithm and the CT resolution/artifacts might affect dose calculation accuracy. Additionally, when using Monte Carlo for PET range verification, the biological decay model and the cross sections for positron emitter production affect predicted PET images. The goal of this work is to study the effect of uncertainties in the CT conversion on the proton beam range predicted by Monte Carlo dose calculations and proton-induced PET signals. Conversion schemes to assign density and elemental composition based on a CT image of the patient define a unique Hounsfield unit (HU) to tissue parameters relationship. Uncertainties are introduced because there is no unique relationship between HU and tissue parameters. In this work, different conversion schemes based on a stoichiometric calibration method as well as different numbers of tissue bins were considered in three head and neck patients. For Monte Carlo dose calculation, the results show close to zero (<0.5 mm) differences in range using different conversion schemes. Further, a reduction of the number of bins used to define individual tissues down to 13 did not affect the accuracy. In the case of simulated PET images we found a more pronounced sensitivity on the CT conversion scheme with a mean fall-off position variation of about 1 mm. We conclude that proton dose distributions based on Monte Carlo calculation are only slightly affected by the uncertainty on density and elemental composition introduced by unique assignment to

  19. Dosimetric comparison of lung stereotactic body radiotherapy treatment plans using averaged computed tomography and end-exhalation computed tomography images: Evaluation of the effect of different dose-calculation algorithms and prescription methods.

    PubMed

    Mitsuyoshi, Takamasa; Nakamura, Mitsuhiro; Matsuo, Yukinori; Ueki, Nami; Nakamura, Akira; Iizuka, Yusuke; Mampuya, Wambaka Ange; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    The purpose of this article is to quantitatively evaluate differences in dose distributions calculated using various computed tomography (CT) datasets, dose-calculation algorithms, and prescription methods in stereotactic body radiotherapy (SBRT) for patients with early-stage lung cancer. Data on 29 patients with early-stage lung cancer treated with SBRT were retrospectively analyzed. Averaged CT (Ave-CT) and expiratory CT (Ex-CT) images were reconstructed for each patient using 4-dimensional CT data. Dose distributions were initially calculated using the Ave-CT images and recalculated (in the same monitor units [MUs]) by employing Ex-CT images with the same beam arrangements. The dose-volume parameters, including D95, D90, D50, and D2 of the planning target volume (PTV), were compared between the 2 image sets. To explore the influence of dose-calculation algorithms and prescription methods on the differences in dose distributions evident between Ave-CT and Ex-CT images, we calculated dose distributions using the following 3 different algorithms: x-ray Voxel Monte Carlo (XVMC), Acuros XB (AXB), and the anisotropic analytical algorithm (AAA). We also used 2 different dose-prescription methods; the isocenter prescription and the PTV periphery prescription methods. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data were within 3 percentage points (%pts) employing the isocenter prescription method, and within 1.5%pts using the PTV periphery prescription method, irrespective of which of the 3 algorithms (XVMC, AXB, and AAA) was employed. The frequencies of dose-volume parameters differing by >1%pt when the XVMC and AXB were used were greater than those associated with the use of the AAA, regardless of the dose-prescription method employed. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data on patients who underwent lung SBRT were within 3%pts, regardless of the dose-calculation algorithm or the dose

  20. SU-E-I-89: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Pediatric Anthropomorphic and ACR Phantoms

    SciTech Connect

    Mahmood, U; Erdi, Y; Wang, W

    2014-06-01

    Purpose: To assess the impact of General Electric's automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of a pediatric anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, 80 mA, 0.7 s rotation time. Image quality was assessed by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available adaptive statistical iterative reconstruction (ASiR) settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: For the baseline protocol, CNR was found to decrease from 0.460 ± 0.182 to 0.420 ± 0.057 when kVa was activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude, as realized by the increase in CNR to 0.620 ± 0.040. The liver dose decreased by 30% with kVa activation. Conclusion: Application of kVa reduces the liver dose by up to 30%. However, a reduction in image quality for abdominal scans occurs when using the automated tube voltage selection feature with the baseline protocol. As demonstrated by the CNR and NPS analysis, the texture and magnitude of the noise in reconstructed images at ASiR 40% were found to be the same as in our baseline images. We have demonstrated that a 30% dose reduction is possible when using 40% ASiR with kVa in pediatric patients.
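
    A sketch of the NPS estimate described above, shown per-slice in 2D for brevity (the study used a 3D FFT of the uniformity section); the ROI size, detrending, and normalization conventions are assumptions.

```python
import numpy as np

def nps_2d(rois, pixel_mm):
    """rois: (N, n, n) stack of ROIs from the uniform phantom section."""
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)    # detrend each ROI
    n = rois.shape[1]
    spectra = np.abs(np.fft.fft2(rois))**2
    nps = spectra.mean(axis=0) * pixel_mm**2 / (n * n)     # ensemble average
    freqs = np.fft.fftfreq(n, d=pixel_mm)                  # cycles/mm
    return nps, freqs

# The peak frequency difference (PFD) is then the shift in the argmax of the
# radially averaged NPS relative to the baseline protocol.
```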

  1. Prediction of human observer performance in a 2-alternative forced choice low-contrast detection task using channelized Hotelling observer: Impact of radiation dose and reconstruction algorithms

    SciTech Connect

    Yu Lifeng; Leng Shuai; Chen Lingyun; Kofler, James M.; McCollough, Cynthia H.; Carter, Rickey E.

    2013-04-15

    Purpose: Efficient optimization of CT protocols demands a quantitative approach to predicting human observer performance on specific tasks at various scan and reconstruction settings. The goal of this work was to investigate how well a channelized Hotelling observer (CHO) can predict human observer performance on 2-alternative forced choice (2AFC) lesion-detection tasks at various dose levels and two different reconstruction algorithms: a filtered-backprojection (FBP) and an iterative reconstruction (IR) method. Methods: A 35 × 26 cm{sup 2} torso-shaped phantom filled with water was used to simulate an average-sized patient. Three rods with different diameters (small: 3 mm; medium: 5 mm; large: 9 mm) were placed in the center region of the phantom to simulate small, medium, and large lesions. The contrast relative to background was -15 HU at 120 kV. The phantom was scanned 100 times using automatic exposure control each at 60, 120, 240, 360, and 480 quality reference mAs on a 128-slice scanner. After removing the three rods, the water phantom was again scanned 100 times to provide signal-absent background images at the exact same locations. By extracting regions of interest around the three rods and on the signal-absent images, the authors generated 21 2AFC studies. Each 2AFC study had 100 trials, with each trial consisting of a signal-present image and a signal-absent image side-by-side in randomized order. In total, 2100 trials were presented to both the model and human observers. Four medical physicists acted as human observers. For the model observer, the authors used a CHO with Gabor channels, which involves six channel passbands, five orientations, and two phases, leading to a total of 60 channels. The performance predicted by the CHO was compared with that obtained by four medical physicists at each 2AFC study. Results: The human and model observers were highly correlated at each dose level for each lesion size for both FBP and IR. The
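
    A compact sketch of a channelized Hotelling observer for such 2AFC data; the Gabor channel construction and counts below are simplified stand-ins (the study used 60 channels: six passbands, five orientations, two phases).

```python
import numpy as np

def gabor_channels(n, freqs=(0.05, 0.1, 0.2), thetas=(0.0, np.pi/3, 2*np.pi/3)):
    y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
    chans = []
    for f in freqs:
        for t in thetas:
            u = x * np.cos(t) + y * np.sin(t)
            env = np.exp(-2.0 * (f**2) * (x**2 + y**2))    # crude Gaussian envelope
            chans.append((env * np.cos(2 * np.pi * f * u)).ravel())
    return np.array(chans)                                 # (n_channels, n_pixels)

def cho_detectability(signal_imgs, noise_imgs):
    """Hotelling detectability index d' in channel space; 2AFC percent
    correct follows as Phi(d'/sqrt(2))."""
    U = gabor_channels(signal_imgs.shape[1])
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U.T   # channel outputs
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ U.T
    dv = vs.mean(0) - vn.mean(0)
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))                # pooled channel covariance
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```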

  2. SU-E-I-81: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Adult Anthropomorphic and ACR Phantoms

    SciTech Connect

    Mahmood, U; Erdi, Y; Wang, W

    2014-06-01

    Purpose: To assess the impact of General Electric's (GE) automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of an adult anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, Auto mA (180 to 380 mA), noise index (NI) = 14, adaptive statistical iterative reconstruction (ASiR) of 20%, 0.8 s rotation time. Image quality was evaluated by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available ASiR settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: The CNR for the adult male was found to decrease from CNR = 0.912 ± 0.045 for the baseline protocol without kVa to a CNR = 0.756 ± 0.049 with kVa activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude, as realized by the increase in CNR to 0.903 ± 0.023. The difference in the central liver dose with and without kVa was found to be 0.07%. Conclusion: Dose reduction was insignificant in the adult phantom. As determined by NPS analysis, ASiR of 40% produced images with similar noise texture to the baseline protocol. However, the CNR at ASiR of 40% with kVa fails to meet the current ACR CNR passing requirement of 1.0.

  3. SU-E-T-800: Verification of Acuros XB Dose Calculation Algorithm at Air Cavity-Tissue Interface Using Film Measurement for Small Fields of 6-MV Flattening Filter-Free Beams

    SciTech Connect

    Kang, S; Suh, T; Chung, J

    2015-06-15

    Purpose: To verify the dose accuracy of the Acuros XB (AXB) dose calculation algorithm at the air-tissue interface using an inhomogeneous phantom for 6-MV flattening filter-free (FFF) beams. Methods: An inhomogeneous phantom containing an air cavity was manufactured for verifying dose accuracy at the air-tissue interface. The phantom was built with air cavities of 1 and 3 cm thickness. To evaluate the central axis doses (CAD) and dose profiles at the interface, dose calculations were performed for 3 × 3 and 4 × 4 cm{sup 2} fields of 6-MV FFF beams with AAA and AXB in the Eclipse treatment planning system. Measurements in this region were performed with Gafchromic film. Root mean square errors (RMSE) were analyzed between calculated and measured dose profiles. Dose profiles were divided into an inner-profile (>80%) and a penumbra (20% to 80%) region for evaluating the RMSE. To quantify the difference between distributions, gamma evaluation with 3%/3mm criteria was used to determine agreement. Results: In the percentage differences (%Diffs) between measured and calculated CAD at the interface, AXB showed better agreement than AAA. The %Diffs increased with increasing air cavity thickness, and this behavior was similar for both algorithms. In the RMSEs of the inner profile, AXB was more accurate than AAA; the difference was up to a factor of 6, due to overestimation by AAA. Penumbra RMSEs increased with measurement depth, and gamma evaluation likewise showed that passing rates decreased in the penumbra. Conclusion: This study demonstrated that dose calculation with AXB is more accurate than with AAA at the air-tissue interface. The 2D dose distributions with AXB, for both the inner profile and the penumbra, agreed better with measurement than those with AAA across the measurement depths and air cavity sizes studied.

  4. Validation of a method for in vivo 3D dose reconstruction for IMRT and VMAT treatments using on-treatment EPID images and a model-based forward-calculation algorithm

    SciTech Connect

    Van Uytven, Eric; Van Beek, Timothy; McCowan, Peter M.; Chytyk-Praznik, Krista; Greer, Peter B.; McCurdy, Boyd M. C.

    2015-12-15

    Purpose: Radiation treatments are trending toward delivering higher doses per fraction under stereotactic radiosurgery and hypofractionated treatment regimens. There is a need for accurate 3D in vivo patient dose verification using electronic portal imaging device (EPID) measurements. This work presents a model-based technique to compute full three-dimensional patient dose reconstructed from on-treatment EPID portal images (i.e., transmission images). Methods: EPID dose is converted to incident fluence entering the patient using a series of steps which include converting measured EPID dose to fluence at the detector plane and then back-projecting the primary source component of the EPID fluence upstream of the patient. Incident fluence is then recombined with predicted extra-focal fluence and used to calculate 3D patient dose via a collapsed-cone convolution method. This method is implemented in an iterative manner, although in practice it provides accurate results in a single iteration. The robustness of the dose reconstruction technique is demonstrated with several simple slab phantom and nine anthropomorphic phantom cases. Prostate, head and neck, and lung treatments are all included as well as a range of delivery techniques including VMAT and dynamic intensity modulated radiation therapy (IMRT). Results: Results indicate that the patient dose reconstruction algorithm compares well with treatment planning system computed doses for controlled test situations. For simple phantom and square field tests, agreement was excellent with a 2%/2 mm 3D chi pass rate ≥98.9%. On anthropomorphic phantoms, the 2%/2 mm 3D chi pass rates ranged from 79.9% to 99.9% in the planning target volume (PTV) region and 96.5% to 100% in the low dose region (>20% of prescription, excluding PTV and skin build-up region). Conclusions: An algorithm to reconstruct delivered patient 3D doses from EPID exit dosimetry measurements was presented. The method was applied to phantom and patient

  5. Dose optimization tool

    NASA Astrophysics Data System (ADS)

    Amir, Ornit; Braunstein, David; Altman, Ami

    2003-05-01

    A dose optimization tool for CT scanners is presented, using patient raw data to calculate noise. The tool uses a single patient image, which is modified to simulate various lower doses. Dose optimization is carried out without extra measurements by interactively visualizing the dose-induced changes in this image. The tool can be used either off-line, on existing image(s), or, as a prerequisite for dose optimization for the specific patient, during the patient's clinical study. The low-dose simulation algorithm reconstructs two images from a single measurement and uses those images to create the various lower-dose images. This algorithm enables fast simulation of various low-dose (mAs) images on a real patient image.

  6. SU-E-T-579: On the Relative Sensitivity of Monte Carlo and Pencil Beam Dose Calculation Algorithms to CT Metal Artifacts in Volumetric-Modulated Arc Spine Radiosurgery (RS)

    SciTech Connect

    Wong, M; Lee, V; Leung, R; Lee, K; Law, G; Tung, S; Chan, M; Blanck, O

    2015-06-15

    Purpose: Investigating the relative sensitivity of Monte Carlo (MC) and Pencil Beam (PB) dose calculation algorithms to low-Z (titanium) metallic artifacts is important for accurate and consistent dose reporting in postoperative spinal RS. Methods: Sensitivity analysis of the MC and PB dose calculation algorithms in the Monaco v.3.3 treatment planning system (Elekta CMS, Maryland Heights, MO, USA) was performed using CT images reconstructed without (plain) and with Orthopedic Metal Artifact Reduction (OMAR; Philips Healthcare, Cleveland, OH, USA). 6MV and 10MV volumetric-modulated arc (VMAT) RS plans were obtained for MC and PB on the plain and OMAR images (MC-plain/OMAR and PB-plain/OMAR). Results: Maximum differences in dose to 0.2cc (D0.2cc) of spinal cord and cord +2mm for 6MV and 10MV VMAT plans were 0.1Gy between MC-OMAR and MC-plain, and between PB-OMAR and PB-plain. Planning target volume (PTV) dose coverage changed by 0.1±0.7% and 0.2±0.3% for 6MV and 10MV from MC-OMAR to MC-plain, and by 0.1±0.1% for both 6MV and 10MV from PB-OMAR to PB-plain, respectively. In no case, for either MC or PB, did the D0.2cc to the spinal cord exceed the planned tolerance when changing from OMAR to plain CT in the dose calculations. Conclusion: Dosimetric impacts of metallic artifacts caused by low-Z metallic spinal hardware (mainly titanium alloy) are not clinically important in VMAT-based spine RS, without significant dependence on the dose calculation method (MC or PB) or photon energy ≥ 6MV. There is no need to use one algorithm instead of the other to reduce uncertainty in dose reporting. The dose calculation method used in spine RS should be consistent with usual clinical practice.

  7. SU-E-T-219: Comprehensive Validation of the Electron Monte Carlo Dose Calculation Algorithm in RayStation Treatment Planning System for An Elekta Linear Accelerator with AgilityTM Treatment Head

    SciTech Connect

    Wang, Yi; Park, Yang-Kyun; Doppke, Karen P.

    2015-06-15

    Purpose: This study evaluated the performance of the electron Monte Carlo dose calculation algorithm in RayStation v4.0 for an Elekta machine with Agility™ treatment head. Methods: The machine has five electron energies (6–18 MeV) and five applicators (6×6 to 25×25 cm{sup 2}). The dose (cGy/MU at d{sub max}), depth dose and profiles were measured in water using an electron diode at 100 cm SSD for nine square fields ≥2×2 cm{sup 2} and four complex fields at normal incidence, and a 14×14 cm{sup 2} field at 15° and 30° incidence. The dose was also measured for three square fields ≥4×4 cm{sup 2} at 98, 105 and 110 cm SSD. Using selected energies, EBT3 radiochromic film was used for dose measurements in slab-shaped inhomogeneous phantoms and a breast phantom with surface curvature. The measured and calculated doses were analyzed using a gamma criterion of 3%/3 mm. Results: The calculated and measured doses differed by <3% for 116 of the 120 points, and by <5% for the 4×4 cm{sup 2} field at 110 cm SSD at 9–18 MeV. The gamma analysis comparing the 105 pairs of in-water isodoses gave passing rates >98.1%. The planar doses measured from films placed at 0.5 cm below a lung/tissue layer (12 MeV) and 1.0 cm below a bone/air layer (15 MeV) showed excellent agreement with calculations, with gamma passing rates of 99.9% and 98.5%, respectively. At the breast-tissue interface, the gamma passing rate was >98.8% at 12–18 MeV. The film results directly validated the accuracy of the MU calculation and of the spatial dose distribution in the presence of tissue inhomogeneity and surface curvature - situations challenging for simpler pencil-beam algorithms. Conclusion: The electron Monte Carlo algorithm in RayStation v4.0 is fully validated for clinical use with the Elekta Agility™ machine. The comprehensive validation included small fields, complex fields, oblique beams, extended distance, tissue inhomogeneity and surface curvature.

  8. SU-F-BRD-15: The Impact of Dose Calculation Algorithm and Hounsfield Units Conversion Tables On Plan Dosimetry for Lung SBRT

    SciTech Connect

    Kuo, L; Yorke, E; Lim, S; Mechalakos, J; Rimner, A

    2014-06-15

    Purpose: To assess dosimetric differences in IMRT lung stereotactic body radiotherapy (SBRT) plans calculated with Varian AAA and Acuros (AXB) and with vendor-supplied (V) versus in-house (IH) measured Hounsfield units (HU) to mass and HU to electron density conversion tables. Methods: In-house conversion tables were measured using a Gammex 472 density-plug phantom. IMRT plans (6 MV, Varian TrueBeam, 6–9 coplanar fields) meeting departmental coverage and normal tissue constraints were retrospectively generated for 10 lung SBRT cases using Eclipse Vn 10.0.28 AAA with the in-house tables (AAA/IH). Using these monitor units and MLC sequences, plans were recalculated with AAA and vendor tables (AAA/V) and with AXB with both tables (AXB/IH and AXB/V). Ratios to corresponding AAA/IH values were calculated for PTV D95, D01, D99, mean dose, total and ipsilateral lung V20 and chestwall V30. Statistical significance of differences was judged by the Wilcoxon Signed Rank Test (p<0.05). Results: For HU < −400, the vendor HU-to-mass-density table was notably below the IH table. PTV D95 ratios to AAA/IH, averaged over all patients, are 0.963±0.073 (p=0.508), 0.914±0.126 (p=0.011), and 0.998±0.001 (p=0.005) for AXB/IH, AXB/V and AAA/V respectively. Total lung V20 ratios are 1.006±0.046 (p=0.386), 0.975±0.080 (p=0.514) and 0.998±0.002 (p=0.007); ipsilateral lung V20 ratios are 1.008±0.041 (p=0.284), 0.977±0.076 (p=0.443), and 0.998±0.018 (p=0.005) for AXB/IH, AXB/V and AAA/V respectively. In 7 cases, ratios to AAA/IH were within ±5% for all indices studied. For 3 cases characterized by very low lung density and small PTV (19.99±8.09 c.c.), the PTV D95 ratio for AXB/V ranged from 67.4% to 85.9% and the AXB/IH D95 ratio ranged from 81.6% to 93.4%; there were large differences in the other studied indices. Conclusion: For AXB users, careful attention to HU conversion tables is important, as they can significantly impact AXB (but not AAA) lung SBRT plans. Algorithm selection is also important for

  9. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  10. A gradient Markov chain Monte Carlo algorithm for computing multivariate maximum likelihood estimates and posterior distributions: mixture dose-response assessment.

    PubMed

    Li, Ruochen; Englehardt, James D; Li, Xiaoguang

    2012-02-01

    Multivariate probability distributions, such as may be used for mixture dose-response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose-response biomarker and genetic information. In this article, a new two-stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn-in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose-response function (DRF). Results are shown for the five-parameter common-mode and seven-parameter dissimilar-mode models, based on published data for eight benzene-toluene dose pairs. The common mode conditional DRF is obtained with a 21-fold reduction in data requirement versus MCMC. Example common-mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126-PCB 153 mixture. Applicability is analyzed and discussed. Matlab(®) computer programs are provided.
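
    A toy sketch of the two-stage scheme described above: a gradient-based optimizer finds the posterior mode, and a Metropolis chain started at the mode replaces the burn-in. The log-posterior here is a placeholder linear model, not the emergent dose-response function.

```python
import numpy as np
from scipy import optimize

def log_post(theta, x, y):
    """Placeholder Gaussian likelihood with a weak Gaussian prior."""
    mu = theta[0] + theta[1] * x
    return -0.5 * np.sum((y - mu)**2) - 0.5 * np.sum(theta**2) / 100.0

def fit(x, y, n_samples=5000, step=0.05, seed=0):
    # Stage 1: posterior mode estimate (equals the MLE under a flat prior).
    mode = optimize.minimize(lambda t: -log_post(t, x, y), np.zeros(2)).x
    # Stage 2: Metropolis sampling initialized at the mode (no burn-in).
    rng = np.random.default_rng(seed)
    theta, lp, chain = mode.copy(), log_post(mode, x, y), []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop, x, y)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return mode, np.array(chain)
```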

  11. Stereotactic Body Radiotherapy for Primary Lung Cancer at a Dose of 50 Gy Total in Five Fractions to the Periphery of the Planning Target Volume Calculated Using a Superposition Algorithm

    SciTech Connect

    Takeda, Atsuya; Sanuki, Naoko; Kunieda, Etsuo Ohashi, Toshio; Oku, Yohei; Takeda, Toshiaki; Shigematsu, Naoyuki; Kubo, Atsushi

    2009-02-01

    Purpose: To retrospectively analyze the clinical outcomes of stereotactic body radiotherapy (SBRT) for patients with Stages 1A and 1B non-small-cell lung cancer. Methods and Materials: We reviewed the records of patients with non-small-cell lung cancer treated with curative intent between December 2001 and May 2007. All patients had histopathologically or cytologically confirmed disease, increased levels of tumor markers, and/or positive findings on fluorodeoxyglucose positron emission tomography. Staging studies identified their disease as Stage 1A or 1B. Performance status was 2 or less according to World Health Organization guidelines in all cases. The prescribed dose of 50 Gy total in five fractions, calculated by using a superposition algorithm, was defined for the periphery of the planning target volume. Results: One hundred twenty-one patients underwent SBRT during the study period, and 63 were eligible for this analysis. Thirty-eight patients had Stage 1A (T1N0M0) and 25 had Stage 1B (T2N0M0). Forty-nine patients were not appropriate candidates for surgery because of chronic pulmonary disease. Median follow-up of these 49 patients was 31 months (range, 10-72 months). The 3-year local control, disease-free, and overall survival rates in patients with Stages 1A and 1B were 93% and 96% (p = 0.86), 76% and 77% (p = 0.83), and 90% and 63% (p = 0.09), respectively. No acute toxicity was observed. Grade 2 or higher radiation pneumonitis was experienced by 3 patients, and 1 of them had fatal bacterial pneumonia. Conclusions: SBRT at 50 Gy total in five fractions to the periphery of the planning target volume, calculated using a superposition algorithm, is feasible. High local control rates were achieved for both T2 and T1 tumors.

  12. MO-A-BRD-09: A Data-Mining Algorithm for Large Scale Analysis of Dose-Outcome Relationships in a Database of Irradiated Head-And-Neck (HN) Cancer Patients

    SciTech Connect

    Robertson, SP; Quon, H; Kiess, AP; Moore, JA; Yang, W; Cheng, Z; Sharabi, A; McNutt, TR

    2014-06-15

    Purpose: To develop a framework for automatic extraction of clinically meaningful dosimetric-outcome relationships from an in-house, analytic oncology database. Methods: Dose-volume histograms (DVH) and clinical outcome-related structured data elements have been routinely stored to our database for 513 HN cancer patients treated from 2007 to 2014. SQL queries were developed to extract outcomes that had been assessed for at least 100 patients, as well as DVH curves for organs-at-risk (OAR) that were contoured for at least 100 patients. DVH curves for paired OAR (e.g., left and right parotids) were automatically combined and included as additional structures for analysis. For each OAR-outcome combination, DVH dose points, D(V{sub t}), at a series of normalized volume thresholds, V{sub t}=[0.01,0.99], were stratified into two groups based on outcomes after treatment completion. The probability, P[D(V{sub t})], of an outcome was modeled at each V{sub t} by logistic regression. Notable combinations, defined as having P[D(V{sub t})] increase by at least 5% per Gy (p<0.05), were further evaluated for clinical relevance using a custom graphical interface. Results: A total of 57 individual and combined structures and 115 outcomes were queried, resulting in over 6,500 combinations for analysis. Of these, 528 combinations met the 5%/Gy requirement, with further manual inspection revealing a number of reasonable models based on either reported literature or proximity between neighboring OAR. The data mining algorithm confirmed the following well-known toxicity/outcome relationships: dysphagia/larynx, voice changes/larynx, esophagitis/esophagus, xerostomia/combined parotids, and mucositis/oral mucosa. Other notable relationships included dysphagia/pharyngeal constrictors, nausea/brainstem, nausea/spinal cord, weight-loss/mandible, and weight-loss/combined parotids. Conclusion: Our database platform has enabled large-scale analysis of dose-outcome relationships. The current data
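
    A condensed sketch of the screening step described above for one OAR-outcome pair: it fits a univariate logistic model of outcome against D(Vt) at each volume threshold and flags thresholds where the marginal risk rises by at least 5% per Gy. Field names are hypothetical, the marginal-risk approximation is mine, and the p < 0.05 significance test is omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def screen_dvh(dose_at_vt, outcome, vt_grid):
    """dose_at_vt: (n_patients, n_thresholds) array of D(Vt) in Gy;
    outcome: binary array; returns thresholds exceeding 5% risk per Gy."""
    flagged = []
    for j, vt in enumerate(vt_grid):
        X = dose_at_vt[:, j:j + 1]
        model = LogisticRegression().fit(X, outcome)
        slope = model.coef_[0, 0]                    # logit change per Gy
        p = model.predict_proba(X)[:, 1].mean()
        risk_per_gy = slope * p * (1.0 - p)          # approximate dP/dGy
        if risk_per_gy >= 0.05:
            flagged.append((vt, risk_per_gy))
    return flagged
```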

  13. Electron beam dose calculations.

    PubMed

    Hogstrom, K R; Mills, M D; Almond, P R

    1981-05-01

    Electron beam dose distributions in the presence of inhomogeneous tissue are calculated by an algorithm that sums the dose distribution of individual pencil beams. The off-axis dependence of the pencil beam dose distribution is described by the Fermi-Eyges theory of thick-target multiple Coulomb scattering. Measured square-field depth-dose data serve as input for the calculations. Air gap corrections are incorporated and use data from 'in-air' measurements in the penumbra of the beam. The effective depth, used to evaluate depth-dose, and the sigma of the off-axis Gaussian spread against depth are calculated by recursion relations from a CT data matrix for the material underlying individual pencil beams. The correlation of CT number with relative linear stopping power and relative linear scattering power for various tissues is shown. The results of calculations are verified by comparison with measurements in a 17 MeV electron beam from the Therac 20 linear accelerator. Calculated isodose lines agree nominally to within 2 mm of measurements in a water phantom. Similar agreement is observed in cork slabs simulating lung. Calculations beneath a bone substitute illustrate a weakness in the calculation. Finally, a case of carcinoma in the maxillary antrum is studied. The theory suggests an alternative method for the calculation of depth-dose of rectangular fields.
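
    The depth-dependent Gaussian spread at the heart of the Fermi-Eyges description can be sketched as below: sigma squared at depth z accumulates the linear scattering power T(z') of the material actually traversed, weighted by (z - z')^2. The direct trapezoidal accumulation and placeholder scattering powers are illustrative; they stand in for the paper's CT-driven recursion relations.

```python
import numpy as np

def sigma_squared(scatter_power, z_grid):
    """Fermi-Eyges lateral variance: sigma^2(z) = int_0^z T(z') (z - z')^2 dz'."""
    s2 = np.empty_like(z_grid, dtype=float)
    for i, z in enumerate(z_grid):
        s2[i] = np.trapz(scatter_power[:i + 1] * (z - z_grid[:i + 1])**2,
                         z_grid[:i + 1])
    return np.maximum(s2, 1e-9)               # avoid a zero width at the surface

def pencil_dose(x_cm, depth_index, central_axis_dose, s2):
    """One pencil beam: central-axis depth dose spread by a Gaussian profile."""
    s = s2[depth_index]
    return (central_axis_dose[depth_index]
            * np.exp(-x_cm**2 / (2 * s)) / np.sqrt(2 * np.pi * s))
```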

  14. A dose error evaluation study for 4D dose calculations

    NASA Astrophysics Data System (ADS)

    Milz, Stefan; Wilkens, Jan J.; Ullrich, Wolfgang

    2014-10-01

    Previous studies have shown that respiration-induced motion is not negligible for Stereotactic Body Radiation Therapy. The intrafractional breathing-induced motion influences the delivered dose distribution on the underlying patient geometry such as the lung or the abdomen. If a static geometry is used, a planning process for these indications does not represent the entire dynamic process. The quality of a full 4D dose calculation approach depends on the dose coordinate transformation process between deformable geometries. This article provides an evaluation study that introduces an advanced method to verify the quality of numerical dose transformation generated by four different algorithms. The transformation metric used is based on the deviation of the dose mass histogram (DMH) and the mean dose throughout dose transformation. The study compares the results of four algorithms. In general, two elementary approaches are used: dose mapping and energy transformation. Dose interpolation (DIM) and an advanced concept, the so-called divergent dose mapping model (dDMM), are used for dose mapping. The algorithms are compared to the basic energy transformation model (bETM) and the energy mass congruent mapping (EMCM). For evaluation, 900 small sample regions of interest (ROI) are generated inside an exemplary lung geometry (4DCT). A homogeneous fluence distribution is assumed for dose calculation inside the ROIs. The dose transformations are performed with the four different algorithms. The study investigates the DMH-metric and the mean dose metric for different scenarios (voxel sizes: 8 mm, 4 mm, 2 mm, 1 mm; 9 different breathing phases). dDMM achieves the best transformation accuracy in all measured test cases, with 3-5% lower errors than the other models. The results of dDMM are reasonable and most efficient in this study, although the model is simple and easy to implement. The EMCM model also achieved suitable results, but the approach requires a more complex

  15. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
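
    A minimal sketch of the mechanics named above (fitness-proportional selection, one-point crossover, bit-flip mutation) on a toy one-max problem; all parameters are arbitrary choices, not from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(fitness, n_bits=20, pop_size=50, n_gen=100, p_mut=0.01):
    pop = rng.integers(0, 2, (pop_size, n_bits))
    for _ in range(n_gen):
        f = np.array([fitness(ind) for ind in pop], dtype=float)
        parents = pop[rng.choice(pop_size, pop_size, p=f / f.sum())]  # selection
        for i in range(0, pop_size - 1, 2):            # one-point crossover
            c = rng.integers(1, n_bits)
            tail = parents[i, c:].copy()
            parents[i, c:] = parents[i + 1, c:]
            parents[i + 1, c:] = tail
        mask = rng.random(parents.shape) < p_mut       # bit-flip mutation
        pop = np.where(mask, 1 - parents, parents)
    return max(pop, key=fitness)

best = evolve(lambda ind: ind.sum() + 1.0)             # one-max toy fitness
```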

  16. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  17. Dose reconstruction for intensity-modulated radiation therapy using a non-iterative method and portal dose image

    NASA Astrophysics Data System (ADS)

    Yeo, Inhwan Jason; Jung, Jae Won; Chew, Meng; Kim, Jong Oh; Wang, Brian; Di Biase, Steven; Zhu, Yunping; Lee, Dohyung

    2009-09-01

    A straightforward and accurate method was developed to verify the delivery of intensity-modulated radiation therapy (IMRT) and to reconstruct the dose in a patient. The method is based on a computational algorithm that linearly describes the physical relationship between beamlets and dose-scoring voxels in a patient and the dose image from an electronic portal imaging device (EPID). The relationship is expressed in the form of dose response functions (responses) that are quantified using Monte Carlo (MC) particle transport techniques. From the dose information measured by the EPID, the received patient dose is reconstructed by inversely solving the algorithm. The unique and novel non-iterative feature of this algorithm sets it apart from many existing dose reconstruction methods in the literature. This study presents the algorithm in detail and validates it experimentally for open and IMRT fields. Responses were first calculated for each beamlet of the selected fields by MC simulation. In-phantom and exit film dosimetry were performed on a flat phantom. Using the calculated responses and the algorithm, the exit film dose was used to inversely reconstruct the in-phantom dose, which was then compared with the measured in-phantom dose. The dose comparison in the phantom for all irradiated fields showed that more than 90% of dose points passed, given criteria of 3% dose difference and 3 mm distance to agreement.
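
    Because the model is linear, the reconstruction reduces to one linear solve rather than iteration. A toy sketch with random stand-ins for the Monte Carlo response matrices (in the real method these map beamlet weight to EPID pixel dose and to patient voxel dose):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_beamlets, n_voxels = 400, 60, 1000
R = rng.random((n_pixels, n_beamlets))     # MC responses: beamlet -> EPID pixel
P = rng.random((n_voxels, n_beamlets))     # MC responses: beamlet -> patient voxel

w_true = rng.random(n_beamlets)            # delivered beamlet weights (unknown)
d_epid = R @ w_true                        # measured portal dose image (simulated)

w_hat, *_ = np.linalg.lstsq(R, d_epid, rcond=None)   # single non-iterative solve
patient_dose = P @ w_hat                   # reconstructed in-patient dose
```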

  18. A Simple Low-dose X-ray CT Simulation from High-dose Scan.

    PubMed

    Zeng, Dong; Huang, Jing; Bian, Zhaoying; Niu, Shanzhou; Zhang, Hua; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua

    2015-10-01

    Low-dose X-ray computed tomography (CT) simulation from a high-dose scan is required in optimizing radiation dose to patients. In this study, we propose a simple low-dose CT simulation strategy in the sinogram domain using the raw data from a high-dose scan. Specifically, a relationship between the incident fluxes of low- and high-dose scans is first determined from repeated projection measurements and analysis. Second, the incident flux level of the simulated low-dose scan is generated by properly scaling the incident flux level of the high-dose scan via the relationship determined in the first step. Third, the low-dose CT transmission data from energy-integrating detection are simulated by adding a statistically independent Poisson noise distribution plus a statistically independent Gaussian noise distribution. Finally, a filtered back-projection (FBP) algorithm is implemented to reconstruct the resultant low-dose CT images. The present low-dose simulation strategy is verified on simulations and real scans by comparing it with an existing low-dose CT simulation tool. Experimental results demonstrated that the present low-dose CT simulation strategy can generate accurate low-dose CT sinogram data from a high-dose scan in terms of qualitative and quantitative measurements.
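
    A sketch of the three steps on a line-integral sinogram, with illustrative flux and electronic-noise parameters rather than the calibrated values from the paper:

```python
import numpy as np

def simulate_low_dose(hd_sinogram, i0_high=2e5, dose_fraction=0.25, sigma_e=10.0):
    """hd_sinogram: high-dose line integrals; returns low-dose line integrals."""
    rng = np.random.default_rng(0)
    i0_low = dose_fraction * i0_high                 # step 2: scale incident flux
    expected = i0_low * np.exp(-hd_sinogram)         # mean transmitted photon counts
    noisy = (rng.poisson(expected)                   # step 3: Poisson photon noise
             + rng.normal(0.0, sigma_e, expected.shape))  # plus Gaussian electronic noise
    noisy = np.clip(noisy, 1.0, None)                # guard the logarithm
    return -np.log(noisy / i0_low)                   # ready for FBP reconstruction
```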

  19. Radiotherapy dose calculations in the presence of hip prostheses

    SciTech Connect

    Keall, Paul J.; Siebers, Jeffrey V.; Jeraj, Robert; Mohan, Radhe

    2003-06-30

    The high density and atomic number of hip prostheses for patients undergoing pelvic radiotherapy challenge our ability to accurately calculate dose. A new clinical dose calculation algorithm, Monte Carlo, will allow accurate calculation of the radiation transport both within and beyond hip prostheses. The aim of this research was to investigate, for both phantom and patient geometries, the capability of various dose calculation algorithms to yield accurate treatment plans. Dose distributions in phantom and patient geometries with high atomic number prostheses were calculated using Monte Carlo, superposition, pencil beam, and no-heterogeneity correction algorithms. The phantom dose distributions were analyzed by depth dose and dose profile curves. The patient dose distributions were analyzed by isodose curves, dose-volume histograms (DVHs) and tumor control probability/normal tissue complication probability (TCP/NTCP) calculations. Monte Carlo calculations predicted the dose enhancement and reduction at the proximal and distal prosthesis interfaces respectively, whereas superposition and pencil beam calculations did not. However, further from the prosthesis, the differences between the dose calculation algorithms diminished. Treatment plans calculated with superposition showed similar isodose curves, DVHs, and TCP/NTCP as the Monte Carlo plans, except in the bladder, where Monte Carlo predicted a slightly lower dose. Treatment plans calculated with either the pencil beam method or with no heterogeneity correction differed significantly from the Monte Carlo plans.

  20. Calculation of the biological effective dose for piecewise defined dose-rate fits

    SciTech Connect

    Hobbs, Robert F.; Sgouros, George

    2009-03-15

    An algorithmic solution to the biological effective dose (BED) calculation from the Lea-Catcheside formula for a piecewise defined function is presented. Data from patients treated for metastatic thyroid cancer were used to illustrate the solution. The Lea-Catcheside formula for the G-factor of the BED is integrated numerically using a large number of small trapezoidal fits to each integral. The algorithmically calculated BED agrees with an analytic calculation for a similarly valued exponentially fitted dose-rate plot and is the only solution presented for piecewise defined dose-rate functions.
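
    A numerical sketch of that calculation, accumulating the Lea-Catcheside G-factor with trapezoids over a sampled dose-rate curve Rdot(t) and then forming BED = D(1 + G*D/(alpha/beta)); the sampling density and parameter names are assumptions.

```python
import numpy as np

def g_factor(t, rdot, mu):
    """G = (2/D^2) * int rdot(t) * [int_0^t rdot(s) e^{-mu(t-s)} ds] dt,
    with mu the sublethal-damage repair rate constant."""
    D = np.trapz(rdot, t)
    inner = np.array([np.trapz(rdot[:i + 1] * np.exp(-mu * (t[i] - t[:i + 1])),
                               t[:i + 1]) for i in range(len(t))])
    return 2.0 * np.trapz(rdot * inner, t) / D**2

def bed(t, rdot, alpha_beta, mu):
    """Biological effective dose for a piecewise-sampled dose-rate curve."""
    D = np.trapz(rdot, t)
    return D * (1.0 + g_factor(t, rdot, mu) * D / alpha_beta)
```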

  1. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic and simulated annealing, have excessive run times and are too complex to be practical.

  2. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O`Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  3. Low-dose computed tomography image restoration using previous normal-dose scan

    SciTech Connect

    Ma, Jianhua; Huang, Jing; Feng, Qianjin; Zhang, Hua; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2011-10-15

    Purpose: In current computed tomography (CT) examinations, the associated x-ray radiation dose is of significant concern to patients and operators. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) or kVp parameter (that is, to deliver less x-ray energy to the body) to as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise, and the noise will propagate into the CT image if no adequate noise control is applied during image reconstruction. Since a previously scanned normal-dose, high-quality diagnostic CT image may be available in some clinical applications, such as CT perfusion imaging and CT angiography (CTA), this paper presents an innovative way to utilize the normal-dose scan as a priori information to induce signal restoration of the current low-dose CT image series. Methods: Unlike conventional local operations on neighboring image voxels, the nonlocal means (NLM) algorithm utilizes the redundancy of information across the whole image. This paper adapts the NLM to utilize the redundancy of information in the previous normal-dose scan and further exploits ways to optimize the nonlocal weights for low-dose image restoration in the NLM framework. The resulting algorithm is called the previous normal-dose scan induced nonlocal means (ndiNLM). Because of the optimized nature of the nonlocal weights calculation, the ndiNLM algorithm does not depend heavily on image registration between the current low-dose and the previous normal-dose CT scans. Furthermore, the smoothing parameter involved in the ndiNLM algorithm can be adaptively estimated based on the image noise relationship between the current low-dose and the previous normal-dose scanning protocols. Results: Qualitative and quantitative evaluations were carried out on a physical phantom as well as clinical abdominal and brain perfusion CT scans in terms of accuracy and resolution properties. The gain by the use
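
    A toy 1D sketch of the prior-induced idea: patch similarity weights are computed on the previous normal-dose signal and then used to average the current low-dose signal, so registration only needs to be approximate. The patch and search sizes and the smoothing parameter h are illustrative, not the paper's adaptively estimated values.

```python
import numpy as np

def ndi_nlm_1d(low, prior, half_patch=2, half_search=10, h=0.05):
    """Nonlocal means on `low`, with weights taken from the normal-dose `prior`."""
    n, out = len(low), low.astype(float).copy()
    for i in range(half_patch, n - half_patch):
        ref = prior[i - half_patch:i + half_patch + 1]
        lo = max(half_patch, i - half_search)
        hi = min(n - half_patch, i + half_search + 1)
        idx = np.arange(lo, hi)
        d2 = np.array([np.mean((prior[j - half_patch:j + half_patch + 1] - ref)**2)
                       for j in idx])
        w = np.exp(-d2 / h**2)                  # similarity from the prior scan
        out[i] = np.dot(w, low[idx]) / w.sum()  # restore the low-dose value
    return out
```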

  4. Optimization of the double dosimetry algorithm for interventional cardiologists

    NASA Astrophysics Data System (ADS)

    Chumak, Vadim; Morgun, Artem; Bakhanova, Elena; Voloskiy, Vitalii; Borodynchik, Elena

    2014-11-01

    A double dosimetry method is recommended in interventional cardiology (IC) to assess occupational exposure, yet there is currently no common and universal algorithm for effective dose estimation. In this work, a flexible and adaptive algorithm-building methodology was developed, and a specific algorithm applicable to the typical irradiation conditions of IC procedures was obtained. It was shown that the obtained algorithm agrees well with experimental measurements and is less conservative compared to other known algorithms.
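
    Such algorithms generally take the linear form E ≈ a·Hp(10)_under + b·Hp(10)_over, combining the under-apron and over-apron (collar) dosimeter readings. The coefficients below are placeholders for illustration only; the paper's point is precisely that a and b should be fitted to the local irradiation conditions rather than taken as universal.

```python
def effective_dose_msv(h_under_msv, h_over_msv, a=0.5, b=0.025):
    """Generic two-dosimeter estimate; a and b are placeholder coefficients."""
    return a * h_under_msv + b * h_over_msv
```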

  5. A comparison of three inverse treatment planning algorithms.

    PubMed

    Holmes, T; Mackie, T R

    1994-01-01

    Three published inverse treatment planning algorithms for physical optimization of external beam radiotherapy are compared. All three algorithms attempt to minimize a quadratic objective function of the dose distribution. It is shown that the algorithms are based on the common framework of Newton's method of multi-dimensional function minimization. The approximations used within this framework to obtain the different algorithms are described. The use of these algorithms requires that the number of weights of elemental dose distributions be equal to the number of sample points taken in the dose volume. The primary factor in determining how the algorithms are implemented is the dose computation model. Two of the algorithms use pencil beam dose models and therefore directly optimize individual pencil beam weights, whereas the third algorithm is implemented to optimize groups of pencil beams, each group converging upon a common point. All dose computation models assume that the irradiated medium is homogeneous. It is shown that the two different implementations produce similar results for the simple optimization problem of conforming dose to a convex target shape. Complex optimization problems consisting of non-convex target shapes and dose limiting structures are shown to require a pencil beam optimization method.
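
    The shared framework can be sketched in a few lines: for the quadratic objective f(w) = ||A w - d||^2, where A maps the weights of elemental dose distributions to dose at the sample points, the Newton update is as below. The compared algorithms differ mainly in how they approximate the Hessian and in what the weights index (individual pencil beams versus groups converging on a common point).

```python
import numpy as np

def newton_step(A, w, d_prescribed, ridge=1e-8):
    """One Newton update for f(w) = ||A w - d||^2 (exact for a quadratic)."""
    grad = 2.0 * A.T @ (A @ w - d_prescribed)
    hess = 2.0 * A.T @ A + ridge * np.eye(len(w))   # small ridge for stability
    return w - np.linalg.solve(hess, grad)

# With the exact Hessian a single step reaches the unconstrained minimum;
# constraints such as non-negative beam weights are what force iteration.
```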

  6. Acoustic dose and acoustic dose-rate.

    PubMed

    Duck, Francis

    2009-10-01

    Acoustic dose is defined as the energy deposited by absorption of an acoustic wave per unit mass of the medium supporting the wave. Expressions for acoustic dose and acoustic dose-rate are given for plane-wave conditions, including temporal and frequency dependencies of energy deposition. The relationship between the acoustic dose-rate and the resulting temperature increase is explored, as is the relationship between acoustic dose-rate and radiation force. Energy transfer from the wave to the medium by means of acoustic cavitation is considered, and an approach is proposed in principle that could allow cavitation to be included within the proposed definitions of acoustic dose and acoustic dose-rate.
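
    For the plane-wave case referred to above, a common form of these expressions is that the absorbed power per unit volume is 2·alpha·I, so the dose-rate (per unit mass) is 2·alpha·I/rho, and the initial heating rate follows by dividing by the specific heat capacity. A numeric sketch with typical soft-tissue values, which are assumptions rather than numbers from the paper:

```python
alpha = 5.0        # amplitude absorption coefficient, Np/m (soft tissue, few MHz)
intensity = 1.0e3  # plane-wave intensity I, W/m^2 (= 100 mW/cm^2)
rho = 1050.0       # tissue density, kg/m^3
c_heat = 3600.0    # specific heat capacity, J/(kg*K)

dose_rate = 2.0 * alpha * intensity / rho   # acoustic dose-rate, W/kg
heating_rate = dose_rate / c_heat           # initial temperature rise, K/s
```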

  7. SU-E-T-280: Reconstructed Rectal Wall Dose Map-Based Verification of Rectal Dose Sparing Effect According to Rectum Definition Methods and Dose Perturbation by Air Cavity in Endo-Rectal Balloon

    SciTech Connect

    Park, J; Park, H; Lee, J; Kang, S; Lee, M; Suh, T; Lee, B

    2014-06-01

    Purpose: The dosimetric effects and discrepancies associated with different rectum definition methods, and the dose perturbation caused by the air cavity in an endo-rectal balloon (ERB), were verified using rectal-wall (Rwall) dose maps, considering systematic errors in dose optimization and calculation accuracy in intensity-modulated radiation treatment (IMRT) for prostate cancer patients. Methods: For an inflated ERB with an average diameter of 4.5 cm and an air volume of 100 cc, Rwall doses were predicted by pencil-beam convolution (PBC), the anisotropic analytical algorithm (AAA), and Acuros XB (AXB) with its material assignment function. The errors in dose optimization and calculation introduced by separating the air cavity from the whole rectum (Rwhole) were verified against measured rectal doses. The Rwall doses affected by the dose perturbation of the air cavity were evaluated using a purpose-built rectal phantom allowing insertion of rolled-up Gafchromic films and glass rod detectors placed along the rectum perimeter. Inner and outer Rwall doses were verified against reconstructed predicted rectal wall dose maps. Dose errors, and their extent at each dose level, were evaluated with respect to estimated rectal toxicity. Results: While AXB showed an insignificant difference in target dose coverage, optimizing on Rwhole rather than Rwall underestimated Rwall doses by up to 20% over the whole dose range except for the maximum dose. When dose optimization was performed on Rwall, the Rwall doses agreed to within 3% between dose calculation algorithms, except for an overestimation of the maximum rectal dose of up to 5% with PBC. Dose optimization on Rwhole caused Rwall dose differences, especially at intermediate dose levels. Conclusion: Dose optimization on Rwall is suggested for more accurate prediction of the rectal wall dose and of the dose perturbation effect of the air cavity in IMRT for prostate cancer. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea

  8. Performance Evaluation of Algorithms in Lung IMRT: A comparison of Monte Carlo, Pencil Beam, Superposition, Fast Superposition and Convolution Algorithms

    PubMed Central

    Verma, T.; Painuly, N.K.; Mishra, S.P.; Shajahan, M.; Singh, N.; Bhatt, M.L.B.; Jamal, N.; Pant, M.C.

    2016-01-01

    Background: The inclusion of inhomogeneity corrections in intensity-modulated small fields always makes conformal irradiation of lung tumors very complicated for accurate dose delivery. Objective: In the present study, the performance of five algorithms (Monte Carlo, Pencil Beam, Convolution, Fast Superposition and Superposition) was evaluated in lung cancer intensity-modulated radiotherapy planning. Materials and Methods: Treatment plans for ten lung cancer patients previously planned with the Monte Carlo algorithm were re-planned using the same treatment planning indices (gantry angle, rank, power, etc.) in the other four algorithms. Results: The values of the radiotherapy planning parameters were recorded for all ten patients: mean dose, volume of the 95% isodose line, conformity index and homogeneity index for the target; maximum dose, mean dose, and % volume receiving 20 Gy or more for the contralateral lung; % volume receiving 30 Gy or more, % volume receiving 25 Gy or more, and mean dose for the heart; % volume receiving 35 Gy or more, % volume receiving 50 Gy or more, and mean dose for the esophagus; % volume receiving 45 Gy or more and maximum dose for the spinal cord; and total monitor units and volume of the 50% isodose line. Performance of the different algorithms was also evaluated statistically. Conclusion: The MC and PB algorithms were found to be better as far as tumor coverage, dose distribution homogeneity in the Planning Target Volume, and minimal dose to organs at risk are concerned. The Superposition algorithm was found to be better than Convolution and Fast Superposition. In the case of centrally located tumors, the use of Monte Carlo algorithms is recommended for the optimal use of radiotherapy. PMID:27853720
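
    Two of the plan-quality indices tabulated above have one-line definitions, although conventions vary across the literature; the RTOG-style forms used in this sketch are an assumption, not necessarily the ones used by the authors.

        import numpy as np

        def conformity_index(dose, target_mask, d_presc):
            """RTOG-style CI: volume enclosed by the prescription isodose
            divided by the target volume (voxel counts stand in for volumes)."""
            piv = np.count_nonzero(dose >= d_presc)
            tv = np.count_nonzero(target_mask)
            return piv / tv

        def homogeneity_index(dose, target_mask, d_presc):
            """RTOG-style HI: maximum target dose over prescription dose."""
            return dose[target_mask].max() / d_presc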

  9. Absorbed Dose and Dose Equivalent Calculations for Modeling Effective Dose

    NASA Technical Reports Server (NTRS)

    Welton, Andrew; Lee, Kerry

    2010-01-01

    While in orbit, astronauts are exposed to a much higher dose of ionizing radiation than when on the ground. It is important to model pre-flight how shielding designs on spacecraft reduce the radiation effective dose, and to determine whether or not a danger to humans is presented. However, in order to calculate effective dose, dose equivalent calculations are needed. Dose equivalent takes into account the absorbed dose of radiation and the biological effectiveness of the ionizing radiation. This is important in preventing long-term, stochastic radiation effects in humans spending time in space. Monte Carlo simulations run with the particle transport code FLUKA give absorbed and equivalent dose data for relevant shielding. The shielding geometry used in the dose calculations is a layered slab design, consisting of aluminum, polyethylene, and water. Water is used to simulate the soft tissues that compose the human body. The results obtained will provide information on how the shielding performs with many thicknesses of each material in the slab. This allows the results to be directly applicable to modern spacecraft shielding geometries.
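
    The quantity chain used above can be written compactly in ICRP-style notation (my summary, not text from the abstract): the dose equivalent in a tissue weights the absorbed dose by radiation type, and the effective dose then weights over tissues,

        H_T = \sum_R w_R\, D_{T,R}, \qquad E = \sum_T w_T\, H_T

    where D_{T,R} is the mean absorbed dose in tissue T from radiation R, w_R the radiation weighting factor, and w_T the tissue weighting factor.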

  10. Direct dose mapping versus energy/mass transfer mapping for 4D dose accumulation: fundamental differences and dosimetric consequences

    NASA Astrophysics Data System (ADS)

    Li, Haisen S.; Zhong, Hualiang; Kim, Jinkoo; Glide-Hurst, Carri; Gulam, Misbah; Nurushev, Teamour S.; Chetty, Indrin J.

    2014-01-01

    The direct dose mapping (DDM) and energy/mass transfer (EMT) mapping are two essential algorithms for accumulating the dose from different anatomic phases to the reference phase when there is organ motion or tumor/tissue deformation during the delivery of radiation therapy. DDM is based on interpolation of the dose values from one dose grid to another and thus lacks rigor in defining the dose when multiple dose values are mapped to one dose voxel in the reference phase due to tissue/tumor deformation. On the other hand, EMT counts the total energy and mass transferred to each voxel in the reference phase and calculates the dose by dividing the energy by the mass. It is therefore based on fundamentally sound physics principles. In this study, we implemented the two algorithms and integrated them within the Eclipse treatment planning system. We then compared the clinical dosimetric difference between the two algorithms for ten lung cancer patients receiving stereotactic radiosurgery treatment, by accumulating the delivered dose to the end-of-exhale (EE) phase. Specifically, the respiratory period was divided into ten phases, and the dose to each phase was calculated, mapped to the EE phase, and then accumulated. The displacement vector field generated by Demons-based registration of the source and reference images was used to transfer the dose and energy. The DDM and EMT algorithms produced noticeably different cumulative doses in regions with sharp mass density variations and/or high dose gradients. For the minimum dose to the planning target volume (PTV) and the internal target volume (ITV), the difference was up to 11% and 4%, respectively. This suggests that DDM might not be adequate for obtaining an accurate dose distribution of the cumulative plan; instead, EMT should be considered.
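
    The physical bookkeeping behind EMT can be made concrete with a toy sketch (my simplification: nearest-voxel splatting on an integer displacement field, not the authors' Eclipse implementation):

        import numpy as np

        def emt_accumulate(dose_src, mass_src, dvf, shape_ref):
            """Energy/mass transfer mapping (toy version).

            dose_src, mass_src: dose (Gy) and mass (g) of each source-phase voxel
            dvf: integer voxel displacement into the reference grid, shape (N, 3);
                 assumed to keep every voxel inside the reference grid
            Returns dose on the reference grid: accumulated energy / accumulated mass.
            """
            energy = np.zeros(shape_ref)
            mass = np.zeros(shape_ref)
            for v, (d, m) in enumerate(zip(dose_src.ravel(), mass_src.ravel())):
                src = np.unravel_index(v, dose_src.shape)
                dst = tuple(np.add(src, dvf[v]))
                energy[dst] += d * m          # transferred energy = dose x mass
                mass[dst] += m                # transferred mass
            out = np.zeros(shape_ref)
            np.divide(energy, mass, out=out, where=mass > 0)
            return out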

  11. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  12. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
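
    A classic concrete example of such a guarantee (my illustration; it is not discussed in the abstract) is the greedy matching-based algorithm for minimum vertex cover, which always returns a cover at most twice the optimal size:

        def vertex_cover_2approx(edges):
            """Greedy maximal-matching 2-approximation for minimum vertex cover.

            Any cover must contain at least one endpoint of every matched edge,
            and the matched edges are disjoint, so the returned cover is at most
            twice the size of an optimal one.
            """
            cover = set()
            for u, v in edges:
                if u not in cover and v not in cover:
                    cover.update((u, v))   # take both endpoints of an uncovered edge
            return cover

        print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4), (4, 1)]))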

  13. Automated Gamma Knife dose planning

    NASA Astrophysics Data System (ADS)

    Leichtman, Gregg S.; Aita, Anthony L.; Goldman, H. W.

    1998-06-01

    The Gamma Knife (Elekta Instruments, Inc., Atlanta, GA), a neurosurgical, highly focused radiation delivery device, is used to eradicate deep-seated anomalous tissue within the human brain by delivering a lethal dose of radiation to target tissue. This dose is the accumulated result of delivering sequential 'shots' of radiation to the target, where each shot is approximately 3D Gaussian in shape. The size and intensity of each shot can be adjusted by varying the time of radiation exposure and by using one of four collimator sizes ranging from 4 to 18 mm. Current dose planning requires that the dose plan be developed manually to cover the target, and only the target, with a desired minimum radiation intensity using a minimum number of shots. This is a laborious and subjective process which typically leads to suboptimal conformal target coverage by the dose. We have used adaptive simulated annealing/quenching followed by Nelder-Mead simplex optimization to automate the selection and placement of Gaussian-based 'shots' to form a simulated dose plan. In order to make the computation of the problem tractable, the algorithm, based upon contouring and polygon clipping, takes a 2 1/2-D approach to defining the cost function. Several experiments have been performed in which the optimizers have been given the freedom to vary the number of shots and the weight, collimator size, and 3D location of each shot. To date, the best results have been obtained by forcing the optimizers to use a fixed number of unweighted shots, with each optimizer set free to vary the 3D location and collimator size of each shot. Our preliminary results indicate that this technology will radically decrease planning time while significantly increasing the accuracy of conformal target coverage and reproducibility over current manual methods.
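
    The two-stage optimization strategy (global annealing followed by simplex refinement) can be sketched on a toy 2D cost function; scipy's dual_annealing stands in for adaptive simulated annealing/quenching, and the cost function, shot model, and parameters are all hypothetical.

        import numpy as np
        from scipy.optimize import dual_annealing, minimize

        # Toy objective: how badly two fixed-size Gaussian "shots" cover a
        # disk-shaped target (penalizing dose spilled outside the disk).
        yy, xx = np.mgrid[-20:20, -20:20]
        target = (xx**2 + yy**2) <= 12**2

        def cost(p):
            x1, y1, x2, y2 = p
            dose = (np.exp(-((xx - x1)**2 + (yy - y1)**2) / 50.0)
                    + np.exp(-((xx - x2)**2 + (yy - y2)**2) / 50.0))
            covered = dose >= 0.5                      # 50% isodose
            miss = np.count_nonzero(target & ~covered)
            spill = np.count_nonzero(~target & covered)
            return miss + 2 * spill

        bounds = [(-20, 19)] * 4
        coarse = dual_annealing(cost, bounds, seed=1)           # global search
        fine = minimize(cost, coarse.x, method="Nelder-Mead")   # local refinement
        print(fine.x, fine.fun)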

  14. Estimation of the Warfarin Dose with Clinical and Pharmacogenetic Data

    PubMed Central

    2009-01-01

    BACKGROUND Genetic variability among patients plays an important role in determining the dose of warfarin that should be used when oral anticoagulation is initiated, but practical methods of using genetic information have not been evaluated in a diverse and large population. We developed and used an algorithm for estimating the appropriate warfarin dose that is based on both clinical and genetic data from a broad population base. METHODS Clinical and genetic data from 4043 patients were used to create a dose algorithm that was based on clinical variables only and an algorithm in which genetic information was added to the clinical variables. In a validation cohort of 1009 subjects, we evaluated the potential clinical value of each algorithm by calculating the percentage of patients whose predicted dose of warfarin was within 20% of the actual stable therapeutic dose; we also evaluated other clinically relevant indicators. RESULTS In the validation cohort, the pharmacogenetic algorithm accurately identified larger proportions of patients who required 21 mg of warfarin or less per week and of those who required 49 mg or more per week to achieve the target international normalized ratio than did the clinical algorithm (49.4% vs. 33.3%, P<0.001, among patients requiring ≤21 mg per week; and 24.8% vs. 7.2%, P<0.001, among those requiring ≥49 mg per week). CONCLUSIONS The use of a pharmacogenetic algorithm for estimating the appropriate initial dose of warfarin produces recommendations that are significantly closer to the required stable therapeutic dose than those derived from a clinical algorithm or a fixed-dose approach. The greatest benefits were observed in the 46.2% of the population that required 21 mg or less of warfarin per week or 49 mg or more per week for therapeutic anticoagulation. PMID:19228618
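
    Algorithms of this type are typically linear regressions on the square root of the weekly dose, combining clinical covariates with genotype indicators. The sketch below shows only that general shape; every coefficient is a made-up placeholder, not a value from the published algorithm.

        def predict_weekly_warfarin_mg(age_decades, height_cm, weight_kg,
                                       vkorc1_ag, vkorc1_aa, cyp2c9_13, cyp2c9_33,
                                       enzyme_inducer, amiodarone):
            """Shape of a pharmacogenetic dosing model: linear regression on the
            square root of the weekly dose. ALL coefficients are hypothetical
            placeholders. Genotype arguments are 0/1 indicators."""
            s = (5.6
                 - 0.26 * age_decades
                 + 0.009 * height_cm
                 + 0.013 * weight_kg
                 - 0.9 * vkorc1_ag - 1.6 * vkorc1_aa     # VKORC1 variant alleles
                 - 0.5 * cyp2c9_13 - 1.0 * cyp2c9_33     # reduced-function CYP2C9
                 + 1.1 * enzyme_inducer
                 - 0.6 * amiodarone)
            return s * s   # undo the square-root transform

        print(predict_weekly_warfarin_mg(6, 170, 75, 1, 0, 0, 0, 0, 0))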

  15. SCCT guidelines on radiation dose and dose-optimization strategies in cardiovascular CT.

    PubMed

    Halliburton, Sandra S; Abbara, Suhny; Chen, Marcus Y; Gentry, Ralph; Mahesh, Mahadevappa; Raff, Gilbert L; Shaw, Leslee J; Hausleiter, Jörg

    2011-01-01

    Over the last few years, computed tomography (CT) has developed into a standard clinical test for a variety of cardiovascular conditions. The emergence of cardiovascular CT during a period of dramatic increase in radiation exposure to the population from medical procedures and heightened concern about the subsequent potential cancer risk has led to intense scrutiny of the radiation burden of this new technique. This has hastened the development and implementation of dose reduction tools and prompted closer monitoring of patient dose. In an effort to aid the cardiovascular CT community in incorporating patient-centered radiation dose optimization and monitoring strategies into standard practice, the Society of Cardiovascular Computed Tomography has produced a guideline document to review available data and provide recommendations regarding interpretation of radiation dose indices and predictors of risk, appropriate use of scanner acquisition modes and settings, development of algorithms for dose optimization, and establishment of procedures for dose monitoring.

  17. Optimizing CT radiation dose based on patient size and image quality: the size-specific dose estimate method.

    PubMed

    Larson, David B

    2014-10-01

    The principle of ALARA (dose as low as reasonably achievable) calls for dose optimization rather than dose reduction, per se. Optimization of CT radiation dose is accomplished by producing images of acceptable diagnostic image quality using the lowest dose method available. Because it is image quality that constrains the dose, CT dose optimization is primarily a problem of image quality rather than radiation dose. Therefore, the primary focus in CT radiation dose optimization should be on image quality. However, no reliable direct measure of image quality has been developed for routine clinical practice. Until such measures become available, size-specific dose estimates (SSDE) can be used as a reasonable image-quality estimate. The SSDE method of radiation dose optimization for CT abdomen and pelvis consists of plotting SSDE for a sample of examinations as a function of patient size, establishing an SSDE threshold curve based on radiologists' assessment of image quality, and modifying protocols to consistently produce doses that are slightly above the threshold SSDE curve. Challenges in operationalizing CT radiation dose optimization include data gathering and monitoring, managing the complexities of the numerous protocols, scanners and operators, and understanding the relationship of the automated tube current modulation (ATCM) parameters to image quality. Because CT manufacturers currently maintain their ATCM algorithms as secret for proprietary reasons, prospective modeling of SSDE for patient populations is not possible without reverse engineering the ATCM algorithm and, hence, optimization by this method requires a trial-and-error approach.
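
    The SSDE value that anchors this method is itself a simple calculation: the scanner-reported CTDIvol is scaled by a size-dependent conversion factor. A sketch follows, using the commonly quoted exponential fit to the AAPM Report 204 lookup table for the 32 cm reference phantom; verify the coefficients against the report before any real use.

        import math

        def ssde_mGy(ctdi_vol_32cm_mGy, effective_diameter_cm):
            """Size-specific dose estimate (SSDE).

            Scales the scanner-reported CTDIvol (32 cm phantom) by a conversion
            factor that falls exponentially with patient size. The coefficients
            are the commonly quoted fit to the AAPM Report 204 table.
            """
            f = 3.704369 * math.exp(-0.03671937 * effective_diameter_cm)
            return f * ctdi_vol_32cm_mGy

        # A 25 cm effective-diameter abdomen scanned at CTDIvol = 10 mGy:
        print(round(ssde_mGy(10.0, 25.0), 1))   # about 14.8 mGy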

  18. SU-D-BRB-07: Lipiodol Impact On Dose Distribution in Liver SBRT After TACE

    SciTech Connect

    Kawahara, D; Ozawa, S; Hioki, K; Suzuki, T; Lin, Y; Okumura, T; Ochi, Y; Nakashima, T; Ohno, Y; Kimura, T; Murakami, Y; Nagata, Y

    2015-06-15

    Purpose: Stereotactic body radiotherapy (SBRT) combined with transarterial chemoembolization (TACE) with Lipiodol is expected to improve local control. This study aims to evaluate the impact of Lipiodol on the dose distribution by comparing the dosimetric performance of the Acuros XB (AXB) algorithm, the anisotropic analytical algorithm (AAA), and the Monte Carlo (MC) method using a virtual heterogeneous phantom and a treatment plan for liver SBRT after TACE. Methods: The dose distributions calculated using the AAA and AXB algorithm, both in Eclipse (ver. 11; Varian Medical Systems, Palo Alto, CA), and EGSnrc-MC were compared. First, the inhomogeneity correction accuracy of the AXB algorithm and AAA was evaluated by comparing the percent depth dose (PDD) obtained from the algorithms with that from the MC calculations using a virtual inhomogeneity phantom, which included water and Lipiodol. Second, the dose distribution of a liver SBRT patient treatment plan was compared between the calculation algorithms. Results: In the virtual phantom, compared with the MC calculations, AAA underestimated the doses just before and in the Lipiodol region by 5.1% and 9.5%, respectively, and overestimated the doses behind the region by 6.0%. Furthermore, compared with the MC calculations, the AXB algorithm underestimated the doses just before and in the Lipiodol region by 4.5% and 10.5%, respectively, and overestimated the doses behind the region by 4.2%. In the SBRT plan, the AAA and AXB algorithm underestimated the maximum doses in the Lipiodol region by 9.0% in comparison with the MC calculations. In clinical cases, the dose enhancement in the Lipiodol region can yield an approximately 10% increase in tumor dose without increasing the dose to normal tissue. Conclusion: The MC method demonstrated a larger increase in the dose in the Lipiodol region than the AAA and AXB algorithm. Notably, dose enhancement was observed in the tumor area; this may lead to a clinical benefit.

  19. Fluence-convolution broad-beam (FCBB) dose calculation.

    PubMed

    Lu, Weiguo; Chen, Mingli

    2010-12-07

    IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with a small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into the central axis (CAX) component and the lateral spread function (LSF), and taking the beam's eye view (BEV), we established a non-voxel and non-beamlet-based dose calculation formula. Both the LSF and CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with the LSF, followed by ray tracing based on the CAX lookup table with radiological distance and divergence correction, resulting in a complexity of O(N^3) both spatially and temporally. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its memory demand is also orders of magnitude smaller than that of the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited for calculating the iteration dose during IMRT optimization.
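
    The two steps named above map directly onto array operations. A toy sketch follows (placeholder LSF, CAX table, and geometry, not commissioned CCCS data):

        import numpy as np
        from scipy.signal import fftconvolve

        def fcbb_dose(fluence, lsf, cax_lut, rad_depth, depth_mm, sad=1000.0):
            """Fluence-convolution broad-beam (FCBB) dose, toy version.

            1) 2D convolution of the BEV fluence map with the lateral spread
               function (LSF);
            2) depth scaling via a central-axis (CAX) lookup indexed by
               radiological depth, with inverse-square divergence correction.
            """
            broad = fftconvolve(fluence, lsf, mode="same")      # step 1
            cax = np.interp(rad_depth, cax_lut[0], cax_lut[1])  # step 2: CAX(d_rad)
            inv_sq = (sad / (sad + depth_mm)) ** 2              # divergence
            return broad * cax * inv_sq

        # Placeholder data: uniform 10x10 field, Gaussian LSF, exponential CAX.
        fluence = np.zeros((101, 101)); fluence[45:56, 45:56] = 1.0
        x = np.arange(-10, 11)
        lsf = np.exp(-(x[:, None]**2 + x[None, :]**2) / 8.0); lsf /= lsf.sum()
        cax_lut = (np.linspace(0, 300, 31), np.exp(-np.linspace(0, 300, 31) / 150))
        dose_slice = fcbb_dose(fluence, lsf, cax_lut, rad_depth=100.0, depth_mm=100.0)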

  20. [Effect of alclofenac on the prothrombin level in patients under treatment with anticoagulants].

    PubMed

    Kaufmann, E

    1977-06-25

    Simultaneous administration of the anticoagulants acenocoumarol, phenprocoumon or chlorindion and the antirheumatic substance alclofenac in long-term trials has no observable influence on the prothrombin time of stabilized patients. When the anticoagulant (acenocoumarol or phenprocoumon) and the alclofenac therapy are begun simultaneously, a variation of the dose is necessary. Either a one-third lower initial dosage can be given, or, after attaining the maximal anticoagulation effect on the prothrombin time (48 h after beginning acenocoumarol therapy, 72 h with phenprocoumon), a lower daily dosage must be administered once, in comparison to the group without alclofenac therapy. The maintenance dosage is not influenced by alclofenac.

  1. Brachytherapy source characterization for improved dose calculations using primary and scatter dose separation.

    PubMed

    Russell, Kellie R; Tedgren, Asa K Carlsson; Ahnesjö, Anders

    2005-09-01

    In brachytherapy, tissue heterogeneities, source shielding, and finite patient/phantom extensions affect both the primary and scatter dose distributions. The primary dose is, due to the short range of secondary electrons, dependent only on the distribution of material located on the ray line between the source and dose deposition site. The scatter dose depends on both the direct irradiation pattern and the distribution of material in a large volume surrounding the point of interest, i.e., a much larger volume must be included in calculations to integrate many small dose contributions. It is therefore of interest to consider different methods for the primary and the scatter dose calculation to improve calculation accuracy with limited computer resources. The algorithms in present clinical use ignore these effects, causing systematic dose errors in brachytherapy treatment planning. In this work we review a primary and scatter dose separation formalism (PSS) for brachytherapy source characterization to support separate calculation of the primary and scatter dose contributions. We show how the resulting source characterization data can be used to drive more accurate dose calculations using collapsed cone superposition for scatter dose calculations. Two types of source characterization data paths are used: a direct Monte Carlo simulation in water phantoms with subsequent parameterization of the results, and an alternative data path built on processing of AAPM TG43 formatted data to provide similar parameter sets. The latter path is motivated by the large amount of data already existing in the TG43 format. We demonstrate the PSS methods using both data paths for a clinical 192Ir source. Results are shown for two geometries: a finite but homogeneous water phantom, and a half-slab consisting of water and air. The dose distributions are compared to results from full Monte Carlo simulations and we show significant improvement in scatter dose calculations when the collapsed
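
    The separation underlying the PSS formalism can be stated compactly (notation mine):

        D(\mathbf{r}) = D_p(\mathbf{r}) + D_s(\mathbf{r})

    where the primary dose D_p depends only on the attenuation along the ray line from the source to the point of interest, while the scatter dose D_s is obtained by superposition (here, collapsed cone) of many small contributions from the surrounding volume.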

  2. [CUDA-based fast dose calculation in radiotherapy].

    PubMed

    Wang, Xianliang; Liu, Cao; Hou, Qing

    2011-10-01

    Dose calculation plays a key role in treatment planning for radiotherapy. Algorithms for dose calculation require high accuracy and computational efficiency. The finite-size pencil beam (FSPB) algorithm is a method commonly adopted in treatment planning systems for radiotherapy. However, improvement of its computational efficiency is still desirable for purposes such as real-time treatment planning. In this paper, we present an implementation of the FSPB in which the most time-consuming parts of the algorithm are parallelized and ported to a graphics processing unit (GPU). Compared with the FSPB running completely on a central processing unit (CPU), the GPU-implemented FSPB can speed up the dose calculation by a factor of 25-35 on a low-priced GPU (GeForce GT320) and by a factor of 55-100 on a Tesla C1060, indicating that the GPU-implemented FSPB can provide dose calculations fast enough for real-time treatment planning.
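
    The parallelism exploited by a GPU port comes from the structure of the FSPB superposition itself: every voxel's dose is an independent sum over pencil beams, which maps naturally onto one GPU thread per voxel. A NumPy-vectorized sketch of that structure follows (an idealized kernel model, not the paper's commissioned FSPB or its CUDA code):

        import numpy as np

        def fspb_dose(voxels, beam_xy, beam_w, depth, mu=0.005, sigma=5.0):
            """Finite-size pencil beam superposition, vectorized over voxels.

            Kernel model (exponential depth dose x Gaussian lateral spread) is
            an illustrative stand-in for a commissioned FSPB kernel.
            voxels: (N, 2) lateral positions (mm); beam_xy: (M, 2); beam_w: (M,)
            """
            d2 = ((voxels[:, None, :] - beam_xy[None, :, :]) ** 2).sum(-1)  # (N, M)
            lateral = np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)
            return np.exp(-mu * depth) * (lateral @ beam_w)                  # (N,)

        voxels = np.array([[0.0, 0.0], [10.0, 0.0]])
        beams = np.array([[0.0, 0.0], [5.0, 0.0]]); w = np.array([1.0, 0.5])
        print(fspb_dose(voxels, beams, w, depth=50.0))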

  3. SU-E-T-802: Verification of Implanted Cardiac Pacemaker Doses in Intensity-Modulated Radiation Therapy: Dose Prediction Accuracy and Reduction Effect of a Lead Sheet

    SciTech Connect

    Lee, J; Chung, J

    2015-06-15

    Purpose: To verify doses delivered to an implanted cardiac pacemaker, predicted doses with and without a dose reduction method were verified using MOSFET detectors in terms of beam delivery and dose calculation techniques in intensity-modulated radiation therapy (IMRT). Methods: The pacemaker doses for a patient with tongue cancer were predicted according to the beam delivery method [step-and-shoot (SS) and sliding window (SW)], the intensity level used for dose optimization, and the dose calculation algorithm. Dosimetric effects on the pacemaker were calculated with three dose engines: pencil-beam convolution (PBC), the analytical anisotropic algorithm (AAA), and Acuros XB. A lead shield of 2 mm thickness was designed to minimize the dose delivered to the pacemaker. Dose variations caused by the heterogeneous material properties of the pacemaker, and the effectiveness of the lead shield, were predicted by Acuros XB. Dose prediction accuracy and the feasibility of the dose reduction strategy were verified against skin doses measured right above the pacemaker with MOSFET detectors during the radiation treatment. Results: Acuros XB underestimated the skin doses and overestimated the lead-shield effect, although the dose disagreement was smaller. Dose prediction improved with a higher intensity level of dose optimization in IMRT. The dedicated tertiary lead sheet reduced the pacemaker dose by up to 60%. Conclusion: The current SS technique could deliver scattered doses lower than the recommended criteria; however, use of the lead sheet further reduced the scattered doses. A thin lead plate can be a useful tertiary shield and did not cause malfunction or electrical damage of the implanted pacemaker in IMRT. More accurate estimation of scattered doses to patients with implanted medical devices is required to design a proper dose reduction strategy.

  4. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
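
    A drastically simplified toy of the "function gas" dynamics (my illustration, using plain Python functions instead of the paper's lambda-calculus language): two randomly chosen members of a fixed-size ensemble interact by composition, and the composite replaces a random member.

        import random

        random.seed(2)
        M = 10**9 + 7                 # work mod M so probe values stay bounded
        gas = [lambda x: (x + 1) % M, lambda x: (2 * x) % M,
               lambda x: (x * x) % M, lambda x: (x - 3) % M]

        def step(gas):
            f, g = random.sample(gas, 2)
            h = lambda x, f=f, g=g: f(g(x))       # interaction = composition
            gas[random.randrange(len(gas))] = h   # constant population size

        for _ in range(100):
            step(gas)
        print([fn(2) for fn in gas])              # probe the evolved ensemble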

  5. Neutron dose equivalent meter

    DOEpatents

    Olsher, Richard H.; Hsu, Hsiao-Hua; Casson, William H.; Vasilik, Dennis G.; Kleck, Jeffrey H.; Beverding, Anthony

    1996-01-01

    A neutron dose equivalent detector for measuring neutron dose capable of accurately responding to neutron energies according to published fluence-to-dose curves. The neutron dose equivalent meter has an inner sphere of polyethylene, with a middle shell overlying the inner sphere, the middle shell comprising RTV® silicone (organosiloxane) loaded with boron. An outer shell overlies the middle shell and comprises polyethylene loaded with tungsten. The neutron dose equivalent meter defines a channel through the outer shell, the middle shell, and the inner sphere for accepting a neutron counter tube. The outer shell is loaded with tungsten to provide neutron generation, increasing the neutron dose equivalent meter's response sensitivity above 8 MeV.

  6. [Indications for low-dose CT in the emergency setting].

    PubMed

    Poletti, Pierre-Alexandre; Andereggen, Elisabeth; Rutschmann, Olivier; de Perrot, Thomas; Caviezel, Alessandro; Platon, Alexandra

    2009-08-19

    CT delivers a large dose of radiation, especially in abdominal imaging. Recently, a low-dose abdominal CT protocol (low-dose CT) has been set up in our institution. "Low-dose CT" is almost equivalent to a single standard abdominal radiograph in terms of radiation dose (about one sixth of that delivered by a standard CT). "Low-dose CT" is now used routinely in our emergency service for two main indications: patients with suspected renal colic and those with right lower quadrant pain. It is obtained without intravenous contrast media. Oral contrast is given to patients with suspected appendicitis. "Low-dose CT" is used within well-defined clinical algorithms and only replaces standard CT when it can achieve comparable diagnostic quality.

  7. Is it sensible to 'deform' dose? 3D experimental validation of dose-warping

    SciTech Connect

    Yeo, U. J.; Taylor, M. L.; Supple, J. R.; Smith, R. L.; Dunn, L.; Kron, T.; Franich, R. D.

    2012-08-15

    Purpose: Strategies for dose accumulation in deforming anatomy are of interest in radiotherapy. Algorithms exist for the deformation of dose based on patient image sets, though these are sometimes contentious because not all such image calculations are constrained by physical laws. While tumor and organ motion has been a key area of study for a considerable amount of time, deformation is of increasing interest. In this work, we demonstrate a full 3D experimental validation of results from a range of dose deformation algorithms available in the public domain. Methods: We recently developed the first tissue-equivalent, full 3D deformable dosimetric phantom, 'DEFGEL'. To assess the accuracy of dose-warping based on deformable image registration (DIR), we have measured doses in undeformed and deformed states of the DEFGEL dosimeter and compared these to planned doses and warped doses. In this way we have directly evaluated the accuracy of dose-warping calculations for 11 different algorithms. We have done this for a range of stereotactic irradiation schemes and types and magnitudes of deformation. Results: The original Horn and Schunck algorithm is shown to be the best performing of the 11 algorithms trialled. Comparing measured and dose-warped calculations for this method, it is found that for a 10 × 10 mm² square field, γ(3%/3mm) = 99.9%; for a 20 × 20 mm² cross-shaped field, γ(3%/3mm) = 99.1%; and for a multiple dynamic arc (0.413 cm³ PTV) treatment adapted from a patient treatment plan, γ(3%/3mm) = 95%. In each case, the agreement is comparable to, but consistently about 1% less than, the agreement between measured and calculated (planned) dose distributions in the absence of deformation. The magnitude of the deformation, as measured by the largest displacement experienced by any voxel in the volume, has the greatest influence on the accuracy of the warped dose distribution. Considering

  8. Evaluation of dose calculation accuracy of treatment planning systems at hip prosthesis interfaces.

    PubMed

    Paulu, David; Alaei, Parham

    2017-03-20

    There is an increasing number of radiation therapy patients with hip prostheses. The common method of minimizing treatment planning inaccuracies is to avoid radiation beams transiting through the prosthesis. However, the beams often exit through them, especially when the patient has a double prosthesis. Modern treatment planning systems employ algorithms with improved dose calculation accuracy, but even these algorithms may not predict the dose accurately at interfaces with high atomic number materials. The current study evaluates the dose calculation accuracy of three common dose calculation algorithms employed in two commercial treatment planning systems. A hip prosthesis was molded inside a cylindrical phantom, and the dose at several points within the phantom at the interface with the prosthesis was measured using thermoluminescent dosimeters. The measured doses were then compared to those predicted by the planning systems. The results of the study indicate that all three algorithms underestimate the dose at the prosthesis interface, albeit to varying degrees, for both low- and high-energy x rays. The measured doses are higher than the calculated ones by 5-22% for the Pinnacle collapsed cone convolution algorithm, 2-23% for Eclipse Acuros XB, and 6-25% for the Eclipse analytical anisotropic algorithm. There is generally better agreement for the AXB algorithm, and the worst results are for the AAA.

  9. Photon dose calculation based on electron multiple-scattering theory: primary dose deposition kernels.

    PubMed

    Wang, L; Jette, D

    1999-08-01

    The transport of the secondary electrons resulting from high-energy photon interactions is essential to energy redistribution and deposition. In order to develop an accurate dose-calculation algorithm for high-energy photons, which can predict the dose distribution in inhomogeneous media and at the beam edges, we have investigated the feasibility of applying electron transport theory [Jette, Med. Phys. 15, 123 (1988)] to photon dose calculation. In particular, the transport of and energy deposition by Compton electrons and by electrons and positrons resulting from pair production were studied. The primary photons are treated as the source of the secondary electrons and positrons, which are transported through the irradiated medium using Gaussian multiple-scattering theory [Jette, Med. Phys. 15, 123 (1988)]. The initial angular and kinetic energy distributions of the secondary electrons (and positrons) emanating from the photon interactions are incorporated into the transport. Due to the different mechanisms of creation and cross-section functions, the transport of and the energy deposition by the electrons released in these two processes are studied and modeled separately based on first principles. In this article, we focus on determining the dose distribution for an individual interaction site. We define the Compton dose deposition kernel (CDK) or the pair-production dose deposition kernel (PDK) as the dose distribution relative to the point of interaction, per unit interaction density, for a monoenergetic photon beam in an infinite homogeneous medium of unit density. The validity of this analytic modeling of dose deposition was evaluated through EGS4 Monte Carlo simulation. Quantitative agreement between these two calculations of the dose distribution and the average energy deposited per interaction was achieved. Our results demonstrate the applicability of the electron dose-calculation method to photon dose calculation.
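
    Written as a convolution (notation mine, matching the definition above), the dose from a given interaction type is

        D(\mathbf{r}) = \int \Phi_{\mathrm{int}}(\mathbf{r}')\, K(\mathbf{r} - \mathbf{r}')\, \mathrm{d}^3 r'

    where \Phi_{\mathrm{int}} is the density of photon interactions (Compton or pair production) and K is the corresponding dose deposition kernel (CDK or PDK).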

  10. Calculation of effective dose.

    PubMed

    McCollough, C H; Schueler, B A

    2000-05-01

    The concept of "effective dose" was introduced in 1975 to provide a mechanism for assessing the radiation detriment from partial body irradiations in terms of data derived from whole body irradiations. The effective dose is the mean absorbed dose from a uniform whole-body irradiation that results in the same total radiation detriment as from the nonuniform, partial-body irradiation in question. The effective dose is calculated as the weighted average of the mean absorbed dose to the various body organs and tissues, where the weighting factor is the radiation detriment for a given organ (from a whole-body irradiation) as a fraction of the total radiation detriment. In this review, effective dose equivalent and effective dose, as established by the International Commission on Radiological Protection in 1977 and 1990, respectively, are defined and various methods of calculating these quantities are presented for radionuclides, radiography, fluoroscopy, computed tomography and mammography. In order to calculate either quantity, it is first necessary to estimate the radiation dose to individual organs. One common method of determining organ doses is through Monte Carlo simulations of photon interactions within a simplified mathematical model of the human body. Several groups have performed these calculations and published their results in the form of data tables of organ dose per unit activity or exposure. These data tables are specified according to particular examination parameters, such as radiopharmaceutical, x-ray projection, x-ray beam energy spectra or patient size. Sources of these organ dose conversion coefficients are presented and differences between them are examined. The estimates of effective dose equivalent or effective dose calculated using these data, although not intended to describe the dose to an individual, can be used as a relative measure of stochastic radiation detriment. The calculated values, in units of sievert (or rem), indicate the amount of
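
    The weighted-average definition translates directly into code. The sketch below uses a small subset of ICRP 60-style tissue weighting factors for illustration; a real calculation must use the complete set (which sums to 1.0), including the remainder tissues.

        # Effective dose as a weighted average of mean organ doses (for photons
        # the radiation weighting factor is 1, so equivalent dose equals
        # absorbed dose). Illustrative ICRP 60-style subset only.
        TISSUE_WEIGHTS = {"gonads": 0.20, "lung": 0.12, "stomach": 0.12,
                          "colon": 0.12, "bone_marrow": 0.12, "breast": 0.05}

        def effective_dose_mSv(organ_dose_mSv):
            """E = sum_T w_T * H_T over the organs provided."""
            return sum(TISSUE_WEIGHTS[t] * h for t, h in organ_dose_mSv.items())

        print(effective_dose_mSv({"lung": 2.0, "breast": 1.5}))  # 0.315 mSv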

  11. Algorithm Animation with Galant.

    PubMed

    Stallmann, Matthias F

    2017-01-01

    Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.

  12. Automated size-specific CT dose monitoring program: Assessing variability in CT dose

    SciTech Connect

    Christianson, Olav; Li Xiang; Frush, Donald; Samei, Ehsan

    2012-11-15

    Purpose: The potential health risks associated with low levels of ionizing radiation have created a movement in the radiology community to optimize computed tomography (CT) imaging protocols to use the lowest radiation dose possible without compromising the diagnostic usefulness of the images. Despite efforts to use appropriate and consistent radiation doses, studies suggest that a great deal of variability in radiation dose exists both within and between institutions for CT imaging. In this context, the authors have developed an automated size-specific radiation dose monitoring program for CT and used this program to assess variability in size-adjusted effective dose from CT imaging. Methods: The authors' radiation dose monitoring program operates on an independent Health Insurance Portability and Accountability Act (HIPAA) compliant dosimetry server. Digital Imaging and Communications in Medicine (DICOM) routing software is used to isolate dose report screen captures and scout images for all incoming CT studies. Effective dose conversion factors (k-factors) are determined based on the protocol, and optical character recognition is used to extract the CT dose index and dose-length product. The patient's thickness is obtained by applying an adaptive thresholding algorithm to the scout images and is used to calculate the size-adjusted effective dose (ED_adj). The radiation dose monitoring program was used to collect data on 6351 CT studies from three scanner models (GE Lightspeed Pro 16, GE Lightspeed VCT, and GE Definition CT750 HD) and two institutions over a one-month period and to analyze the variability in ED_adj between scanner models and across institutions. Results: No significant difference was found between computer measurements of patient thickness and observer measurements (p = 0.17), and the average difference between the two methods was less than 4%. Applying the size correction resulted in ED_adj values that differed by up to 44% from effective dose estimates
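
    The final step of such a pipeline, turning the extracted dose-report values and the scout-derived patient thickness into ED_adj, might look like the following sketch; the exponential size correction and all coefficients are hypothetical placeholders, not the authors' calibration.

        import math

        def size_adjusted_effective_dose(dlp_mGycm, k_factor, thickness_cm,
                                         ref_thickness_cm=28.0, alpha=0.04):
            """Sketch of a size-adjusted effective dose (ED_adj) calculation.

            dlp_mGycm: dose-length product extracted from the dose report (OCR)
            k_factor:  protocol-specific conversion factor (mSv/(mGy*cm))
            thickness_cm: patient thickness from thresholded scout images
            """
            ed = k_factor * dlp_mGycm                      # conventional estimate
            correction = math.exp(alpha * (ref_thickness_cm - thickness_cm))
            return ed * correction        # smaller patient -> higher ED_adj

        print(round(size_adjusted_effective_dose(500, 0.015, 22.0), 2))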

  13. Dose prescription in boron neutron capture therapy

    SciTech Connect

    Gupta, N.M.S.; Gahbauer, R.A. ); Blue, T.E. ); Wambersie, A. )

    1994-03-30

    The purpose of this paper is to address some aspects of the many considerations that need to go into a dose prescription in boron neutron capture therapy (BNCT) for brain tumors, and to describe some methods to incorporate knowledge from animal studies and other experiments into the process of dose prescription. Previously, an algorithm to estimate the normal tissue tolerance to mixed high and low linear energy transfer radiations in BNCT was proposed. The authors have developed mathematical formulations and computational methods to represent this algorithm. Generalized models to fit the central axis dose rate components for an epithermal neutron field were also developed. These formulations and beam fitting models were programmed into spreadsheets to simulate two treatment techniques which are expected to be used in BNCT: a two-field bilateral scheme and a single-field treatment scheme. Parameters in these spreadsheets can be varied to represent the fractionation scheme used, the 10B microdistribution in normal tissue, and the ratio of 10B in tumor to normal tissue. Most of these factors have to be determined for a given neutron field and 10B compound combination from large animal studies. The spreadsheets have been programmed to integrate all of the treatment-related information and calculate the location along the central axis where the normal tissue tolerance is exceeded first. This information is then used to compute the maximum treatment time allowable and the maximum tumor dose that may be delivered for a given BNCT treatment. The effect of different treatment variables on the treatment time and tumor dose has been shown to be very significant. It has also been shown that the location of Dmax shifts significantly, depending on some of the treatment variables, mainly the fractionation scheme used. These results further emphasize the fact that dose prescription in BNCT is very complicated and nonintuitive. 11 refs., 6 figs., 3 tabs.

  14. Control algorithms for dynamic attenuators

    SciTech Connect

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-06-15

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam-shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward-projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current
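
    The flavor of the closed-form result for the perfect attenuator can be seen in a one-view toy model (my simplification, with total incident fluence standing in for dose): minimizing the summed variance of the line-integral estimates at fixed total fluence gives a fluence profile proportional to 1/sqrt(t_i), i.e., more fluence through opaque paths.

        import numpy as np

        # Toy single-view model: detected counts n_i = f_i * t_i, and the
        # variance of each line-integral estimate scales as 1/n_i. Minimize
        # sum_i 1/(f_i t_i) subject to sum_i f_i = F. Lagrange multipliers
        # give f_i proportional to 1/sqrt(t_i).
        t = np.array([0.9, 0.5, 0.1, 0.02])      # patient transmission per ray
        F = 100.0                                 # fixed total incident fluence

        f_opt = F * (1 / np.sqrt(t)) / np.sum(1 / np.sqrt(t))
        f_flat = np.full_like(t, F / t.size)      # unmodulated reference

        var = lambda f: np.sum(1 / (f * t))
        print(var(f_opt), var(f_flat))            # optimal profile: lower variance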

  15. Out-of-field doses in radiotherapy: Input to epidemiological studies and dose-risk models.

    PubMed

    Harrison, Roger

    2017-04-06

    Out-of-field doses in radiotherapy have been increasingly studied in recent years because of the generally improved survival of patients who have received radiotherapy as part of their treatment for cancer and their subsequent risk of a second malignancy. This short article attempts to identify some current problems, challenges and opportunities for dosimetry developments in this field. Out-of-field doses and derived risk estimates contribute to general knowledge about radiation effects on humans as well as contributing to risk-benefit considerations for the individual patient. It is suggested that for input into epidemiological studies, the complete dose description (i.e. the synthesis of therapy and imaging doses from all the treatment and imaging modalities) is ideally required, although there is currently no common dosimetry framework which easily covers all modalities. A general strategy for out-of-field dose estimation requires development and improvement in several areas, including (i) dosimetry in regions of steep dose gradient close to the field edge; (ii) experimentally verified analytical and Monte Carlo models for out-of-field doses; (iii) the validity of treatment planning system algorithms outside the field edge; (iv) dosimetry of critical sub-structures in organs at risk; (v) mixed-field (including neutron) dosimetry in proton and ion radiotherapy and photoneutron production in high-energy photon beams; (vi) the most appropriate quantities to use in neutron dosimetry in a radiotherapy context; and (vii) simplification of measurement methods in regions distant from the target volume.

  16. Dose calculation accuracies in whole breast radiotherapy treatment planning: a multi-institutional study.

    PubMed

    Hatanaka, Shogo; Miyabe, Yuki; Tohyama, Naoki; Kumazaki, Yu; Kurooka, Masahiko; Okamoto, Hiroyuki; Tachibana, Hidenobu; Kito, Satoshi; Wakita, Akihisa; Ohotomo, Yuko; Ikagawa, Hiroyuki; Ishikura, Satoshi; Nozaki, Miwako; Kagami, Yoshikazu; Hiraoka, Masahiro; Nishio, Teiji

    2015-07-01

    Our objective in this study was to evaluate the variation in the doses delivered among institutions due to dose calculation inaccuracies in whole breast radiotherapy. We have developed practical procedures for quality assurance (QA) of radiation treatment planning systems. These QA procedures are designed to be performed easily at any institution and to permit comparisons of results across institutions. The dose calculation accuracy was evaluated across seven institutions using various irradiation conditions. In some conditions, there was a >3 % difference between the calculated dose and the measured dose. The dose calculation accuracy differs among institutions because it is dependent on both the dose calculation algorithm and beam modeling. The QA procedures in this study are useful for verifying the accuracy of the dose calculation algorithm and of the beam model before clinical use for whole breast radiotherapy.

  17. Fast reconstruction of low dose proton CT by sinogram interpolation

    NASA Astrophysics Data System (ADS)

    Hansen, David C.; Sangild Sørensen, Thomas; Rit, Simon

    2016-08-01

    Proton computed tomography (CT) has been demonstrated as a promising image modality in particle therapy planning. It can reduce errors in particle range calculations and consequently improve dose calculations. Obtaining a high imaging resolution has traditionally required computationally expensive iterative reconstruction techniques to account for the multiple scattering of the protons. Recently, techniques for direct reconstruction have been developed, but these require a higher imaging dose than the iterative methods. No previous work has compared the image quality of the direct and the iterative methods. In this article, we extend the methodology for direct reconstruction to be applicable for low imaging doses and compare the obtained results with three state-of-the-art iterative algorithms. We find that the direct method yields comparable resolution and image quality to the iterative methods, even at 1 mSv dose levels, while yielding a twentyfold speedup in reconstruction time over previously published iterative algorithms.

  18. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30 to 60 in computing time and by a factor of over 100 in matrix storage space.

  19. Monte Carlo dose calculations in advanced radiotherapy

    NASA Astrophysics Data System (ADS)

    Bush, Karl Kenneth

    The remarkable accuracy of Monte Carlo (MC) dose calculation algorithms has led to the widely accepted view that these methods should and will play a central role in the radiotherapy treatment verification and planning of the future. The advantages of using MC clinically are particularly evident for radiation fields passing through inhomogeneities, such as lung and air cavities, and for small fields, including those used in today's advanced intensity modulated radiotherapy techniques. Many investigators have reported significant dosimetric differences between MC and conventional dose calculations in such complex situations, and have demonstrated experimentally the unmatched ability of MC calculations in modeling charged particle disequilibrium. The advantages of using MC dose calculations do come at a cost. The nature of MC dose calculations require a highly detailed, in-depth representation of the physical system (accelerator head geometry/composition, anatomical patient geometry/composition and particle interaction physics) to allow accurate modeling of external beam radiation therapy treatments. To perform such simulations is computationally demanding and has only recently become feasible within mainstream radiotherapy practices. In addition, the output of the accelerator head simulation can be highly sensitive to inaccuracies within a model that may not be known with sufficient detail. The goal of this dissertation is to both improve and advance the implementation of MC dose calculations in modern external beam radiotherapy. To begin, a novel method is proposed to fine-tune the output of an accelerator model to better represent the measured output. In this method an intensity distribution of the electron beam incident on the model is inferred by employing a simulated annealing algorithm. The method allows an investigation of arbitrary electron beam intensity distributions and is not restricted to the commonly assumed Gaussian intensity. In a second component of

  20. Utirik Atoll Dose Assessment

    SciTech Connect

    Robison, W.L.; Conrado, C.L.; Bogen, K.T

    1999-10-06

    On March 1, 1954, radioactive fallout from the nuclear test at Bikini Atoll code-named BRAVO was deposited on Utirik Atoll, which lies about 187 km (300 miles) east of Bikini Atoll. The residents of Utirik were evacuated three days after the fallout started and returned to their atoll in May 1954. In this report we provide a final dose assessment for current conditions at the atoll based on extensive data generated from samples collected in 1993 and 1994. The estimated population-average maximum annual effective dose using a diet including imported foods is 0.037 mSv/y (3.7 mrem/y). The 95% confidence limits are within a factor of three of the population-average value. The population-average integrated effective dose over 30, 50, and 70 y is 0.84 mSv (84 mrem), 1.2 mSv (120 mrem), and 1.4 mSv (140 mrem), respectively. The 95% confidence limits on the population-average value post 1998, i.e., the 30-, 50-, and 70-y integral doses, are within a factor of two of the mean value and are independent of time, t, for t > 5 y. Cesium-137 (137Cs) is the radionuclide that contributes most of this dose, mostly through the terrestrial food chain and secondarily from external gamma exposure. The dose from weapons-related radionuclides is very low and of no consequence to the health of the population. The annual background doses in the U.S. and Europe are 3.0 mSv (300 mrem) and 2.4 mSv (240 mrem), respectively. The annual background dose in the Marshall Islands is estimated to be 1.4 mSv (140 mrem). The total estimated combined Marshall Islands background dose plus the weapons-related dose is about 1.5 mSv/y (150 mrem/y), which can be directly compared to the annual background effective dose of 3.0 mSv/y (300 mrem/y) for the U.S. and 2.4 mSv/y (240 mrem/y) for Europe. Moreover, the doses listed in this report are based only on the radiological decay of 137Cs (30.1 y half-life) and other

  1. Uses of clinical algorithms.

    PubMed

    Margolis, C Z

    1983-02-04

    The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared with decision analysis in terms of their clinical usefulness. Three objections to clinical algorithms are answered, including the objection that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.

  2. Comparison of inhomogeneity correction algorithms in small photon fields.

    PubMed

    Jones, Andrew O; Das, Indra J

    2005-03-01

    Algorithms such as convolution superposition, Batho, and equivalent pathlength, which were originally developed and validated for conventional treatments under conditions of electronic equilibrium using relatively large fields greater than 5 x 5 cm2, are routinely employed for inhomogeneity corrections. Modern day treatments using intensity modulated radiation therapy employ small beamlets characterized by the resolution of the multileaf collimator. These beamlets, in general, do not provide electronic equilibrium even in a homogeneous medium, and these effects are exaggerated in media with inhomogeneities. Monte Carlo simulations are becoming a tool of choice in understanding the dosimetry of small photon fields as they encounter low density media. In this study, depth dose data from the Monte Carlo simulations are compared to the results of the convolution superposition, Batho, and equivalent pathlength algorithms. The central axis dose within the low-density inhomogeneity as calculated by Monte Carlo simulation and convolution superposition decreases for small field sizes, whereas it increases using the Batho and equivalent pathlength algorithms. The dose perturbation factor (DPF) is defined as the ratio of the dose to a point within the inhomogeneity to the dose at the same point in a homogeneous phantom. The dose correction factor (DCF) is defined as the ratio of the dose calculated by an algorithm at a point to the Monte Carlo derived dose at the same point. The DPF is noted to be significant for small fields and low density for all algorithms. The comparison of the algorithms with Monte Carlo simulations is reflected in the DCF, which is close to 1.0 for the convolution-superposition algorithm. The Batho and equivalent pathlength algorithms differ significantly from Monte Carlo simulation for most field sizes and densities. Convolution superposition shows better agreement with Monte Carlo data than the Batho or equivalent pathlength corrections. As the field size increases the
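
    The two factors defined above are simple ratios; a minimal sketch (with made-up dose values) shows how they would be tabulated.

```python
import numpy as np

def dose_perturbation_factor(dose_inhomogeneous, dose_homogeneous):
    """DPF: dose at a point within the inhomogeneity divided by the dose at
    the same point in a homogeneous phantom (same algorithm for both)."""
    return dose_inhomogeneous / dose_homogeneous

def dose_correction_factor(dose_algorithm, dose_monte_carlo):
    """DCF: algorithm dose at a point divided by the Monte Carlo dose there;
    values near 1.0 indicate good agreement with Monte Carlo."""
    return dose_algorithm / dose_monte_carlo

# e.g. convolution superposition vs. Monte Carlo inside low-density lung
# (illustrative numbers only):
print(dose_correction_factor(np.array([0.62, 0.59]), np.array([0.60, 0.58])))
```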

  3. Gamma Knife radiosurgery with CT image-based dose calculation.

    PubMed

    Xu, Andy Yuanguang; Bhatnagar, Jagdish; Bednarz, Greg; Niranjan, Ajay; Kondziolka, Douglas; Flickinger, John; Lunsford, L Dade; Huq, M Saiful

    2015-11-01

    The Leksell GammaPlan software version 10 introduces a CT image-based segmentation tool for automatic skull definition and a convolution dose calculation algorithm for tissue inhomogeneity correction. The purpose of this work was to evaluate the impact of these new approaches on routine clinical Gamma Knife treatment planning. Sixty-five patients who underwent CT image-guided Gamma Knife radiosurgeries at the University of Pittsburgh Medical Center in recent years were retrospectively investigated. The diagnoses for these cases include trigeminal neuralgia, meningioma, acoustic neuroma, AVM, glioma, and benign and metastatic brain tumors. Dose calculations were performed for each patient with the same dose prescriptions and the same shot arrangements using three different approaches: 1) TMR 10 dose calculation with imaging skull definition; 2) convolution dose calculation with imaging skull definition; 3) TMR 10 dose calculation with conventional measurement-based skull definition. For each treatment matrix, the total treatment time, the target coverage index, the selectivity index, the gradient index, and a set of dose statistics parameters were compared between the three calculations. The dose statistics parameters investigated include the prescription isodose volume, the 12 Gy isodose volume, the minimum, maximum and mean doses on the treatment targets, and the critical structures under consideration. The differences between the convolution and the TMR 10 dose calculations for the 104 treatment matrices were found to vary with the patient anatomy, location of the treatment shots, and the tissue inhomogeneities around the treatment target. An average difference of 8.4% was observed for the total treatment times between the convolution and the TMR algorithms. The maximum differences in the treatment times, the prescription isodose volumes, the 12 Gy isodose volumes, the target coverage indices, the selectivity indices, and the gradient indices from the convolution

  4. Patient-specific dose calculation methods for high-dose-rate iridium-192 brachytherapy

    NASA Astrophysics Data System (ADS)

    Poon, Emily S.

    . The scatter dose is again adjusted using our scatter correction technique. The algorithm was tested using phantoms and actual patient plans for head-and-neck, esophagus, and MammoSite breast brachytherapy. Although the method fails to correct for the changes in lateral scatter introduced by inhomogeneities, it is a major improvement over TG-43 and is sufficiently fast for clinical use.

  5. Dose-response model for teratological experiments involving quantal responses

    SciTech Connect

    Rai, K.; Van Ryzin, J.

    1985-03-01

    This paper introduces a dose-response model for teratological quantal response data where the probability of response for an offspring from a female at a given dose varies with the litter size. The maximum likelihood estimators for the parameters of the model are given as the solution of a nonlinear iterative algorithm. Two methods of low-dose extrapolation are presented, one based on the litter size distribution and the other a conservative method. The resulting procedures are then applied to a teratological data set from the literature.
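
    A hedged sketch of this kind of fit: a quantal response model whose per-offspring probability depends on litter size, estimated by maximum likelihood. The logistic form and the synthetic data below are illustrative assumptions, not the authors' exact model.

```python
# Litter-size-dependent quantal dose-response fitted by maximum likelihood,
# in the spirit of the paper. The logistic form with a litter-size term is
# an illustrative assumption.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# (dose, litter size, number of affected offspring) per female -- synthetic
dose     = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 4.0, 4.0])
litter   = np.array([10,  12,   9,  11,  10,   8,  12,  10])
affected = np.array([0,   1,   1,   2,   3,   3,   7,   6])

def neg_log_lik(params):
    a, b, c = params
    p = expit(a + b * dose + c * litter)   # per-offspring response probability
    p = np.clip(p, 1e-9, 1 - 1e-9)
    # binomial log-likelihood accumulated over litters
    return -np.sum(affected * np.log(p) + (litter - affected) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=np.zeros(3), method="Nelder-Mead")
a, b, c = fit.x
print("MLE parameters:", fit.x)

# naive low-dose extrapolation: added risk at a small dose, mean litter size
d_low = 0.01
risk = expit(a + b * d_low + c * litter.mean()) - expit(a + c * litter.mean())
print(f"added risk at dose {d_low}: {risk:.2e}")
```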

  6. Dose Reduction Techniques

    SciTech Connect

    WAGGONER, L.O.

    2000-05-16

    As radiation safety specialists, one of the things we are required to do is evaluate tools, equipment, materials and work practices and decide whether the use of these products or work practices will reduce radiation dose or risk to the environment. There is a tendency for many workers who work with radioactive material to accomplish radiological work the same way they have always done it rather than look for new technology or change their work practices. New technology is being developed all the time that can make radiological work easier and result in less radiation dose to the worker or reduce the possibility that contamination will be spread to the environment. As we discuss the various tools and techniques that reduce radiation dose, keep in mind that the radiological controls should be reasonable. We cannot always get the dose to zero, so we must try to accomplish the work efficiently and cost-effectively. There are times we may have to accept that there is only so much we can do. The goal is to do the smart things that protect the worker but do not hinder him while the task is being accomplished. In addition, we should not demand that large amounts of money be spent on equipment that has marginal value in order to save a few millirem. We have broken the handout into sections that should simplify the presentation. Time, distance, shielding, and source reduction are methods used to reduce dose and are covered in Part I on work execution. We then look at operational considerations and radiological design parameters, and discuss the characteristics of personnel who deal with ALARA. This handout should give you an overview of what it takes to have an effective dose reduction program.

  7. Dose Calculation Spreadsheet

    SciTech Connect

    Simpkins, Ali

    1997-06-10

    VENTSAR XL is an EXCEL spreadsheet that can be used to calculate downwind doses as a result of a hypothetical atmospheric release. Both building effects and plume rise may be considered. VENTSAR XL will run under any version of Microsoft EXCEL from version 4.0 onward. Macros (the programming language of EXCEL) were used to automate the calculations. The user enters a minimal amount of input and the code calculates the resulting concentrations and doses at various downwind distances as specified by the user.
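
    For flavor, a generic Gaussian-plume centerline calculation of the kind such a spreadsheet automates; the dispersion coefficients below are assumed power laws, not VENTSAR XL's actual parameterization.

```python
# Ground-level centerline air concentration for a continuous release, using
# the standard Gaussian-plume form with ground reflection. Sigma growth laws
# are illustrative assumptions.
import numpy as np

def centerline_concentration(Q, u, x, H, a=0.08, b=0.06):
    """chi(x) [Bq/m^3] for release rate Q [Bq/s], wind speed u [m/s],
    downwind distance x [m], effective release height H [m]."""
    sigma_y = a * x ** 0.9    # assumed horizontal dispersion growth
    sigma_z = b * x ** 0.85   # assumed vertical dispersion growth
    return (Q / (np.pi * u * sigma_y * sigma_z)) * np.exp(-H**2 / (2 * sigma_z**2))

for x in (100.0, 500.0, 1000.0):   # user-specified downwind distances
    chi = centerline_concentration(Q=1e6, u=3.0, x=x, H=20.0)
    print(f"x = {x:6.0f} m  chi = {chi:.3e} Bq/m^3")
```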

  8. Pharmacokinetic Dashboard-Recommended Dosing Is Different than Standard of Care Dosing in Infliximab-Treated Pediatric IBD Patients.

    PubMed

    Dubinsky, Marla C; Phan, Becky L; Singh, Namita; Rabizadeh, Shervin; Mould, Diane R

    2017-01-01

    Standard of care (SOC; 5-10 mg/kg given at an interval of every 6-8 weeks) dosing of infliximab (IFX) is associated with significant loss of response. Dashboards using covariates that influence IFX pharmacokinetics (PK) may be a more precise way of optimizing anti-TNF dosing. We tested a prototype dashboard to compare forecasted dosing regimens with actual administered regimens and SOC. Fifty IBD patients completing IFX induction were monitored during maintenance (weeks 14-54). Clinical and laboratory data were collected at each infusion; serum was analyzed for IFX concentrations and anti-drug antibodies (ADA) at weeks 14 and 54 (Prometheus Labs, San Diego). Dosing was blinded to PK data. Dashboard-based assessments were conducted on de-identified clinical, laboratory, and PK data. Bayesian algorithms were used to forecast individualized troughs and determine optimal dosing to maintain target trough concentrations (3 μg/mL). Dashboard-forecasted dosing post-week 14 was compared to the actual administered dose and frequency and to SOC. Using week 14 clinical data only, the dashboard recommended either a dose or an interval change (<0.5 mg/kg or <1 week difference) in 43/50 patients; SOC dosing was recommended for only 44%. When the week-14 IFX concentration and ADA status were added to the clinical data, dose and/or interval changes relative to actual dosing were recommended in 48/50 (96%) patients; SOC dosing was recommended in only 11/50 (22%). The dashboard recommended SOC IFX dosing in a minority of patients. Dashboards will be an important tool to individualize IFX dosing to improve treatment durability.
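
    A greatly simplified stand-in for the dashboard logic: under linear pharmacokinetics a trough scales roughly with dose, so an observed trough can be projected toward the 3 μg/mL target. The real system uses Bayesian forecasting with covariates and ADA status; this sketch only conveys the core idea, and the clamping range is an assumption.

```python
# Toy trough-targeted dose projection (NOT the dashboard's Bayesian model).
def recommended_dose(current_dose_mg_per_kg, observed_trough, target_trough=3.0,
                     low=5.0, high=10.0):
    if observed_trough <= 0:
        raise ValueError("undetectable trough; consider ADA testing, not scaling")
    dose = current_dose_mg_per_kg * target_trough / observed_trough
    return min(max(dose, low), high)   # clamp to the SOC 5-10 mg/kg range

print(recommended_dose(5.0, observed_trough=1.2))   # -> dose increase
print(recommended_dose(7.5, observed_trough=6.0))   # -> dose decrease
```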

  9. Pharmacogenetic warfarin dose refinements remain significantly influenced by genetic factors after one week of therapy.

    PubMed

    Horne, Benjamin D; Lenzini, Petra A; Wadelius, Mia; Jorgensen, Andrea L; Kimmel, Stephen E; Ridker, Paul M; Eriksson, Niclas; Anderson, Jeffrey L; Pirmohamed, Munir; Limdi, Nita A; Pendleton, Robert C; McMillin, Gwendolyn A; Burmester, James K; Kurnik, Daniel; Stein, C Michael; Caldwell, Michael D; Eby, Charles S; Rane, Anders; Lindh, Jonatan D; Shin, Jae-Gook; Kim, Ho-Sook; Angchaisuksiri, Pantep; Glynn, Robert J; Kronquist, Kathryn E; Carlquist, John F; Grice, Gloria R; Barrack, Robert L; Li, Juan; Gage, Brian F

    2012-02-01

    By guiding initial warfarin dose, pharmacogenetic (PGx) algorithms may improve the safety of warfarin initiation. However, once international normalised ratio (INR) response is known, the contribution of PGx to dose refinements is uncertain. This study sought to develop and validate clinical and PGx dosing algorithms for warfarin dose refinement on days 6-11 after therapy initiation. An international sample of 2,022 patients at 13 medical centres on three continents provided clinical, INR, and genetic data at treatment days 6-11 to predict therapeutic warfarin dose. Independent derivation and retrospective validation samples were composed by randomly dividing the population (80%/20%). Prior warfarin doses were weighted by their expected effect on S-warfarin concentrations using an exponential-decay pharmacokinetic model. The INR divided by that "effective" dose constituted a treatment response index. Treatment response index, age, amiodarone, body surface area, warfarin indication, and target INR were associated with dose in the derivation sample. A clinical algorithm based on these factors was remarkably accurate: in the retrospective validation cohort its R(2) was 61.2% and median absolute error (MAE) was 5.0 mg/week. Accuracy and safety were confirmed in a prospective cohort (N=43). CYP2C9 variants and VKORC1-1639 G→A were significant dose predictors in both the derivation and validation samples. In the retrospective validation cohort, the PGx algorithm had: R(2)=69.1% (p<0.05 vs. clinical algorithm), MAE=4.7 mg/week. In conclusion, a pharmacogenetic warfarin dose-refinement algorithm based on clinical, INR, and genetic factors can explain at least 69.1% of therapeutic warfarin dose variability after about one week of therapy.
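
    The response-index construction lends itself to a short sketch: weight prior daily doses by an exponential decay, form an effective dose, and divide the INR by it. The half-life value and the normalization by the weight sum are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def effective_dose(daily_doses_mg, half_life_h=33.0):
    """daily_doses_mg[0] is the most recent dose; older doses decay away."""
    k = np.log(2) / half_life_h                 # elimination rate constant
    ages_h = 24.0 * np.arange(len(daily_doses_mg))
    weights = np.exp(-k * ages_h)
    return np.sum(np.asarray(daily_doses_mg) * weights) / np.sum(weights)

def treatment_response_index(inr, daily_doses_mg):
    """INR divided by the exponential-decay-weighted 'effective' dose."""
    return inr / effective_dose(daily_doses_mg)

doses = [5.0, 5.0, 7.5, 7.5, 10.0]   # most recent dose first
print(f"effective dose: {effective_dose(doses):.2f} mg")
print(f"response index: {treatment_response_index(1.8, doses):.3f}")
```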

  10. [Role of individualized dosing in oncology: are we there yet?].

    PubMed

    Joerger, Markus

    2013-12-31

    There is no personalized anticancer treatment without individualized dosing. Clearly, BSA-based chemotherapy dosing does not address the substantial pharmacokinetic variability of individual drugs. Similarly, the new oral kinase inhibitors and endocrine compounds exhibit a marked pharmacokinetic variability due to variable absorption, hepatic metabolism and potential drug-drug and food-drug interactions. Therapeutic drug monitoring and genotyping may allow more individualized dosing of anticancer compounds in the future. However, the large number of anticancer drugs that are newly approved or undergoing preclinical development poses a substantial challenge for the development of substance-specific dosing algorithms. It will be necessary to implement individual dosing strategies in the early clinical development of new anticancer drugs to inform clinicians at the time of first approval.

  11. Biological effects and equivalent doses in radiotherapy: A software solution

    PubMed Central

    Voyant, Cyril; Julian, Daniel; Roustit, Rudy; Biffi, Katia; Lantieri, Céline

    2013-01-01

    Background The limits of TDF (time, dose, and fractionation) and linear quadratic models have been known for a long time. Medical physicists and physicians are required to provide fast and reliable interpretations regarding delivered doses or any future prescriptions relating to treatment changes. Aim We, therefore, propose a calculation interface under the GNU license to be used for equivalent doses, biological doses, and normal tissue complication probability (NTCP; Lyman model). Materials and methods The methodology used draws from several sources: the linear-quadratic-linear model of Astrahan, the repopulation effects of Dale, and the prediction of multi-fractionated treatments of Thames. Results and conclusions The results are obtained from an algorithm that minimizes an ad-hoc cost function, and are then compared to equivalent doses computed using standard calculators in seven French radiotherapy centers. PMID:24936319
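
    The core linear-quadratic conversions behind such an interface are compact; a minimal sketch (omitting the linear-quadratic-linear and repopulation extensions named above).

```python
# Biologically effective dose (BED) and equivalent dose in 2 Gy fractions
# (EQD2) from the standard linear-quadratic model.
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """BED = n * d * (1 + d / (alpha/beta))"""
    d = dose_per_fraction
    return n_fractions * d * (1 + d / alpha_beta)

def eqd2(n_fractions, dose_per_fraction, alpha_beta):
    """EQD2 = BED / (1 + 2 / (alpha/beta))"""
    return bed(n_fractions, dose_per_fraction, alpha_beta) / (1 + 2 / alpha_beta)

# e.g. 13 x 3 Gy hypofractionation, tumor alpha/beta = 10 Gy, late tissue 3 Gy:
print(f"tumor EQD2: {eqd2(13, 3.0, 10.0):.1f} Gy")
print(f"late  EQD2: {eqd2(13, 3.0, 3.0):.1f} Gy")
```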

  12. Software For Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steve E.

    1992-01-01

    SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.

  13. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.

  14. When is a dose not a dose

    SciTech Connect

    Bond, V.P.

    1991-01-01

    Although an enormous amount of progress has been made in the fields of radiation protection and risk assessment, a number of significant problems remain. The one problem which transcends all the rest, and which has been subject to considerable misunderstanding, involves what has come to be known as the 'linear non-threshold hypothesis', or 'linear hypothesis'. Particularly troublesome has been the interpretation that any amount of radiation can cause an increase in the excess incidence of cancer. The linear hypothesis has dominated radiation protection philosophy for more than three decades, with enormous financial, societal and political impacts and has engendered an almost morbid fear of low-level exposure to ionizing radiation in large segments of the population. This document presents a different interpretation of the linear hypothesis. The basis for this view lies in the evolution of dose-response functions, particularly with respect to their use initially in the context of early acute effects, and then for the late effects, carcinogenesis and mutagenesis. 11 refs., 4 figs. (MHB)

  15. Scatter dose summation for irregular fields: speed and accuracy study.

    PubMed

    DeWyngaert, J K; Siddon, R L; Bjarngard, B E

    1986-05-01

    Using program IRREG as a standard, we have compared speed and accuracy of several algorithms that calculate the scatter dose in an irregular field. All the algorithms, in some manner, decompose the irregular field into component triangles and obtain the scatter dose as the sum of the contributions from those triangles. Two of the algorithms replace each such component triangle with a sector of a certain "effective radius": in one case the average radius of the triangle, in the other the radius of the sector having the same area as the component triangle. A third algorithm decomposes each triangle further into two right triangles and utilizes the precalculated "equivalent radius" of each, to find the scatter contribution. For points near the center of a square field, all the methods compare favorably in accuracy to program IRREG, with less than a 1% error in total dose and with approximately a factor of 3-5 savings in computation time. Even for extreme rectangular fields (2 cm X 30 cm), the methods using the average radius and the equivalent right triangles agree to within 2% in total dose and approximately a factor of 3-4 savings in computation time.
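
    A compact sketch of the sector-summation idea, with a smooth placeholder standing in for the tabulated scatter-air ratios; both effective-radius variants described above are included.

```python
import numpy as np

def sar(r_cm):
    """Placeholder scatter-air ratio; a real implementation interpolates tables."""
    return 0.3 * (1 - np.exp(-r_cm / 8.0))

def scatter_sum(vertices, point, variant="equal_area"):
    """Sum sector scatter contributions of the triangles fanned out from
    `point` over the polygon `vertices` (counterclockwise, point inside)."""
    total = 0.0
    px, py = point
    closed = vertices + vertices[:1]
    for (ax, ay), (bx, by) in zip(closed, closed[1:]):
        ux, uy = ax - px, ay - py
        vx, vy = bx - px, by - py
        cross = ux * vy - uy * vx
        dot = ux * vx + uy * vy
        angle = abs(np.arctan2(cross, dot))     # angle subtended at the point
        if angle == 0.0:
            continue
        if variant == "equal_area":             # sector with the triangle's area
            r_eff = np.sqrt(abs(cross) / angle)  # 0.5*r^2*angle = 0.5*|cross|
        else:                                   # simple average-radius variant
            r_eff = 0.5 * (np.hypot(ux, uy) + np.hypot(vx, vy))
        total += (angle / (2 * np.pi)) * sar(r_eff)
    return total

square = [(-5.0, -5.0), (5.0, -5.0), (5.0, 5.0), (-5.0, 5.0)]  # 10 x 10 cm field
print(f"scatter near center (equal area): {scatter_sum(square, (0.5, 0.0)):.4f}")
print(f"scatter near center (avg radius): {scatter_sum(square, (0.5, 0.0), 'average'):.4f}")
```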

  16. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
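
    A quick numerical illustration of the order-of-accuracy language used above, assuming standard 2nd- and 4th-order central-difference stencils (the paper's actual schemes reach much higher order).

```python
# Error of 2nd- vs 4th-order central differences for d/dx sin(x) on a
# periodic grid; the 4th-order error falls ~16x per grid halving.
import numpy as np

def d1(u, dx, order):
    if order == 2:
        return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    # 4th-order: (-u[i+2] + 8u[i+1] - 8u[i-1] + u[i-2]) / (12 dx)
    return (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)

for n in (32, 64, 128):
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)   # periodic grid
    u = np.sin(x)
    for order in (2, 4):
        err = np.max(np.abs(d1(u, x[1] - x[0], order) - np.cos(x)))
        print(f"n={n:4d} order={order}: max error {err:.2e}")
```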

  17. Quantum algorithms: an overview

    NASA Astrophysics Data System (ADS)

    Montanaro, Ashley

    2016-01-01

    Quantum computers are designed to outperform standard computers by running quantum algorithms. Areas in which quantum algorithms can be applied include cryptography, search and optimisation, simulation of quantum systems and solving large systems of linear equations. Here we briefly survey some known quantum algorithms, with an emphasis on a broad overview of their applications rather than their technical details. We include a discussion of recent developments and near-term applications of quantum algorithms.

  18. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  1. Computing Proton Dose to Irregularly Moving Targets

    PubMed Central

    Phillips, Justin; Gueorguiev, Gueorgui; Shackleford, James A.; Grassberger, Clemens; Dowdell, Stephen; Paganetti, Harald; Sharp, Gregory C.

    2014-01-01

    phantom (2 mm, 2%), and 90.8% (3 mm, 3%) for the patient data. Conclusions We have demonstrated a method for accurately reproducing proton dose to an irregularly moving target from a single CT image. We believe this algorithm could prove a useful tool to study the dosimetric impact of baseline shifts either before or during treatment. PMID:25029239

  2. Low-Dose Carcinogenicity Studies

    EPA Science Inventory

    One of the major deficiencies of cancer risk assessments is the lack of low-dose carcinogenicity data. Most assessments require extrapolation from high to low doses, which is subject to various uncertainties. Only 4 low-dose carcinogenicity studies and 5 low-dose biomarker/pre-n...

  3. Clustering algorithm studies

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2001-07-01

    An object-oriented framework for undertaking clustering algorithm studies has been developed. We present here the definitions for the abstract Cells and Clusters as well as the interface for the algorithm. We intend to use this framework to investigate the interplay between various clustering algorithms and the resulting jet reconstruction efficiency and energy resolutions to assist in the design of the calorimeter detector.

  4. Algorithm for correcting optimization convergence errors in Eclipse.

    PubMed

    Zacarias, Albert S; Mills, Michael D

    2009-10-14

    IMRT plans generated in Eclipse use a fast algorithm to evaluate dose for optimization and a more accurate algorithm, the Analytical Anisotropic Algorithm, for the final dose calculation. The use of a fast optimization algorithm introduces optimization convergence errors into an IMRT plan. Eclipse has a feature whereby optimization may be performed on top of an existing base plan. This feature allows for the possibility of arriving at a recursive solution to optimization that relies on the accuracy of the final dose calculation algorithm and not on the optimizer algorithm. When an IMRT plan is used as a base plan for a second optimization, the second optimization can compensate for heterogeneity and modulator errors in the original base plan. Plans with the same field arrangement as the initial base plan may be added together by adding the initial plan optimal fluence to the dose-correcting plan optimal fluence. A simple procedure to correct for optimization errors is presented that may be implemented in the Eclipse treatment planning system, along with an Excel spreadsheet to add optimized fluence maps together.

  5. Clinical implications of Eclipse analytical anisotropic algorithm and Acuros XB algorithm for the treatment of lung cancer.

    PubMed

    Krishna, Gangarapu Sri; Srinivas, Vuppu; Reddy, Palreddy Yadagiri

    2016-01-01

    The aim of the present study was to investigate the dose-volume variations of the planning target volume (PTV) and organs at risk (OARs) in 15 left lung cancer patients, comparing the analytical anisotropic algorithm (AAA) with the Acuros XB algorithm. Originally, all plans were created using AAA with a template of dose constraints and optimization parameters, and the patients were treated using intensity modulated radiotherapy. In addition, another set of plans was created by performing only dose calculations using the Acuros algorithm without any reoptimization. Thereby, in both sets of plans, all plan parameters, namely, beam angle, beam weight, number of beams, prescribed dose, normalization point, region of interest constraints, number of monitor units, and plan optimization, were kept constant. The evaluated plan parameters were the target volume covered by the 95% isodose (TV95), the dose at 95% of the PTV volume (D95), the dose at 5% of the PTV volume (D5), the maximum dose (Dmax), the mean dose (Dmean), the percent volume of normal lung at risk (left lung minus gross target volume [GTV]) receiving 5 Gy (V5), 20 Gy (V20), and 30 Gy (V30), the dose at 33% volume (D33) and at 67% volume (D67) and the Dmean (Gy) of the heart, and the Dmax of the spinal cord. Furthermore, the homogeneity index (HI) and conformity index were evaluated to check the quality of the plans. Significant statistical differences between the two algorithms, P < 0.05, were found in D95, Dmax, TV95, and HI of the PTV. Furthermore, significant statistical differences were found in the dose parameters for the OARs, namely, V5, V20, and V30 of the left lung minus GTV, the right lung Dmean, D33 and Dmean of the heart, and Dmax of the spine, respectively. Although statistical differences do exist, the magnitude of the differences is too small to cause any clinically observable effect.

  6. Effect of deformable registration on the dose calculated in radiation therapy planning CT scans of lung cancer patients

    SciTech Connect

    Cunliffe, Alexandra R.; Armato, Samuel G.; White, Bradley; Justusson, Julia; Contee, Clay; Malik, Renuka; Al-Hallaq, Hania A.

    2015-01-15

    Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans then were coregistered with planning scans (and associated dose maps) using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm (“Fast” and “EMPIRE10”). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (d{sub E}) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of d{sub E}, dose (D), dose standard deviation (SD{sub dose}) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average d{sub E} across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of d{sub E} (0.42 Gy/mm), D (0.05 Gy/Gy), SD{sub dose} (1.4 Gy/Gy), and the algorithm used (≤1 Gy). Conclusions: An
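
    The reported regression can be read as a predictive formula; the sketch below uses only the coefficients quoted above, with the intercept and per-algorithm offsets (not given in this excerpt) assumed zero purely for illustration.

```python
# |dD| increases by 0.42 Gy per mm of registration error (d_E), 0.05 Gy per
# Gy of dose (D), and 1.4 Gy per Gy of local dose SD, plus an
# algorithm-dependent offset (<= 1 Gy per the abstract). Intercept/offset
# values are assumed zero here for illustration only.
def predicted_abs_dose_diff(d_e_mm, dose_gy, sd_dose_gy, algo_offset_gy=0.0):
    return 0.42 * d_e_mm + 0.05 * dose_gy + 1.4 * sd_dose_gy + algo_offset_gy

# e.g. the EMPIRE10 average registration error of 5.2 mm at 60 Gy:
print(f"{predicted_abs_dose_diff(5.2, 60.0, 0.5):.1f} Gy")
```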

  7. A spatially encoded dose difference maximal intensity projection map for patient dose evaluation: A new first line patient quality assurance tool

    SciTech Connect

    Hu, Weigang; Graff, Pierre; Boettger, Thomas; Pouliot, Jean; and others

    2011-04-15

    Purpose: To develop a spatially encoded dose difference maximal intensity projection (DD-MIP) as an online patient dose evaluation tool for visualizing the dose differences between the planning dose and the dose on the treatment day. Methods: Megavoltage cone-beam CT (MVCBCT) images acquired on the treatment day are used for generating the dose difference index. Each index is represented by a different color for the underdose, acceptable, and overdose regions. A maximal intensity projection (MIP) algorithm is developed to compress all the information of an arbitrary 3D dose difference index into a 2D DD-MIP image. In such an algorithm, a distance transformation is generated based on the planning CT. Then, two new volumes representing the overdose and underdose regions of the dose difference index are encoded with the distance transformation map. The distance-encoded indices of each volume are normalized using the skin distance obtained on the planning CT. After that, two MIPs are generated based on the underdose and overdose volumes with green-to-blue and green-to-red lookup tables, respectively. Finally, the two MIPs are merged with an appropriate transparency level and rendered on the planning CT images. Results: The spatially encoded DD-MIP was implemented in a dose-guided radiotherapy prototype and tested on 33 MVCBCT images from six patients. The user can easily establish the thresholds for overdose and underdose. A 3% difference between the treatment and planning dose was used as the threshold in the study; hence, the DD-MIP shows red or blue color where the dose difference exceeds +3% or -3%, respectively. With such a method, the overdose and underdose regions can be visualized and distinguished without being overshadowed by superficial dose differences. Conclusions: A DD-MIP algorithm was developed that compresses information from 3D into a single or two orthogonal projections while indicating to the user whether the dose difference is on the skin surface or deeper.
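
    A condensed sketch of the construction: threshold the relative dose difference at ±3%, encode over- and underdose voxels by normalized skin depth from a distance transform, and collapse each volume with a maximum intensity projection; lookup tables and CT rendering are omitted.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dd_mip(planned, treatment, body_mask, axis=0, tol=0.03):
    rel_diff = (treatment - planned) / np.maximum(planned, 1e-6)
    depth = distance_transform_edt(body_mask)        # voxel depth from skin
    depth = depth / depth.max()                      # normalize by skin distance
    over  = np.where(rel_diff >  tol, depth, 0.0)    # distance-encoded overdose
    under = np.where(rel_diff < -tol, depth, 0.0)    # distance-encoded underdose
    return over.max(axis=axis), under.max(axis=axis)  # two 2D MIPs

# toy example: an 8% hot spot deep inside a spherical "patient"
z, y, x = np.ogrid[:64, :64, :64]
body = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 30 ** 2
planned = body.astype(float)
treatment = planned.copy()
treatment[30:35, 30:35, 30:35] *= 1.08
over_mip, under_mip = dd_mip(planned, treatment, body)
print(over_mip.max(), under_mip.max())
```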

  8. A computerized framework for monitoring four-dimensional dose distributions during stereotactic body radiation therapy using a portal dose image-based 2D/3D registration approach.

    PubMed

    Nakamoto, Takahiro; Arimura, Hidetaka; Nakamura, Katsumasa; Shioyama, Yoshiyuki; Mizoguchi, Asumi; Hirose, Taka-Aki; Honda, Hiroshi; Umezu, Yoshiyuki; Nakamura, Yasuhiko; Hirata, Hideki

    2015-03-01

    A computerized framework for monitoring four-dimensional (4D) dose distributions during stereotactic body radiation therapy based on a portal dose image (PDI)-based 2D/3D registration approach has been proposed in this study. Using the PDI-based registration approach, simulated 4D "treatment" CT images were derived from the deformation of 3D planning CT images so that a 2D planning PDI could be similar to a 2D dynamic clinical PDI at a breathing phase. The planning PDI was calculated by applying a dose calculation algorithm (a pencil beam convolution algorithm) to the geometry of the planning CT image and a virtual water equivalent phantom. The dynamic clinical PDIs were estimated from electronic portal imaging device (EPID) dynamic images including breathing phase data obtained during a treatment. The parameters of the affine transformation matrix were optimized based on an objective function and a gamma pass rate using a Levenberg-Marquardt (LM) algorithm. The proposed framework was applied to the EPID dynamic images of ten lung cancer patients, which included 183 frames (mean: 18.3 per patient). The 4D dose distributions during the treatment time were successfully obtained by applying the dose calculation algorithm to the simulated 4D "treatment" CT images. The mean±standard deviation (SD) of the percentage errors between the prescribed dose and the estimated dose at an isocenter for all cases was 3.25±4.43%. The maximum error for the ten cases was 14.67% (prescribed dose: 1.50Gy, estimated dose: 1.72Gy), and the minimum error was 0.00%. The proposed framework could be feasible for monitoring the 4D dose distribution and dose errors within a patient's body during treatment.

  9. A Novel Admixture-Based Pharmacogenetic Approach to Refine Warfarin Dosing in Caribbean Hispanics

    PubMed Central

    Claudio-Campos, Karla; Rivera-Miranda, Giselle; Bermúdez-Bosch, Luis; Renta, Jessicca Y.; Cadilla, Carmen L.; Cruz, Iadelisse; Feliu, Juan F.; Vergara, Cunegundo; Ruaño, Gualberto

    2016-01-01

    Aim This study is aimed at developing a novel admixture-adjusted pharmacogenomic approach to individually refine warfarin dosing in Caribbean Hispanic patients. Patients & Methods A multiple linear regression analysis of effective warfarin doses versus relevant genotypes, admixture, clinical and demographic factors was performed in 255 patients and further validated externally in another cohort of 55 individuals. Results The admixture-adjusted, genotype-guided warfarin dosing refinement algorithm developed in Caribbean Hispanics showed better predictability (R2 = 0.70, MAE = 0.72mg/day) than a clinical algorithm that excluded genotypes and admixture (R2 = 0.60, MAE = 0.99mg/day), and outperformed two prior pharmacogenetic algorithms in predicting effective dose in this population. For patients at the highest risk of adverse events, 45.5% of the dose predictions using the developed pharmacogenetic model resulted in ideal dose as compared with only 29% when using the clinical non-genetic algorithm (p<0.001). The admixture-driven pharmacogenetic algorithm predicted 58% of warfarin dose variance when externally validated in 55 individuals from an independent validation cohort (MAE = 0.89 mg/day, 24% mean bias). Conclusions Results supported our rationale to incorporate individual’s genotypes and unique admixture metrics into pharmacogenetic refinement models in order to increase predictability when expanding them to admixed populations like Caribbean Hispanics. Trial Registration ClinicalTrials.gov NCT01318057 PMID:26745506

  10. Treatment Planning for MRI Assisted Brachytherapy of Gynecologic Malignancies Based on Total Dose Constraints

    SciTech Connect

    Lang, Stefan; Kirisits, Christian; Dimopoulos, Johannes; Georg, Dietmar; Poetter, Richard

    2007-10-01

    Purpose: To develop a method for treatment planning and optimization of magnetic resonance imaging (MRI)-assisted gynecologic brachytherapy that includes biologically weighted total dose constraints. Methods and Materials: The applied algorithm is based on the linear-quadratic model and includes dose, dose rate, and fractionation of the whole radiotherapy setting, consisting of external beam therapy plus high-dose-rate (HDR), low-dose-rate (LDR) or pulsed-dose rate (PDR) brachytherapy. Biologically effective doses (BED) are converted to more familiar isoeffective (equivalent) doses in 2-Gy fractions. For individual treatment planning of each brachytherapy fraction, the algorithm calculates the physical dose per brachytherapy fraction that corresponds to a predefined isoeffective total dose constraint. Achieved target dose and sparing of organs at risk of already delivered brachytherapy fractions are incorporated. Results: Since implementation for use in clinical routine in 2001, MRI assisted treatment plans of 216 gynecologic patients (161 HDR, 55 PDR brachytherapy) were prospectively optimized taking into account isoeffective dose-volume histogram-based total dose constraints for high-risk clinical target volume (HR CTV) and organs at risk (bladder, rectum, sigmoid). The algorithm is implemented in a spreadsheet and the procedure is fast and efficient. An uncertainty analysis of the isoeffective total doses based on variations of tissue parameters shows that confidence intervals are larger for PDR compared with HDR brachytherapy. For common treatment schedules, overall uncertainties of high-risk clinical target volume and organs at risk are within 8 Gy, except for the bladder when using the PDR technique. Conclusion: The presented method to respect total dose constraints is reliable and efficient and an essential tool when aiming to increase local control and minimize side effects.
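
    The constraint arithmetic reduces to inverting the linear-quadratic relation; a minimal sketch that spends the entire remaining biologically weighted budget on a single HDR fraction (in practice it would be split across the remaining fractions, and all numbers here are illustrative).

```python
import math

def bed_from_eqd2(eqd2, ab):
    """Convert an EQD2 value to BED for tissue with the given alpha/beta."""
    return eqd2 * (1 + 2 / ab)

def dose_for_remaining_bed(bed_remaining, ab):
    """Solve d * (1 + d/ab) = bed_remaining for the physical fraction dose d."""
    if bed_remaining <= 0:
        return 0.0
    return 0.5 * ab * (math.sqrt(1 + 4 * bed_remaining / ab) - 1)

ab_rectum = 3.0                                   # assumed late-effect alpha/beta
constraint_bed = bed_from_eqd2(75.0, ab_rectum)   # e.g. 75 Gy EQD2 total limit
delivered_bed  = bed_from_eqd2(45.0, ab_rectum)   # EBRT + earlier fractions
d = dose_for_remaining_bed(constraint_bed - delivered_bed, ab_rectum)
print(f"allowed physical dose this fraction: {d:.2f} Gy")
```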

  11. Estimation of the Dose and Dose Rate Effectiveness Factor

    NASA Technical Reports Server (NTRS)

    Chappell, L.; Cucinotta, F. A.

    2013-01-01

    Current models to estimate radiation risk use the Life Span Study (LSS) cohort that received high doses and high dose rates of radiation. Transferring risks from these high dose rates to the low doses and dose rates received by astronauts in space is a source of uncertainty in our risk calculations. The solid cancer models recommended by BEIR VII [1], UNSCEAR [2], and Preston et al [3] are fitted adequately by a linear dose response model, which implies that low doses and dose rates would be estimated the same as high doses and dose rates. However, animal and cell experiments imply there should be curvature in the dose response curve for tumor induction. Furthermore, animal experiments that directly compare acute to chronic exposures show smaller increases in tumor induction for chronic than for acute exposures. A dose and dose rate effectiveness factor (DDREF) has been estimated and applied to transfer risks from the high doses and dose rates of the LSS cohort to low doses and dose rates such as from missions in space. The BEIR VII committee [1] combined DDREF estimates using the LSS cohort and animal experiments using Bayesian methods for their recommendation of a DDREF value of 1.5 with uncertainty. We reexamined the animal data considered by BEIR VII and included more animal data and human chromosome aberration data to improve the estimate for DDREF. Several experiments chosen by BEIR VII were deemed inappropriate for application to human models of solid cancer risk. Animal tumor experiments performed by Ullrich et al [4], Alpen et al [5], and Grahn et al [6] were analyzed to estimate the DDREF. Human chromosome aberration experiments performed on a sample of astronauts within NASA were also available to estimate the DDREF. The LSS cohort results reported by BEIR VII were combined with the new radiobiology results using Bayesian methods.

  12. Quantitative validation of a new coregistration algorithm

    SciTech Connect

    Pickar, R.D.; Esser, P.D.; Pozniakoff, T.A.; Van Heertum, R.L.; Stoddart, H.A. Jr.

    1995-08-01

    A new coregistration software package, Neuro900 Image Coregistration software, has been developed specifically for nuclear medicine. With this algorithm, the correlation coefficient is maximized between volumes generated from sets of transaxial slices. No localization markers or segmented surfaces are needed. The coregistration program was evaluated for translational and rotational registration accuracy. A Tc-99m HM-PAO split-dose study (0.53 mCi low dose, L, and 1.01 mCi high dose, H) was simulated with a Hoffman Brain Phantom with five fiducial markers. Translation error was determined by a shift in image centroid, and rotation error was determined by a simplified two-axis approach. Changes in registration accuracy were measured with respect to: (1) slice spacing, using the four different combinations LL, LH, HL, HH, (2) translational and rotational misalignment before coregistration, (3) changes in the step size of the iterative parameters. In all cases the algorithm converged with only small differences in translation offset. At 6 mm slice spacing, translational errors ranged from 0.9 to 2.8 mm (system resolution at 100 mm, 6.8 mm). The converged parameters showed little sensitivity to count density. In addition, the correlation coefficient increased with decreasing iterative step size, as expected. From these experiments, the authors found that this algorithm based on the maximization of the correlation coefficient between studies was an accurate way to coregister SPECT brain images.
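
    A toy version of the registration criterion: exhaustively search integer shifts for the one maximizing the correlation coefficient between volumes, with no markers or surfaces. The clinical software also optimizes rotations and finer steps; this shows only the criterion.

```python
import numpy as np

def correlation(a, b):
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def best_shift(fixed, moving, max_shift=4):
    """Brute-force integer shift of `moving` maximizing correlation with `fixed`."""
    best, best_r = (0, 0, 0), -1.0
    for dz in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                r = correlation(fixed, np.roll(moving, (dz, dy, dx), (0, 1, 2)))
                if r > best_r:
                    best, best_r = (dz, dy, dx), r
    return best, best_r

rng = np.random.default_rng(1)
vol = rng.random((24, 24, 24))
shifted = np.roll(vol, (2, -1, 3), (0, 1, 2)) + 0.05 * rng.random((24, 24, 24))
print(best_shift(vol, shifted))   # recovers (-2, 1, -3), undoing the applied shift
```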

  13. Dose escalation in radioimmunotherapy based on projected whole body dose

    SciTech Connect

    Wahl, R.L.; Kaminski, M.S.; Regan, D.

    1994-05-01

    A variety of approaches have been utilized in conducting phase I radioimmunotherapy dose-escalation trials. Escalation of dose has been based on graded increases in administered mCi, mCi/kg, or mCi/m2. It is also possible to escalate dose based on tracer-projected marrow, blood or whole body radiation dose. We describe our results in performing a dose-escalation trial in patients with non-Hodgkin lymphoma based on escalating administered whole-body radiation dose. The mCi dose administered was based on a patient-individualized tracer-projected whole-body dose. 25 patients were entered on the study. RIT with 131 I anti-B-1 was administered to 19 patients. The administered dose was prescribed based on the projected whole body dose, determined from patient-individualized tracer studies performed prior to RIT. Whole body dose estimates were based on the assumption that the patient was an ellipsoid, with 131 I antibody kinetics determined using a whole-body probe device acquiring daily conjugate views of 1 minute duration/view. Dose escalation levels proceeded in 10 cGy increments from 25 cGy whole-body and continue, now at 75 cGy. The correlation among potential methods of dose escalation and toxicity was assessed. Whole body radiation dose by probe was strongly correlated with the blood radiation dose determined from sequential blood sampling during tracer studies (r=.87). Blood radiation dose was very weakly correlated with mCi dose (r=.4) and mCi/kg (r=.45). Whole body radiation dose appeared less well-correlated with injected dose in mCi (r=.6), or mCi/kg (r=.64). Toxicity has been infrequent in these patients, but appears related to increasing whole body dose. Determination of whole-body radiation dose by gamma probe represents a non-invasive method of estimating blood radiation dose, and thus of estimating bone marrow radiation dose.

  14. Dose reconstruction for real-time patient-specific dose estimation in CT

    SciTech Connect

    De Man, Bruno Yin, Zhye; Wu, Mingye; FitzGerald, Paul; Kalra, Mannudeep

    2015-05-15

    Purpose: Many recent computed tomography (CT) dose reduction approaches belong to one of three categories: statistical reconstruction algorithms, efficient x-ray detectors, and optimized CT acquisition schemes with precise control over the x-ray distribution. The latter category could greatly benefit from fast and accurate methods for dose estimation, which would enable real-time patient-specific protocol optimization. Methods: The authors present a new method for volumetrically reconstructing absorbed dose on a per-voxel basis, directly from the actual CT images. The authors’ specific implementation combines a distance-driven pencil-beam approach to model the first-order x-ray interactions with a set of Gaussian convolution kernels to model the higher-order x-ray interactions. The authors performed a number of 3D simulation experiments comparing the proposed method to a Monte Carlo based ground truth. Results: The authors’ results indicate that the proposed approach offers a good trade-off between accuracy and computational efficiency. The images show a good qualitative correspondence to Monte Carlo estimates. Preliminary quantitative results show errors below 10%, except in bone regions, where the authors see a bigger model mismatch. The computational complexity is similar to that of a low-resolution filtered-backprojection algorithm. Conclusions: The authors present a method for analytic dose reconstruction in CT, similar to the techniques used in radiation therapy planning with megavoltage energies. Future work will include refinements of the proposed method to improve the accuracy as well as a more extensive validation study. The proposed method is not intended to replace methods that track individual x-ray photons, but the authors expect that it may prove useful in applications where real-time patient-specific dose estimation is required.
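
    A schematic of the two-stage model described above: a first-order primary term from attenuation along the beam axis plus a Gaussian convolution standing in for higher-order scatter; kernel widths, attenuation values, and the axis-aligned path integration are all simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reconstruct_dose(mu, entrance_fluence=1.0, scatter_sigma=3.0, scatter_frac=0.3):
    """mu: 3D linear-attenuation volume (1/voxel); beam travels along axis 0."""
    path = np.cumsum(mu, axis=0) - mu            # radiological path to each voxel
    primary = entrance_fluence * np.exp(-path) * mu  # first-order energy deposition
    scatter = gaussian_filter(primary, sigma=scatter_sigma)  # higher-order spread
    return (1 - scatter_frac) * primary + scatter_frac * scatter

mu = np.full((40, 32, 32), 0.02)
mu[15:25, 10:22, 10:22] = 0.04                   # denser (bone-like) insert
dose = reconstruct_dose(mu)
print(f"dose range: {dose.min():.4f} .. {dose.max():.4f}")
```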

  15. Radiotherapy dosimetry of the breast : Factors affecting dose to the patient

    NASA Astrophysics Data System (ADS)

    Venables, Karen

    The work presented in this thesis developed from the quality assurance for the START trial. This provided a unique opportunity to perform measurements in a breast-shaped, soft tissue equivalent phantom in over 40 hospitals, representing 75% of the radiotherapy centres in the UK. A wide range of planning systems using beam library, beam model and convolution based algorithms have been compared. The limitations of current algorithms as applied to breast radiotherapy have been investigated by analysing the results of the START quality assurance programme, performing further measurements of surface dose and setting up a Monte Carlo system to calculate dose distributions and superficial doses. Measurements in both 2D and 3D breast phantoms indicated that the average measured dose at the centre of the breast was lower than that calculated on the planning system by approximately 2%. Surface dose measurements showed good agreement between measurements and Monte Carlo calculations, with values ranging from 6% of the maximum dose for a small field (5cmx5cm) at normal incidence to 37% for a large field (9cmx20cm) at an angle of 75°. Calculation on CT plans with pixel-by-pixel correction for the breast density indicated that monitor units are lower by an average of 3% compared to a bulk-density-corrected plan assuming a density of 1 g.cm-3. The average dose estimated from TLD in build-up caps placed on the patient surface was 0.99 of the prescribed dose. This shows that the underestimation of dose due to the assumption of unit density tissue is partially cancelled by the overestimation of dose by the algorithms. The work showed that simple calculation algorithms can be used for calculation of dose to the breast; however, they are less accurate for patients who have undergone a mastectomy and in regions close to inhomogeneities, where more complex algorithms are needed.

  16. TU-F-17A-08: The Relative Accuracy of 4D Dose Accumulation for Lung Radiotherapy Using Rigid Dose Projection Versus Dose Recalculation On Every Breathing Phase

    SciTech Connect

    Lamb, J; Lee, C; Tee, S; Lee, P; Iwamoto, K; Low, D; Valdes, G; Robinson, C

    2014-06-15

    Purpose: To investigate the accuracy of 4D dose accumulation using projection of dose calculated on the end-exhalation, mid-ventilation, or average intensity breathing phase CT scan, versus dose accumulation performed using full Monte Carlo dose recalculation on every breathing phase. Methods: Radiotherapy plans were analyzed for 10 patients with stage I-II lung cancer planned using 4D-CT. SBRT plans were optimized using the dose calculated by a commercially-available Monte Carlo algorithm on the end-exhalation 4D-CT phase. 4D dose accumulations using deformable registration were performed with a commercially available tool that projected the planned dose onto every breathing phase without recalculation, as well as with a Monte Carlo recalculation of the dose on all breathing phases. The 3D planned dose (3D-EX), the 3D dose calculated on the average intensity image (3D-AVE), and the 4D accumulations of the dose calculated on the end-exhalation phase CT (4D-PR-EX), the mid-ventilation phase CT (4D-PR-MID), and the average intensity image (4D-PR-AVE), respectively, were compared against the accumulation of the Monte Carlo dose recalculated on every phase. Plan evaluation metrics relating to target volumes and critical structures relevant for lung SBRT were analyzed. Results: Plan evaluation metrics tabulated using 4D-PR-EX, 4D-PR-MID, and 4D-PR-AVE differed from those tabulated using Monte Carlo recalculation on every phase by an average of 0.14±0.70 Gy, - 0.11±0.51 Gy, and 0.00±0.62 Gy, respectively. Deviations of between 8 and 13 Gy were observed between the 4D-MC calculations and both 3D methods for the proximal bronchial trees of 3 patients. Conclusions: 4D dose accumulation using projection without re-calculation may be sufficiently accurate compared to 4D dose accumulated from Monte Carlo recalculation on every phase, depending on institutional protocols. Use of 4D dose accumulation should be considered when evaluating normal tissue complication

  17. Calculation of Residual Dose Around Small Objects Using Mu2e Target as an Example

    SciTech Connect

    Pronskikh, V.S.; Leveling, A.F.; Mokhov, N.V.; Rakhno, I.L.; Aarnio, P.; /Aalto U.

    2011-09-01

    The MARS15 code provides contact residual dose rates for relatively large accelerator and experimental components for predefined irradiation and cooling times. The dose rate at particular distances from the components, some of which can be rather small in size, is calculated in a post Monte-Carlo stage via special algorithms described elsewhere. The approach is further developed and described in this paper.

  18. Comparison of the Performance of the Warfarin Pharmacogenetics Algorithms in Patients with Surgery of Heart Valve Replacement and Heart Valvuloplasty.

    PubMed

    Xu, Hang; Su, Shi; Tang, Wuji; Wei, Meng; Wang, Tao; Wang, Dongjin; Ge, Weihong

    2015-09-01

    A large number of warfarin pharmacogenetics algorithms have been published. Our research aimed to evaluate the performance of selected pharmacogenetic algorithms in patients undergoing heart valve replacement or heart valvuloplasty during the initial and stable phases of anticoagulation treatment. 10 pharmacogenetic algorithms were selected by searching PubMed. We compared the performance of the selected algorithms in a cohort of 193 patients during the initial and stable phases of anticoagulation therapy. The predicted dose was compared to the therapeutic dose using the percentage of predicted doses falling within 20% of the actual dose (percentage within 20%) and the mean absolute error (MAE). The average warfarin dose for patients was 3.05±1.23mg/day for initial treatment and 3.45±1.18mg/day for stable treatment. The percentages of the predicted dose within 20% of the therapeutic dose were 44.0±8.8% and 44.6±9.7% for the initial and stable phases, respectively. The MAEs of the selected algorithms were 0.85±0.18mg/day and 0.93±0.19mg/day, respectively. All algorithms had better performance in the ideal-dose group than in the low-dose and high-dose groups. The only exception is the Wadelius et al. algorithm, which had better performance in the high-dose group. The algorithms had similar performance except for the Wadelius et al. and Miao et al. algorithms, which had poor accuracy in our study cohort. The Gage et al. algorithm had better performance in both the initial and stable phases of treatment. Algorithms had relatively higher accuracy in the >50 years group of patients in the stable phase.
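
    The two headline metrics are easy to state precisely; a small sketch computes them directly from paired predicted and therapeutic doses (synthetic numbers).

```python
import numpy as np

def percent_within_20(predicted, actual):
    """Share of predictions within 20% of the actual (therapeutic) dose."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return 100.0 * np.mean(np.abs(predicted - actual) <= 0.2 * actual)

def mean_absolute_error(predicted, actual):
    return float(np.mean(np.abs(np.asarray(predicted) - np.asarray(actual))))

actual    = [2.5, 3.0, 3.5, 5.0, 1.5]   # mg/day, therapeutic doses
predicted = [2.9, 2.6, 3.4, 3.9, 1.9]
print(f"within 20%: {percent_within_20(predicted, actual):.0f}%")
print(f"MAE: {mean_absolute_error(predicted, actual):.2f} mg/day")
```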

  19. Algorithms for optimizing CT fluence control

    NASA Astrophysics Data System (ADS)

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-03-01

    The ability to customize the incident x-ray fluence in CT via beam-shaping filters or mA modulation is known to improve image quality and/or reduce radiation dose. Previous work has shown that complete control of x-ray fluence (ray-by-ray fluence modulation) would further improve dose efficiency. While complete control of fluence is not currently possible, emerging concepts such as dynamic attenuators and inverse-geometry CT allow nearly complete control to be realized. Optimally using ray-by-ray fluence modulation requires solving a very high-dimensional optimization problem. Most optimization techniques fail or only provide approximate solutions. We present efficient algorithms for minimizing mean or peak variance given a fixed dose limit. The reductions in variance can easily be translated to reduction in dose, if the original variance met image quality requirements. For mean variance, a closed form solution is derived. The peak variance problem is recast as iterated, weighted mean variance minimization, and at each iteration it is possible to bound the distance to the optimal solution. We apply our algorithms in simulations of scans of the thorax and abdomen. Peak variance reductions of 45% and 65% are demonstrated in the abdomen and thorax, respectively, compared to a bowtie filter alone. Mean variance shows smaller gains (about 15%).
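
    The closed-form flavor of the mean-variance result can be illustrated with a simplified noise model in which ray i with fluence f_i contributes variance w_i/f_i and dose d_i·f_i; a Lagrange-multiplier argument then gives f_i proportional to sqrt(w_i/d_i). This model is an assumption for illustration, not the paper's full formulation.

```python
import numpy as np

def optimal_fluence(w, d, dose_budget):
    """Minimize sum(w/f) subject to sum(d*f) = dose_budget; f_i ~ sqrt(w_i/d_i)."""
    f = np.sqrt(w / d)                       # shape of the optimal solution
    return f * dose_budget / np.sum(d * f)   # scale to exactly meet the budget

rng = np.random.default_rng(2)
w, d = rng.uniform(0.5, 2.0, 16), rng.uniform(0.5, 2.0, 16)
budget = 10.0
f_opt = optimal_fluence(w, d, budget)
f_flat = np.full(16, budget / d.sum())       # uniform fluence, same total dose
print(f"variance optimal: {np.sum(w / f_opt):.3f}")
print(f"variance uniform: {np.sum(w / f_flat):.3f}")   # never smaller
```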

  20. Radiation dose to physicians’ eye lens during interventional radiology

    NASA Astrophysics Data System (ADS)

    Bahruddin, N. A.; Hashim, S.; Karim, M. K. A.; Sabarudin, A.; Ang, W. C.; Salehhon, N.; Bakar, K. A.

    2016-03-01

    The demand for interventional radiology has increased, leading to significant radiation risk, and eye lens dose assessment has become a major concern. In this study, we investigate physicians' eye lens doses during interventional procedures. Measurements were made using TLD-100 (LiF: Mg, Ti) dosimeters and were recorded as the equivalent dose at a depth of 0.07 mm, Hp(0.07). Annual Hp(0.07) and annual effective dose were estimated using a workload estimate for a year and the Von Boetticher algorithm. Our results showed mean Hp(0.07) doses of 0.33 mSv and 0.20 mSv for the left and right eye lens, respectively. The highest estimated annual eye lens dose was 29.33 mSv per year, recorded on the left eye lens during a fistulogram procedure. Five physicians had exceeded the 20 mSv dose limit recommended by the International Commission on Radiological Protection (ICRP). It is suggested that frequent training and education on occupational radiation exposure are necessary to increase the knowledge and awareness of physicians, thus reducing dose during interventional procedures.

  1. Hanford Site Annual Report Radiological Dose Calculation Upgrade Evaluation

    SciTech Connect

    Snyder, Sandra F.

    2010-02-28

    Operations at the Hanford Site, Richland, Washington, result in the release of radioactive materials that expose offsite residents. Site authorities are required to estimate the dose to the maximally exposed offsite resident. Due to the very low levels of exposure at the residence, computer models, rather than environmental samples, are used to estimate exposure, intake, and dose. A DOS-based model has been used in the past (GENII version 1.485). GENII v1.485 has been updated to a Windows®-based software (GENII version 2.08). Use of the updated software will facilitate future dose evaluations, but it must be demonstrated to provide results comparable to those of GENII v1.485. This report describes the GENII v1.485 and GENII v2.08 exposure, intake, and dose estimates for the maximally exposed offsite resident reported for calendar year 2008. The GENII v2.08 results reflect updates to the implemented algorithms. No two environmental models produce the same results, as was again demonstrated in this report. The aggregated results from the 2008 Hanford Site airborne and surface water exposure scenarios provide comparable doses. Therefore, the GENII v2.08 software is recommended for future offsite resident dose evaluations.

  2. An active set algorithm for treatment planning optimization.

    PubMed

    Hristov, D H; Fallone, B G

    1997-09-01

    An active set algorithm for optimization of radiation therapy dose planning by intensity modulated beams has been developed. The algorithm employs a conjugate-gradient routine for subspace minimization in order to achieve a higher rate of convergence than the widely used constrained steepest-descent method at the expense of a negligible amount of overhead calculations. The performance of the new algorithm has been compared to that of the constrained steepest-descent method for various treatment geometries and two different objectives. The active set algorithm is found to be superior to the constrained steepest descent, both in terms of its convergence properties and the residual value of the cost functions at termination. Its use can significantly accelerate the design of conformal plans with intensity modulated beams by decreasing the number of time-consuming dose calculations.
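
    For reference, the constrained steepest-descent baseline mentioned here can be written compactly for a nonnegative-intensity least-squares objective; variables held at the zero bound with an ascent gradient form the active set. The sketch below is a generic illustration of that scheme, not the authors' implementation (which replaces the steepest-descent step with conjugate-gradient subspace minimization).

        import numpy as np

        def constrained_steepest_descent(A, p, iters=200):
            """min ||A x - p||^2 subject to x >= 0 (beamlet intensities).
            A maps beamlet intensities to dose; p is the prescription."""
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = A.T @ (A @ x - p)              # (half) objective gradient
                g[(x <= 0) & (g > 0)] = 0.0        # freeze the active set
                if not g.any():
                    break                          # KKT conditions met
                Ag = A @ g
                step = (g @ g) / (Ag @ Ag)         # exact line search (quadratic)
                x = np.maximum(x - step * g, 0.0)  # project onto x >= 0
            return x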

  3. Quality assurance for radiotherapy in prostate cancer: Point dose measurements in intensity modulated fields with large dose gradients

    SciTech Connect

    Escude, Lluis . E-mail: lluis.escude@gmx.net; Linero, Dolors; Molla, Meritxell; Miralbell, Raymond

    2006-11-15

    Purpose: We aimed to evaluate an optimization algorithm designed to find the most favorable points to position an ionization chamber (IC) for quality assurance dose measurements of patients treated for prostate cancer with intensity-modulated radiotherapy (IMRT) and fields up to 10 cm x 10 cm. Methods and Materials: Three cylindrical ICs (PTW, Freiburg, Germany) were used with volumes of 0.6 cc, 0.125 cc, and 0.015 cc. Dose measurements were made in a plastic phantom (PMMA) at 287 optimized points. An algorithm was designed to search for points with the lowest dose gradient. Measurements were made also at 39 nonoptimized points. Results were normalized to a reference homogeneous field introducing a dose ratio factor, which allowed us to compare measured vs. calculated values as percentile dose ratio factor deviations {delta}F (%). A tolerance range of {delta}F (%) of {+-}3% was considered. Results: Half of the {delta}F (%) values obtained at nonoptimized points were outside the acceptable range. Values at optimized points were widely spread for the largest IC (i.e., 60% of the results outside the tolerance range), whereas for the two small-volume ICs, only 14.6% of the results were outside the tolerance interval. No differences were observed when comparing the two small ICs. Conclusions: The presented optimization algorithm is a useful tool to determine the best IC in-field position for optimal dose measurement conditions. A good agreement between calculated and measured doses can be obtained by positioning small volume chambers at carefully selected points in the field. Large chambers may be unreliable even in optimized points for IMRT fields {<=}10 cm x 10 cm.
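
    The point-selection idea here (find in-field positions where the dose is locally flat, so a finite-volume chamber reads reliably) can be sketched directly on a planar dose grid. The snippet below is a simplified illustration, not the published algorithm; the in-field criterion is an assumption.

        import numpy as np

        def lowest_gradient_points(dose, k=10):
            """Return the k in-field grid positions with the smallest local
            dose-gradient magnitude on a 2D dose plane."""
            gy, gx = np.gradient(dose)
            grad_mag = np.hypot(gx, gy)
            in_field = dose > 0.5 * dose.max()    # assumed in-field criterion
            grad_mag[~in_field] = np.inf          # exclude out-of-field points
            flat = np.argsort(grad_mag, axis=None)[:k]
            return np.column_stack(np.unravel_index(flat, dose.shape))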

  4. SU-E-T-806: Very Fast GPU-Based IMPT Dose Computation

    SciTech Connect

    Sullivan, A; Brand, M

    2015-06-15

    Purpose: Designing particle therapy treatment plans is a dosimetrist-in-the-loop optimization wherein the conflicting constraints of achieving a desired tumor dose distribution must be balanced against the need to minimize the dose to nearby OARs. IMPT introduces an additional, inner, numerical optimization step in which the dosimetrist’s current set of constraints are used to determine the weighting of beam spots. Very fast dose calculations are needed to enable the dosimetrist to perform many iterations of the outer optimization in a commercially reasonable time. Methods: We have developed a GPU-based convolution-type dose computation algorithm that more accurately handles heterogeneities than earlier algorithms by redistributing energy from dose computed in a water volume. The depth dependence of the beam size is handled by pre-processing Bragg curves using a weighted superposition of Gaussian bases. Additionally, scattering, the orientation of treatment ports, and the non-parallel propagation of beams are handled by large, but sparse, energy-redistribution matrices that implement affine transforms. Results: We tested our algorithm using a brain tumor dataset with 1 mm voxels and a single treatment port from the patient’s anterior through the sinuses. The resulting dose volume is 100 × 100 × 230 mm with 66,200 beam spots on a 3 × 3 × 2 mm grid. The dose computation takes <1 msec on a GeForce GTX Titan GPU with the Gamma passing rate for 2mm/2% criterion of 99.1% compared to dose calculated by an alternative dose algorithm based on pencil beams. We will present comparisons to Monte Carlo dose calculations. Conclusion: Our high-speed dose computation method enables the IMPT spot weights to be optimized in <1 second, resulting in a nearly instantaneous response to user changes to dose constraints. This permits the creation of higher quality plans by allowing the dosimetrist to evaluate more alternatives in a short period of time.
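
    The Bragg-curve pre-processing step described here (a weighted superposition of Gaussian bases) can be sketched as a nonnegative least-squares fit. The snippet below illustrates that idea; the basis centers and width are free choices, and this is not the authors' GPU code.

        import numpy as np
        from scipy.optimize import nnls

        def fit_bragg_gaussian_basis(depth, dose, centers, sigma):
            """Fit a depth-dose (Bragg) curve as a nonnegative weighted sum of
            Gaussian bases; returns the weights and the reconstruction."""
            B = np.exp(-0.5 * ((depth[:, None] - centers[None, :]) / sigma) ** 2)
            weights, _ = nnls(B, dose)            # nonnegative least squares
            return weights, B @ weights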

  5. Automated coronary artery calcification detection on low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Cham, Matthew D.; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    Coronary artery calcification (CAC) measurement from low-dose CT images can be used to assess the risk of coronary artery disease. A fully automatic algorithm to detect and measure CAC from low-dose non-contrast, non-ECG-gated chest CT scans is presented. Based on the automatically detected CAC, the Agatston score (AS), mass score and volume score were computed. These were compared with scores obtained manually from standard-dose ECG-gated scans and low-dose un-gated scans of the same patient. The automatic algorithm segments the heart region based on other pre-segmented organs to provide a coronary region mask. The mitral valve and aortic valve calcification is identified and excluded. All remaining voxels greater than 180HU within the mask region are considered as CAC candidates. The heart segmentation algorithm was evaluated on 400 non-contrast cases with both low-dose and regular dose CT scans. By visual inspection, 371 (92.8%) of the segmentations were acceptable. The automated CAC detection algorithm was evaluated on 41 low-dose non-contrast CT scans. Manual markings were performed on both low-dose and standard-dose scans for these cases. Using linear regression, the correlation of the automatic AS with the standard-dose manual scores was 0.86; with the low-dose manual scores the correlation was 0.91. Standard risk categories were also computed. The automated method risk category agreed with manual markings of gated scans for 24 cases while 15 cases were 1 category off. For low-dose scans, the automatic method agreed with 33 cases while 7 cases were 1 category off.
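
    The scoring pipeline described above (threshold inside a coronary mask, group voxels into lesions, then score) can be sketched per slice as follows. The Agatston density weights used are the standard ones; the 180 HU threshold follows the abstract, and this is an illustration rather than the authors' implementation.

        import numpy as np
        from scipy import ndimage

        def agatston_slice_score(slice_hu, coronary_mask, pixel_area_mm2,
                                 threshold=180):
            """Agatston contribution of one slice: lesion area (mm^2) times a
            density weight from the lesion's peak attenuation."""
            cand = (slice_hu >= threshold) & coronary_mask
            labels, n = ndimage.label(cand)       # group voxels into lesions
            score = 0.0
            for i in range(1, n + 1):
                lesion = labels == i
                peak = slice_hu[lesion].max()
                # Standard weights: <200 -> 1, 200-299 -> 2, 300-399 -> 3, >=400 -> 4
                w = min(int(peak // 100), 4) if peak >= 200 else 1
                score += lesion.sum() * pixel_area_mm2 * w
            return score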

  6. Assessment of the effective dose equivalent for external photon radiation

    SciTech Connect

    Reece, W.D.; Poston, J.W.; Xu, X.G. )

    1993-02-01

    Beginning in January 1994, US nuclear power plants must change the way that they determine the radiation exposure to their workforce. At that time, revisions to Title 10 Part 20 of the Code of Federal Regulations will be in force, requiring licensees to evaluate worker radiation exposure using a risk-based methodology termed the "effective dose equivalent." A research project was undertaken to improve upon the conservative method presently used for assessing effective dose equivalent. In this project, effective dose equivalent was calculated using a mathematical model of the human body and tracking photon interactions for a wide variety of radiation source geometries using Monte Carlo computer code simulations. Algorithms were then developed to relate measurements of the photon flux on the surface of the body (as measured by dosimeters) to effective dose equivalent. This report (Volume I of a two-part study) describes: the concept of effective dose equivalent; the evolution of the concept and its incorporation into regulations; the variations in human organ susceptibility to radiation; the mathematical modeling and calculational techniques used; and the results of effective dose equivalent calculations for a broad range of photon energies and radiation source geometries. The study determined that for beam radiation sources the highest effective dose equivalent occurs for beams striking the front of the torso. Beams striking the rear of the torso produce the next highest effective dose equivalent, with effective dose equivalent falling significantly as one departs from these two orientations. For point sources, the highest effective dose equivalent occurs when the sources are in contact with the body on the front of the torso. For females the highest effective dose equivalent occurs when the source is on the sternum; for males, when it is on the gonads.
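
    The effective dose equivalent underlying these algorithms is the tissue-weighted sum of organ dose equivalents. A minimal sketch follows, using the ICRP 26 tissue weighting factors on which the 10 CFR Part 20 methodology is based; the organ doses fed in would come from Monte Carlo calculations like those described above.

        # Effective dose equivalent: E = sum over tissues T of w_T * H_T,
        # with ICRP 26 tissue weighting factors (basis of 10 CFR Part 20).
        W_T = {"gonads": 0.25, "breast": 0.15, "red_marrow": 0.12,
               "lung": 0.12, "thyroid": 0.03, "bone_surface": 0.03,
               "remainder": 0.30}

        def effective_dose_equivalent(organ_dose_sv):
            """organ_dose_sv: dict of organ dose equivalents (Sv) keyed like W_T."""
            return sum(w * organ_dose_sv[t] for t, w in W_T.items())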

  7. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  8. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1991-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demographics, agriculture, food habits; and environmental pathways and dose estimates.

  9. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-02-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.

  10. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  11. License plate detection algorithm

    NASA Astrophysics Data System (ADS)

    Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds

    2013-12-01

    A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on pixel intensity transition (gradient) analysis. Nearly 2500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of algorithm parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera locations during our tests, and therefore geometric distortion and interference from trees, this result can be considered acceptable. Correlations between source data (such as license plate dimensions and texture, camera location, and others) and the parameters of the algorithm were also defined.
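
    As a rough illustration of intensity-transition analysis, the sketch below counts strong horizontal gradients per image row; rows where characters produce dense transitions are plate-row candidates. This is a simplified reading of the approach, not the authors' algorithm, and the threshold is an assumed parameter.

        import numpy as np

        def plate_candidate_rows(gray, edge_thresh=30):
            """Return rows with unusually many strong horizontal intensity
            transitions (license plate text produces dense transitions)."""
            gx = np.abs(np.diff(gray.astype(float), axis=1))  # horizontal gradients
            transitions = (gx > edge_thresh).sum(axis=1)      # transitions per row
            cutoff = transitions.mean() + 2 * transitions.std()
            return np.where(transitions > cutoff)[0]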

  12. Distributed Minimum Hop Algorithms

    DTIC Science & Technology

    1982-01-01

    [Excerpted fragments] ...(acknowledgement), node d starts iteration i+1, and otherwise the algorithm terminates. A detailed description of the algorithm is given in pidgin ALGOL... The precise behavior of the algorithm under these circumstances is described by the pidgin ALGOL program in the appendix, which is executed by each node... An inductive inequality on the neighbor hop counts N_j (garbled in extraction) completes the proof... Algorithm D1 in pidgin ALGOL...

  13. Proton dose distribution measurements using a MOSFET detector with a simple dose-weighted correction method for LET effects.

    PubMed

    Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-04-04

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector response at the Bragg peak was 0.74 relative to the IC detector. For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). The thinner oxide layer reduced the LET dependence in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors.
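
    The correction step described above amounts to a lookup of a residual-range-dependent factor applied to each raw reading. A minimal sketch under that reading follows; the tabulated factors are hypothetical placeholders, not the paper's calibration data.

        import numpy as np

        # Hypothetical correction-factor table versus residual range (cm).
        residual_range_cm = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
        correction_factor = np.array([1.35, 1.25, 1.15, 1.08, 1.03, 1.00])

        def corrected_dose(mosfet_reading, residual_range):
            """Interpolate the LET correction at the point's residual range
            (from a pencil beam calculation) and apply it to the reading."""
            f = np.interp(residual_range, residual_range_cm, correction_factor)
            return mosfet_reading * f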

  14. Race influences warfarin dose changes associated with genetic factors.

    PubMed

    Limdi, Nita A; Brown, Todd M; Yan, Qi; Thigpen, Jonathan L; Shendre, Aditi; Liu, Nianjun; Hill, Charles E; Arnett, Donna K; Beasley, T Mark

    2015-07-23

    Warfarin dosing algorithms adjust for race, assigning a fixed effect size to each predictor, thereby attenuating the differential effect by race. Attenuation likely occurs in both race groups but may be more pronounced in the less-represented race group. Therefore, we evaluated whether the effect of clinical (age, body surface area [BSA], chronic kidney disease [CKD], and amiodarone use) and genetic factors (CYP2C9*2, *3, *5, *6, *11, rs12777823, VKORC1, and CYP4F2) on warfarin dose differs by race using regression analyses among 1357 patients enrolled in a prospective cohort study and compared predictive ability of race-combined vs race-stratified models. Differential effect of predictors by race was assessed using predictor-race interactions in race-combined analyses. Warfarin dose was influenced by age, BSA, CKD, amiodarone use, and CYP2C9*3 and VKORC1 variants in both races, by CYP2C9*2 and CYP4F2 variants in European Americans, and by rs12777823 in African Americans. CYP2C9*2 was associated with a lower dose only among European Americans (20.6% vs 3.0%, P < .001) and rs12777823 only among African Americans (12.3% vs 2.3%, P = .006). Although VKORC1 was associated with dose decrease in both races, the proportional decrease was higher among European Americans (28.9% vs 19.9%, P = .003) compared with African Americans. Race-stratified analysis improved dose prediction in both race groups compared with race-combined analysis. We demonstrate that the effect of predictors on warfarin dose differs by race, which may explain divergent findings reported by recent warfarin pharmacogenetic trials. We recommend that warfarin dosing algorithms should be stratified by race rather than adjusted for race.
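
    The race-stratified analysis recommended above can be sketched as fitting separate ordinary least squares models per race group rather than one race-adjusted model. The snippet below is a schematic under that reading; the predictor matrix, group labels, and log-dose outcome are hypothetical stand-ins for the study's variables.

        import numpy as np

        def fit_dose_model(X, log_dose):
            """OLS fit of log(warfarin dose) on clinical/genetic predictors;
            stratifying by race lets every coefficient differ by race."""
            X1 = np.column_stack([np.ones(len(X)), X])   # add intercept
            beta, *_ = np.linalg.lstsq(X1, log_dose, rcond=None)
            return beta

        # Stratified fits (race, X, y are hypothetical arrays):
        # beta_ea = fit_dose_model(X[race == "EA"], y[race == "EA"])
        # beta_aa = fit_dose_model(X[race == "AA"], y[race == "AA"])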

  15. A phantom study on the behavior of Acuros XB algorithm in flattening filter free photon beams.

    PubMed

    Muralidhar, K R; Pangam, Suresh; Srinivas, P; Athar Ali, Mirza; Priya, V Sujana; Komanduri, Krishna

    2015-01-01

    To study the behavior of the Acuros XB algorithm for flattening filter free (FFF) photon beams in comparison with the anisotropic analytical algorithm (AAA) when applied to homogeneous and heterogeneous phantoms in conventional and RapidArc techniques. Acuros XB (Eclipse version 10.0, Varian Medical Systems, CA, USA) and AAA algorithms were used to calculate dose distributions for both 6X FFF and 10X FFF energies. RapidArc plans were created on the Catphan 504 phantom, and conventional plans on a virtual homogeneous water phantom of 30 × 30 × 30 cm(3), a virtual heterogeneous phantom with various inserts, and a solid water phantom with an air cavity. Doses at inserts of different densities were calculated with both the AAA and Acuros algorithms. The maximum percentage variation in dose was observed in the air insert (-944 HU) and the minimum in the acrylic insert (85 HU) for both 6X FFF and 10X FFF photons. Less than 1% variation was observed between -149 HU and 282 HU for both energies. At -40 HU and 765 HU, Acuros behaved quite differently with 10X FFF. The maximum percentage variation in dose was observed at lower HU values and the minimum at higher HU values for both FFF energies. The global maximum dose was observed at greater depths for Acuros than for AAA for both energies. An increase in dose was observed with the Acuros algorithm at almost all densities, with a decrease at a few densities ranging from 282 to 643 HU. Field size, depth, beam energy, and material density influenced the dose difference between the two algorithms.

  16. A comprehensive study on the relationship between the image quality and imaging dose in low-dose cone beam CT

    NASA Astrophysics Data System (ADS)

    Yan, Hao; Cervino, Laura; Jia, Xun; Jiang, Steve B.

    2012-04-01

    While compressed sensing (CS)-based algorithms have been developed for the low-dose cone beam CT (CBCT) reconstruction, a clear understanding of the relationship between the image quality and imaging dose at low-dose levels is needed. In this paper, we qualitatively investigate this subject in a comprehensive manner with extensive experimental and simulation studies. The basic idea is to plot both the image quality and imaging dose together as functions of the number of projections and mAs per projection over the whole clinically relevant range. On this basis, a clear understanding of the tradeoff between the image quality and imaging dose can be achieved and optimal low-dose CBCT scan protocols can be developed to maximize the dose reduction while minimizing the image quality loss for various imaging tasks in image-guided radiation therapy (IGRT). Main findings of this work include (1) under the CS-based reconstruction framework, image quality has little degradation over a large range of dose variation. Image quality degradation becomes evident when the imaging dose (approximated with the x-ray tube load) is decreased below 100 total mAs. An imaging dose lower than 40 total mAs leads to a dramatic image degradation, and thus should be used cautiously. Optimal low-dose CBCT scan protocols likely fall in the dose range of 40-100 total mAs, depending on the specific IGRT applications. (2) Among different scan protocols at a constant low-dose level, the super sparse-view reconstruction with the projection number less than 50 is the most challenging case, even with strong regularization. Better image quality can be acquired with low mAs protocols. (3) The optimal scan protocol is the combination of a medium number of projections and a medium level of mAs/view. This is more evident when the dose is around 72.8 total mAs or below and when the ROI is a low-contrast or high-resolution object. Based on our results, the optimal number of projections is around 90 to 120. (4

  17. A comprehensive study on the relationship between the image quality and imaging dose in low-dose cone beam CT.

    PubMed

    Yan, Hao; Cervino, Laura; Jia, Xun; Jiang, Steve B

    2012-04-07

    While compressed sensing (CS)-based algorithms have been developed for the low-dose cone beam CT (CBCT) reconstruction, a clear understanding of the relationship between the image quality and imaging dose at low-dose levels is needed. In this paper, we qualitatively investigate this subject in a comprehensive manner with extensive experimental and simulation studies. The basic idea is to plot both the image quality and imaging dose together as functions of the number of projections and mAs per projection over the whole clinically relevant range. On this basis, a clear understanding of the tradeoff between the image quality and imaging dose can be achieved and optimal low-dose CBCT scan protocols can be developed to maximize the dose reduction while minimizing the image quality loss for various imaging tasks in image-guided radiation therapy (IGRT). Main findings of this work include (1) under the CS-based reconstruction framework, image quality has little degradation over a large range of dose variation. Image quality degradation becomes evident when the imaging dose (approximated with the x-ray tube load) is decreased below 100 total mAs. An imaging dose lower than 40 total mAs leads to a dramatic image degradation, and thus should be used cautiously. Optimal low-dose CBCT scan protocols likely fall in the dose range of 40-100 total mAs, depending on the specific IGRT applications. (2) Among different scan protocols at a constant low-dose level, the super sparse-view reconstruction with the projection number less than 50 is the most challenging case, even with strong regularization. Better image quality can be acquired with low mAs protocols. (3) The optimal scan protocol is the combination of a medium number of projections and a medium level of mAs/view. This is more evident when the dose is around 72.8 total mAs or below and when the ROI is a low-contrast or high-resolution object. Based on our results, the optimal number of projections is around 90 to 120. (4

  18. A comprehensive study on the relationship between image quality and imaging dose in low-dose cone beam CT

    PubMed Central

    Yan, Hao; Cervino, Laura; Jia, Xun; Jiang, Steve B.

    2012-01-01

    While compressed sensing (CS) based algorithms have been developed for low-dose cone beam CT (CBCT) reconstruction, a clear understanding of the relationship between the image quality and imaging dose at low dose levels is needed. In this paper, we qualitatively investigate this subject in a comprehensive manner with extensive experimental and simulation studies. The basic idea is to plot both the image quality and imaging dose together as functions of number of projections and mAs per projection over the whole clinically relevant range. On this basis, a clear understanding of the tradeoff between image quality and imaging dose can be achieved and optimal low-dose CBCT scan protocols can be developed to maximize the dose reduction while minimizing the image quality loss for various imaging tasks in image guided radiation therapy (IGRT). Main findings of this work include: 1) Under the CS-based reconstruction framework, image quality has little degradation over a large range of dose variation. Image quality degradation becomes evident when the imaging dose (approximated with the x-ray tube load) is decreased below 100 total mAs. An imaging dose lower than 40 total mAs leads to a dramatic image degradation, and thus should be used cautiously. Optimal low-dose CBCT scan protocols likely fall in the dose range of 40–100 total mAs, depending on the specific IGRT applications. 2) Among different scan protocols at a constant low-dose level, the super sparse-view reconstruction with projection number less than 50 is the most challenging case, even with strong regularization. Better image quality can be acquired with low mAs protocols. 3) The optimal scan protocol is the combination of a medium number of projections and a medium level of mAs/view. This is more evident when the dose is around 72.8 total mAs or below and when the ROI is a low-contrast or high-resolution object. Based on our results, the optimal number of projections is around 90 to 120. 4) The clinically
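
    Since the imaging dose is approximated by the total tube load (projections × mAs per view), the protocol space these studies map out can be enumerated directly. A minimal sketch follows; the protocol values are illustrative, and the 40-100 total-mAs window is the range the study identifies as the likely optimum.

        import numpy as np

        # Enumerate (projections, mAs/view) protocols; total mAs is the
        # tube-load proxy for imaging dose used in the study.
        projections = np.array([60, 90, 120, 180, 360])
        mas_per_view = np.array([0.2, 0.4, 0.8, 1.6])
        total_mas = projections[:, None] * mas_per_view[None, :]
        for i, n in enumerate(projections):
            for j, m in enumerate(mas_per_view):
                if 40 <= total_mas[i, j] <= 100:   # study's suggested window
                    print(f"{n} views x {m} mAs/view = {total_mas[i, j]:.0f} total mAs")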

  19. Dosimetric algorithm to reproduce isodose curves obtained from a LINAC.

    PubMed

    Estrada Espinosa, Julio Cesar; Martínez Ovalle, Segundo Agustín; Pereira Benavides, Cinthia Kotzian

    2014-01-01

    In this work, isodose curves are obtained with a new dosimetric algorithm that uses numerical data from the percentage depth dose (PDD) and the maximum absorbed dose profile, calculated by Monte Carlo for an 18 MV LINAC. The software reproduces the absorbed dose percentage in the whole irradiated volume quickly and with a good approximation. To validate the results, the full geometry of an 18 MV LINAC and a water phantom were modeled. With this model, the simulations were run with the MCNPX code to obtain the PDD and the profiles at all depths of the radiation beam. These data were used by the code to produce the dose percentages at any point of the irradiated volume. The absorbed dose was also reproduced for any voxel size at any point of the irradiated volume, even when the voxels are as small as a pixel. The dosimetric algorithm is able to reproduce the absorbed dose induced by a radiation beam over a water phantom, considering PDD and profiles whose maximum percentage value is in the build-up region. Calculation time for the algorithm is only a few seconds, compared with the days taken when the calculation is carried out by Monte Carlo.
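
    The reconstruction described above factorizes, at each point, into a depth term and an off-axis term interpolated from the Monte Carlo tables. A minimal sketch under that reading follows, using a single representative profile for brevity (the paper uses profiles over the whole range of depths); function and array names are illustrative.

        import numpy as np

        def dose_percentage(x_cm, z_cm, pdd_depth, pdd_val, prof_x, prof_oar):
            """Percentage dose at off-axis distance x and depth z as
            PDD(z) times the normalized off-axis ratio OAR(x)."""
            pdd = np.interp(z_cm, pdd_depth, pdd_val)   # percent depth dose
            oar = np.interp(x_cm, prof_x, prof_oar)     # off-axis ratio (0-1)
            return pdd * oar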

  20. Dosimetric Algorithm to Reproduce Isodose Curves Obtained from a LINAC

    PubMed Central

    Estrada Espinosa, Julio Cesar; Martínez Ovalle, Segundo Agustín; Pereira Benavides, Cinthia Kotzian

    2014-01-01

    In this work, isodose curves are obtained with a new dosimetric algorithm that uses numerical data from the percentage depth dose (PDD) and the maximum absorbed dose profile, calculated by Monte Carlo for an 18 MV LINAC. The software reproduces the absorbed dose percentage in the whole irradiated volume quickly and with a good approximation. To validate the results, the full geometry of an 18 MV LINAC and a water phantom were modeled. With this model, the simulations were run with the MCNPX code to obtain the PDD and the profiles at all depths of the radiation beam. These data were used by the code to produce the dose percentages at any point of the irradiated volume. The absorbed dose was also reproduced for any voxel size at any point of the irradiated volume, even when the voxels are as small as a pixel. The dosimetric algorithm is able to reproduce the absorbed dose induced by a radiation beam over a water phantom, considering PDD and profiles whose maximum percentage value is in the build-up region. Calculation time for the algorithm is only a few seconds, compared with the days taken when the calculation is carried out by Monte Carlo. PMID:25045398

  1. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
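
    The core subalgorithm described here (shift, then mask, seeking a collision-free mapping) can be sketched as a brute-force parameter search. The snippet below is a simplified illustration of that idea, without the rotating-mask and offset variants; the parameter bounds are assumptions.

        def synthesize_hash(keys, max_shift=32, max_bits=16):
            """Search (shift, mask) pairs until every key maps to a unique
            value, giving constant-time membership with no collision fallback."""
            for shift in range(max_shift):
                for bits in range(1, max_bits + 1):
                    mask = (1 << bits) - 1
                    mapped = {(k >> shift) & mask for k in keys}
                    if len(mapped) == len(keys):   # unique for every key
                        return shift, mask
            return None                            # no solution in this family

        # Example: synthesize_hash([8, 17, 42, 99]) -> a (shift, mask) pair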

  2. Transitional Division Algorithms.

    ERIC Educational Resources Information Center

    Laing, Robert A.; Meyer, Ruth Ann

    1982-01-01

    A survey of general mathematics students whose teachers were taking an inservice workshop revealed that they had not yet mastered division. More direct introduction of the standard division algorithm is favored in elementary grades, with instruction of transitional processes curtailed. Weaknesses in transitional algorithms appear to outweigh…

  3. Ultrametric Hierarchical Clustering Algorithms.

    ERIC Educational Resources Information Center

    Milligan, Glenn W.

    1979-01-01

    Johnson has shown that the single linkage and complete linkage hierarchical clustering algorithms induce a metric on the data known as the ultrametric. Johnson's proof is extended to four other common clustering algorithms. Two additional methods also produce hierarchical structures which can violate the ultrametric inequality. (Author/CTM)
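
    The ultrametric inequality d(i,k) <= max(d(i,j), d(j,k)) can be checked directly on the cophenetic (merge-height) distances a clustering method induces. The sketch below does this with SciPy; single and complete linkage should yield no violation, while methods prone to inversions (e.g. centroid) may.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, cophenet
        from scipy.spatial.distance import squareform

        def max_ultrametric_violation(points, method="single"):
            """Largest violation of d(i,k) <= max(d(i,j), d(j,k)) over the
            cophenetic distances induced by the given linkage method."""
            D = squareform(cophenet(linkage(points, method=method)))
            n, worst = len(D), 0.0
            for i in range(n):
                for j in range(n):
                    for k in range(n):
                        worst = max(worst, D[i, k] - max(D[i, j], D[j, k]))
            return worst   # ~0 for ultrametric-inducing methods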

  4. The Training Effectiveness Algorithm.

    ERIC Educational Resources Information Center

    Cantor, Jeffrey A.

    1988-01-01

    Describes the Training Effectiveness Algorithm, a systematic procedure for identifying the cause of reported training problems which was developed for use in the U.S. Navy. A two-step review by subject matter experts is explained, and applications of the algorithm to other organizations and training systems are discussed. (Author/LRW)

  5. Survey of computed tomography scanners in Taiwan: Dose descriptors, dose guidance levels, and effective doses

    SciTech Connect

    Tsai, H. Y.; Tung, C. J.; Yu, C. C.; Tyan, Y. S.

    2007-04-15

    The IAEA and the ICRP recommended dose guidance levels for the most frequent computed tomography (CT) examinations to promote strategies for the optimization of radiation dose to CT patients. A national survey, including on-site measurements and questionnaires, was conducted in Taiwan in order to establish dose guidance levels and evaluate effective doses for CT. The beam quality and output and the phantom doses were measured for nine representative CT scanners. Questionnaire forms were completed by respondents from facilities operating 146 of the 285 total CT scanners. Information on patient, procedure, scanner, and technique for the head and body examinations was provided. The weighted computed tomography dose index (CTDI{sub w}), the dose length product (DLP), organ doses and effective dose were calculated using measured data, questionnaire information and Monte Carlo simulation results. A cost-effectiveness analysis was applied to derive the dose guidance levels on CTDI{sub w} and DLP for several CT examinations. The mean effective dose{+-}standard deviation ranges from 1.6{+-}0.9 mSv for the routine head examination to 13{+-}11 mSv for the examination of liver, spleen, and pancreas. The surveyed results and the dose guidance levels were provided to the national authorities to develop quality control standards and protocols for CT examinations.
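
    Effective dose estimates of this kind are commonly obtained from the DLP with region-specific conversion coefficients, E = k × DLP. The sketch below illustrates that step; the k values are representative adult coefficients from the general literature, not numbers from this survey.

        # Effective dose from dose-length product: E [mSv] = k * DLP [mGy cm].
        # Representative adult k coefficients (illustrative, not the survey's).
        K_MSV_PER_MGY_CM = {"head": 0.0021, "chest": 0.014, "abdomen": 0.015}

        def effective_dose_msv(dlp_mgy_cm, region):
            return K_MSV_PER_MGY_CM[region] * dlp_mgy_cm

        print(effective_dose_msv(850.0, "head"))   # ~1.8 mSv routine head CT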

  6. Collective dose-practical ways to estimate a dose matrix.

    PubMed

    Simmonds, Jane; Sihra, Kamaljit; Bexon, Antony

    2006-06-01

    It has been suggested that collective doses should be presented in the form of a 'dose matrix' giving information on the breakdown of collective dose in space and time and by population group. This paper is an initial attempt to provide such a breakdown by geographic region and time, and to give an idea of associated individual doses for routine discharges to atmosphere. This is done through the use of representative per-caput individual doses, but these need to be supplemented by information on the individual doses received by the critical group for a full radiological impact assessment. The results show that it is important to distinguish between the different population groups for up to a few hundred years following the discharge. However, beyond this time the main contribution is from global circulation and this distinction is less important. The majority of the collective dose was found to be delivered at low levels of individual doses; the estimated per-caput dose rates were significantly less than 10^-5 Sv y^-1, a level of dose felt to give rise to a trivial risk to the exposed individual.

  7. History of dose specification in Brachytherapy: From Threshold Erythema Dose to Computational Dosimetry

    NASA Astrophysics Data System (ADS)

    Williamson, Jeffrey F.

    2006-09-01

    This paper briefly reviews the evolution of brachytherapy dosimetry from 1900 to the present. Dosimetric practices in brachytherapy fall into three distinct eras: During the era of biological dosimetry (1900-1938), radium pioneers could only specify Ra-226 and Rn-222 implants in terms of the mass of radium encapsulated within the implanted sources. Due to the high energy of its emitted gamma rays and the long range of its secondary electrons in air, free-air chambers could not be used to quantify the output of Ra-226 sources in terms of exposure. Biological dosimetry, most prominently the threshold erythema dose, gained currency as a means of intercomparing radium treatments with exposure-calibrated orthovoltage x-ray units. The classical dosimetry era (1940-1980) began with successful exposure standardization of Ra-226 sources by Bragg-Gray cavity chambers. Classical dose-computation algorithms, based upon 1-D buildup factor measurements and point-source superposition computational algorithms, were able to accommodate artificial radionuclides such as Co-60, Ir-192, and Cs-137. The quantitative dosimetry era (1980- ) arose in response to the increasing utilization of low energy K-capture radionuclides such as I-125 and Pd-103 for which classical approaches could not be expected to estimate accurate correct doses. This led to intensive development of both experimental (largely TLD-100 dosimetry) and Monte Carlo dosimetry techniques along with more accurate air-kerma strength standards. As a result of extensive benchmarking and intercomparison of these different methods, single-seed low-energy radionuclide dose distributions are now known with a total uncertainty of 3%-5%.

  8. Standardized radiological dose evaluations

    SciTech Connect

    Peterson, V.L.; Stahlnecker, E.

    1996-05-01

    Following the end of the Cold War, the mission of the Rocky Flats Environmental Technology Site changed from production of nuclear weapons to cleanup. Authorization basis documents for the facilities, primarily the Final Safety Analysis Reports, are being replaced with new ones in which accident scenarios are sorted into coarse bins of consequence and frequency, similar to the approach of DOE-STD-3011-94. Because this binning does not require high precision, a standardized approach to radiological dose evaluations is taken for all the facilities at the site. This is done through a standard calculation "template" for use by all safety analysts preparing the new documents. This report describes this template and its use.

  9. A software tool for 3D dose verification and analysis

    NASA Astrophysics Data System (ADS)

    Sa'd, M. Al; Graham, J.; Liney, G. P.

    2013-06-01

    The main recent developments in radiotherapy have focused on improved treatment techniques in order to generate further significant improvements in patient prognosis. There is now an internationally recognised need to improve 3D verification of highly conformal radiotherapy treatments, because with the very high dose gradients used in modern treatment techniques a small error in the spatial dose distribution can lead to a serious complication. In order to gain the full benefits of using 3D dosimetric technologies (such as gel dosimetry), it is vital to use 3D evaluation methods and algorithms. We present in this paper a software solution that provides comprehensive 3D dose evaluation and analysis. The software is applied to gel dosimetry, which is based on magnetic resonance imaging (MRI) as a read-out method. The software can also be used to compare any two dose distributions, such as two distributions planned using different treatment planning systems, or different dose calculation algorithms.

  10. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
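
    The direct-versus-iterative distinction drawn here rests on the spectral radius of the iteration matrix. As a small illustration (not the TPMA formulation itself), the sketch below computes that quantity for a generic matrix splitting A = M - N.

        import numpy as np

        def spectral_radius_of_iteration(M, N):
            """Spectral radius of G = M^-1 N for the splitting A = M - N.
            rho(G) = 0 means the iteration is a direct method; a small rho(G)
            means one iteration per timestep may keep the error in tolerance."""
            G = np.linalg.solve(M, N)
            return max(abs(np.linalg.eigvals(G)))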

  11. Quantification of Proton Dose Calculation Accuracy in the Lung

    SciTech Connect

    Grassberger, Clemens; Daartz, Juliane; Dowdell, Stephen; Ruggieri, Thomas; Sharp, Greg; Paganetti, Harald

    2014-06-01

    Purpose: To quantify the accuracy of a clinical proton treatment planning system (TPS) as well as Monte Carlo (MC)–based dose calculation through measurements and to assess the clinical impact in a cohort of patients with tumors located in the lung. Methods and Materials: A lung phantom and ion chamber array were used to measure the dose to a plane through a tumor embedded in the lung, and to determine the distal fall-off of the proton beam. Results were compared with TPS and MC calculations. Dose distributions in 19 patients (54 fields total) were simulated using MC and compared to the TPS algorithm. Results: MC increased dose calculation accuracy in lung tissue compared with the TPS and reproduced dose measurements in the target to within ±2%. The average difference between measured and predicted dose in a plane through the center of the target was 5.6% for the TPS and 1.6% for MC. MC recalculations in patients showed a mean dose to the clinical target volume on average 3.4% lower than the TPS, exceeding 5% for small fields. For large tumors, MC also predicted consistently higher V5 and V10 to the normal lung, because of a wider lateral penumbra, which was also observed experimentally. Critical structures located distal to the target could show large deviations, although this effect was highly patient specific. Range measurements showed that MC can reduce range uncertainty by a factor of ∼2: the average (maximum) difference to the measured range was 3.9 mm (7.5 mm) for MC and 7 mm (17 mm) for the TPS in lung tissue. Conclusion: Integration of Monte Carlo dose calculation techniques into the clinic would improve treatment quality in proton therapy for lung cancer by avoiding systematic overestimation of target dose and underestimation of dose to normal lung. In addition, the ability to confidently reduce range margins would benefit all patients by potentially lowering toxicity.

  12. Radiochromic film based transit dosimetry for verification of dose delivery with intensity modulated radiotherapy

    SciTech Connect

    Chung, Kwangzoo; Lee, Kiho; Shin, Dongho; Kyung Lim, Young; Byeong Lee, Se; Yoon, Myonggeun; Son, Jaeman; Yong Park, Sung

    2013-02-15

    Purpose: To evaluate the transit dose based patient specific quality assurance (QA) of intensity modulated radiation therapy (IMRT) for verification of the accuracy of dose delivered to the patient. Methods: Five IMRT plans were selected and utilized to irradiate a homogeneous plastic water phantom and an inhomogeneous anthropomorphic phantom. The transit dose distribution was measured with radiochromic film and was compared with the computed dose map on the same plane using a gamma index with a 3% dose and a 3 mm distance-to-dose agreement tolerance limit. Results: While the average gamma index for comparisons of dose distributions was less than one for 98.9% of all pixels from the transit dose with the homogeneous phantom, the passing rate was reduced to 95.0% for the transit dose with the inhomogeneous phantom. Transit doses due to a 5 mm setup error may cause up to a 50% failure rate of the gamma index. Conclusions: Transit dose based IMRT QA may be superior to the traditional QA method since the former can show whether the inhomogeneity correction algorithm from TPS is accurate. In addition, transit dose based IMRT QA can be used to verify the accuracy of the dose delivered to the patient during treatment by revealing significant increases in the failure rate of the gamma index resulting from errors in patient positioning during treatment.
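
    The 3%/3 mm gamma criterion used here combines a dose-difference term and a distance-to-agreement term and passes a point when the minimum combined metric is <= 1. A brute-force 2D sketch follows (global dose normalization assumed); it is illustrative, not the film-analysis software used in the study.

        import numpy as np

        def gamma_pass_rate(d_eval, d_ref, spacing_mm, dose_tol=0.03, dta_mm=3.0):
            """Fraction of points with gamma <= 1 for a global 2D gamma test."""
            ny, nx = d_ref.shape
            r = int(np.ceil(dta_mm / spacing_mm))    # search radius in pixels
            dd = dose_tol * d_ref.max()              # global dose criterion
            passed = 0
            for y in range(ny):
                for x in range(nx):
                    best = np.inf
                    for j in range(max(0, y - r), min(ny, y + r + 1)):
                        for i in range(max(0, x - r), min(nx, x + r + 1)):
                            dist2 = ((j - y) ** 2 + (i - x) ** 2) * spacing_mm ** 2
                            diff2 = (d_eval[j, i] - d_ref[y, x]) ** 2
                            best = min(best, diff2 / dd**2 + dist2 / dta_mm**2)
                    passed += best <= 1.0
            return passed / (nx * ny)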

  13. Dose refinement. ARAC's role

    SciTech Connect

    Ellis, J. S.; Sullivan, T. J.; Baskett, R. L.

    1998-06-01

    The Atmospheric Release Advisory Capability (ARAC), located at the Lawrence Livermore National Laboratory, has since the late 1970's been involved in assessing consequences from nuclear and other hazardous material releases into the atmosphere. ARAC's primary role has been emergency response. However, after the emergency phase, there is still a significant role for dispersion modeling. This work usually involves refining the source term and, hence, the dose to the populations affected as additional information becomes available in the form of source term estimates (release rates, mix of material, and release geometry) and any measurements from passage of the plume and deposition on the ground. Many of the ARAC responses have been documented elsewhere [1]. Some of the more notable radiological releases for which ARAC has participated in the post-emergency phase have been the 1979 Three Mile Island nuclear power plant (NPP) accident outside Harrisburg, PA, the 1986 Chernobyl NPP accident in the Ukraine, and the 1996 Japan Tokai nuclear processing plant explosion. ARAC has also done post-emergency phase analyses for the 1978 Russian satellite COSMOS 954 reentry and subsequent partial burn-up of its onboard nuclear reactor, depositing radioactive materials on the ground in Canada; the 1986 uranium hexafluoride spill in Gore, OK; the 1993 Russian Tomsk-7 nuclear waste tank explosion; and lesser releases of mostly tritium. In addition, ARAC has performed a key role in the contingency planning for possible accidental releases during the launch of spacecraft with radioisotope thermoelectric generators (RTGs) on board (i.e., Galileo, Ulysses, Mars Pathfinder, and Cassini), and routinely exercises with the Federal Radiological Monitoring and Assessment Center (FRMAC) in preparation for offsite consequences of radiological releases from NPPs and nuclear weapon accidents or incidents. Several accident post-emergency phase assessments are discussed in this paper in order to illustrate

  14. On the dose to a moving target in stereotactic ablative body radiotherapy to lung tumors

    NASA Astrophysics Data System (ADS)

    Feygelman, V.; Dilling, T. J.; Moros, E. G.; Zhang, G. G.

    2017-01-01

    This review summarizes the hierarchy of potential dose inaccuracies in lung SABR in terms of their expected clinical impact. The two main terms are targeting accuracy and adequacy of the dose calculation algorithm. One can associate dose errors at the 50-100% (zero-order) and 10-20% (first-order) levels with the former and the latter, respectively. At the first-order level, strong evidence exists that using dose algorithms which do not account for 3D density scaling is associated with diminished local control. On the other hand, the second-order target dose errors, due to either static approximations to full 4D calculations or interplay during modulated delivery, are rather unlikely to rise above 5% (conservatively, ≤ 1% tumor control probability change).

  15. Dose Calibration of the ISS-RAD Fast Neutron Detector

    NASA Technical Reports Server (NTRS)

    Zeitlin, C.

    2015-01-01

    The ISS-RAD instrument has been fabricated by Southwest Research Institute and delivered to NASA for flight to the ISS in late 2015 or early 2016. ISS-RAD is essentially two instruments that share a common interface to ISS. The two instruments are the Charged Particle Detector (CPD), which is very similar to the MSL-RAD detector on Mars, and the Fast Neutron Detector (FND), which is a boron-loaded plastic scintillator with readout optimized for the 0.5 to 10 MeV energy range. As the FND is completely new, it has been necessary to develop methodology to allow it to be used to measure the neutron dose and dose equivalent. This talk will focus on the methods developed and their implementation using calibration data obtained in quasi-monoenergetic (QMN) neutron fields at the PTB facility in Braunschweig, Germany. The QMN data allow us to determine an approximate response function, from which we estimate dose and dose equivalent contributions per detected neutron as a function of the pulse height. We refer to these as the “pSv per count” curves for dose equivalent and the “pGy per count” curves for dose. The FND is required to provide a dose equivalent measurement with an accuracy of ±10% of the known value in a calibrated AmBe field. Four variants of the analysis method were developed, corresponding to two different approximations of the pSv per count curve, and two different implementations, one for real-time analysis onboard ISS and one for ground analysis. We will show that the preferred method, when applied in either real-time or ground analysis, yields good accuracy for the AmBe field. We find that the real-time algorithm is more susceptible to chance-coincidence background than is the algorithm used in ground analysis, so that the best estimates will come from the latter.
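
    The "pSv per count" construction reduces dose-equivalent estimation to folding the measured pulse-height histogram with the calibration curve. A minimal sketch follows; both arrays are hypothetical placeholders, not ISS-RAD calibration values.

        import numpy as np

        # Fold a pulse-height histogram with the "pSv per count" curve.
        psv_per_count = np.array([8.0, 12.0, 20.0, 35.0, 55.0])  # per bin (assumed)
        counts = np.array([1200, 800, 350, 90, 12])              # detected neutrons
        dose_equivalent_psv = float(np.dot(counts, psv_per_count))
        print(f"H = {dose_equivalent_psv / 1e6:.3f} uSv")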

  16. An automated fitting procedure and software for dose-response curves with multiphasic features

    PubMed Central

    Veroli, Giovanni Y. Di; Fornari, Chiara; Goldlust, Ian; Mills, Graham; Koh, Siang Boon; Bramhall, Jo L; Richards, Frances M.; Jodrell, Duncan I.

    2015-01-01

    In cancer pharmacology (and many other areas), most dose-response curves are satisfactorily described by a classical Hill equation (i.e., a 4-parameter logistic). Nevertheless, there are instances where the marked presence of more than one point of inflection, or the presence of combined agonist and antagonist effects, prevents straightforward modelling of the data via a standard Hill equation. Here we propose a modified model and automated fitting procedure to describe dose-response curves with multiphasic features. The resulting general model enables interpreting each phase of the dose-response as an independent dose-dependent process. We developed an algorithm which automatically generates and ranks dose-response models with varying degrees of multiphasic features. The algorithm was implemented in the new freely available Dr Fit software (sourceforge.net/projects/drfit/). We show how our approach is successful in describing dose-response curves with multiphasic features. Additionally, we analysed a large cancer cell viability screen involving 11650 dose-response curves. Based on our algorithm, we found that 28% of cases were better described by a multiphasic model than by the Hill model. We thus provide a robust approach to fit dose-response curves with various degrees of complexity, which, together with the provided software implementation, should enable a wide audience to easily process their own data. PMID:26424192
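
    The model family described here can be sketched as sums of independent Hill phases, ranked against the single-phase fit by an information criterion. The snippet below is a schematic in that spirit, not the Dr Fit implementation; parameter names and starting values are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def hill(d, bottom, top, ec50, h):
            """Classical 4-parameter logistic (Hill) dose-response curve."""
            return bottom + (top - bottom) / (1.0 + (d / ec50) ** h)

        def two_phase(d, base, amp1, ec50a, ha, amp2, ec50b, hb):
            """Two independent Hill phases on a baseline (multiphasic model)."""
            return (base + amp1 / (1.0 + (d / ec50a) ** ha)
                         + amp2 / (1.0 + (d / ec50b) ** hb))

        def aic(y, yhat, k):
            """Akaike information criterion for ranking fits of differing size."""
            n = len(y)
            return n * np.log(np.sum((y - yhat) ** 2) / n) + 2 * k

        # p1, _ = curve_fit(hill, dose, resp, p0=[0, 1, 1, 1], maxfev=5000)
        # p2, _ = curve_fit(two_phase, dose, resp,
        #                   p0=[0, 0.5, 0.1, 1, 0.5, 10, 1], maxfev=5000)
        # Prefer the model with the lower aic(resp, model(dose, *p), k).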

  17. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  18. Limitations of analytical dose calculations for small field proton radiosurgery

    NASA Astrophysics Data System (ADS)

    Geng, Changran; Daartz, Juliane; Lam-Tin-Cheung, Kimberley; Bussiere, Marc; Shih, Helen A.; Paganetti, Harald; Schuemann, Jan

    2017-01-01

    The purpose of the work was to evaluate the dosimetric uncertainties of an analytical dose calculation engine and the impact on treatment plans using small fields in intracranial proton stereotactic radiosurgery (PSRS) for a gantry based double scattering system. 50 patients were evaluated including 10 patients for each of 5 diagnostic indications of: arteriovenous malformation (AVM), acoustic neuroma (AN), meningioma (MGM), metastasis (METS), and pituitary adenoma (PIT). Treatment plans followed standard prescription and optimization procedures for PSRS. We performed comparisons between delivered dose distributions, determined by Monte Carlo (MC) simulations, and those calculated with the analytical dose calculation algorithm (ADC) used in our current treatment planning system in terms of dose volume histogram parameters and beam range distributions. Results show that the difference in the dose to 95% of the target (D95) is within 6% when applying measured field size output corrections for AN, MGM, and PIT. However, for AVM and METS, the differences can be as great as 10% and 12%, respectively. Normalizing the MC dose to the ADC dose based on the dose of voxels in a central area of the target reduces the difference of the D95 to within 6% for all sites. The generally applied margin to cover uncertainties in range (3.5% of the prescribed range  +  1 mm) is not sufficient to cover the range uncertainty for ADC in all cases, especially for patients with high tissue heterogeneity. The root mean square of the R90 difference, the difference in the position of distal falloff to 90% of the prescribed dose, is affected by several factors, especially the patient geometry heterogeneity, modulation and field diameter. In conclusion, implementation of Monte Carlo dose calculation techniques into the clinic can reduce the uncertainty of the target dose for proton stereotactic radiosurgery. If MC is not available for treatment planning, using MC dose distributions to

  19. In defence of collective dose.

    PubMed

    Fairlie, I; Sumner, D

    2000-03-01

    Recent proposals for a new scheme of radiation protection leave little room for collective dose estimations. This article discusses the history and present use of collective doses for occupational, ALARA, EIS and other purposes with reference to practical industry papers and government reports. The linear no-threshold (LNT) hypothesis suggests that collective doses which consist of very small doses added together should be used. Moral and ethical questions are discussed, particularly the emphasis on individual doses to the exclusion of societal risks, uncertainty over effects into the distant future and hesitation over calculating collective detriments. It is concluded that for moral, practical and legal reasons, collective dose is a valid parameter which should continue to be used.

  20. Dose from slow negative muons.

    PubMed

    Siiskonen, T

    2008-01-01

    Conversion coefficients from fluence to ambient dose equivalent, from fluence to maximum dose equivalent and quality factors for slow negative muons are examined in detail. Negative muons, when stopped, produce energetic photons, electrons and a variety of high-LET particles. Contribution from each particle type to the dose equivalent is calculated. The results show that for the high-LET particles the details of energy spectra and decay yields are important for accurate dose estimates. For slow negative muons the ambient dose equivalent does not always yield a conservative estimate for the protection quantities. Especially, the skin equivalent dose is strongly underestimated if the radiation-weighting factor of unity for slow muons is used. Comparisons to earlier studies are presented.

  1. Implementation of spot scanning dose optimization and dose calculation for helium ions in Hyperion

    SciTech Connect

    Fuchs, Hermann; Schreiner, Thomas; Georg, Dietmar

    2015-09-15

    Purpose: Helium ions ({sup 4}He) may supplement current particle beam therapy strategies as they possess advantages in physical dose distribution over protons. To assess potential clinical advantages, a dose calculation module accounting for relative biological effectiveness (RBE) was developed and integrated into the treatment planning system Hyperion. Methods: Current knowledge on RBE of {sup 4}He together with linear energy transfer considerations motivated an empirical depth-dependent “zonal” RBE model. In the plateau region, a RBE of 1.0 was assumed, followed by an increasing RBE up to 2.8 at the Bragg-peak region, which was then kept constant over the fragmentation tail. To account for a variable proton RBE, the same model concept was also applied to protons with a maximum RBE of 1.6. Both RBE models were added to a previously developed pencil beam algorithm for physical dose calculation and included into the treatment planning system Hyperion. The implementation was validated against Monte Carlo simulations within a water phantom using γ-index evaluation. The potential benefits of {sup 4}He based treatment plans were explored in a preliminary treatment planning comparison (against protons) for four treatment sites, i.e., a prostate, a base-of-skull, a pediatric, and a head-and-neck tumor case. Separate treatment plans taking into account physical dose calculation only or using biological modeling were created for protons and {sup 4}He. Results: Comparison of Monte Carlo and Hyperion calculated doses resulted in a γ{sub mean} of 0.3, with 3.4% of the values above 1 and γ{sub 1%} of 1.5 and better. Treatment plan evaluation showed comparable planning target volume coverage for both particles, with slightly increased coverage for {sup 4}He. Organ at risk (OAR) doses were generally reduced using {sup 4}He, some by more than 30%. Improvements of {sup 4}He over protons were more pronounced for treatment plans taking biological effects into account. All
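
    The zonal model assigns RBE by depth region. A minimal sketch of such a piecewise function follows; the ramp between plateau and Bragg peak and the zone boundary are illustrative assumptions, not the published parameterization.

        def zonal_rbe(depth_mm, bragg_peak_mm, rbe_max=2.8, plateau_frac=0.8):
            """Depth-dependent "zonal" RBE: 1.0 in the plateau, rising to
            rbe_max at the Bragg peak, constant over the fragmentation tail.
            The linear ramp and plateau_frac boundary are assumed here."""
            z = depth_mm / bragg_peak_mm            # normalized depth
            if z < plateau_frac:
                return 1.0                          # entrance plateau
            if z < 1.0:                             # ramp toward the Bragg peak
                return 1.0 + (rbe_max - 1.0) * (z - plateau_frac) / (1.0 - plateau_frac)
            return rbe_max                          # fragmentation tail

        # RBE-weighted dose: dose_rbe = physical_dose * zonal_rbe(depth, r_bp)
        # The same concept with rbe_max=1.6 gives the variable proton RBE model.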

  2. Dose to medium versus dose to water as an estimator of dose to sensitive skeletal tissue

    NASA Astrophysics Data System (ADS)

    Walters, B. R. B.; Kramer, R.; Kawrakow, I.

    2010-08-01

    The purpose of this study is to determine whether dose to medium, Dm, or dose to water, Dw, provides a better estimate of the dose to the radiosensitive red bone marrow (RBM) and bone surface cells (BSC) in spongiosa, or cancellous bone. This is addressed in the larger context of the ongoing debate over whether Dm or Dw should be specified in Monte Carlo calculated radiotherapy treatment plans. The study uses voxelized, virtual human phantoms, FAX06/MAX06 (female/male), incorporated into an EGSnrc Monte Carlo code to perform Monte Carlo dose calculations during simulated irradiation by a 6 MV photon beam from an Elekta SL25 accelerator. Head and neck, chest and pelvis irradiations are studied. FAX06/MAX06 include precise modelling of spongiosa based on µCT images, allowing dose to RBM and BSC to be resolved from the dose to bone. Modifications to the FAX06/MAX06 user codes are required to score Dw and Dm in spongiosa. Dose uncertainties of ~1% (BSC, RBM) or ~0.5% (Dm, Dw) are obtained after up to 5 days of simulations on 88 CPUs. Clinically significant differences (>5%) between Dm and Dw are found only in cranial spongiosa, where the volume fraction of trabecular bone (TBVF) is high (55%). However, for spongiosa locations where there is any significant difference between Dm and Dw, comparisons of differential dose volume histograms (DVHs) and average doses show that Dw provides a better overall estimate of dose to RBM and BSC. For example, in cranial spongiosa the average Dm underestimates the average dose to sensitive tissue by at least 5%, while average Dw is within ~1% of the average dose to sensitive tissue. Thus, it is better to specify Dw than Dm in Monte Carlo treatment plans, since Dw provides a better estimate of dose to sensitive tissue in bone, the only location where the difference is likely to be clinically significant.
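
    For context, converting between the two reporting quantities is a pointwise scaling by the water-to-medium stopping-power ratio averaged over the local electron spectrum. A minimal sketch with a placeholder ratio for spongiosa:

    ```python
    # Dw = Dm * (water-to-medium mass collision stopping-power ratio averaged
    # over the local electron spectrum). The ratios below are illustrative
    # placeholders, not the values derived in the study.
    STOPPING_POWER_RATIO_W_M = {"soft_tissue": 1.00, "spongiosa": 1.04}  # hypothetical

    def dose_to_water(dose_to_medium_gy, medium):
        """Convert dose to medium into dose to water for the given medium."""
        return dose_to_medium_gy * STOPPING_POWER_RATIO_W_M[medium]

    print(dose_to_water(2.0, "spongiosa"))  # Gy
    ```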

  3. REMEDIATION FACILITY WORKER DOSE ASSESSMENT

    SciTech Connect

    V. Arakali; E. Faillace

    2004-02-27

    The purpose of this design calculation is to estimate radiation doses received by personnel in the Remediation Facility performing operations to receive, prepare, open, repair, recover, disposition, and correct off-normal and non-standard conditions with casks, canisters, spent nuclear fuel (SNF) assemblies, and waste packages (WP). The specific scope of work contained in this calculation covers both collective doses and individual worker group doses on an annual basis, and includes the contributions due to external and internal radiation. The results of this calculation will be used to support the design of the Remediation Facility and provide occupational dose estimates for the License Application.

  4. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-06-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Battelle Pacific Northwest Laboratories under contract with the Centers for Disease Control. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.

  5. Psychotropic dose equivalence in Japan.

    PubMed

    Inada, Toshiya; Inagaki, Ataru

    2015-08-01

    Psychotropic dose equivalence is an important concept when estimating the approximate psychotropic doses patients receive, and deciding on the approximate titration dose when switching from one psychotropic agent to another. It is also useful from a research viewpoint when defining and extracting specific subgroups of subjects. Unification of various agents into a single standard agent facilitates easier analytical comparisons. On the basis of differences in psychopharmacological prescription features, those of available psychotropic agents and their approved doses, and racial differences between Japan and other countries, psychotropic dose equivalency tables designed specifically for Japanese patients have been widely used in Japan since 1998. Here we introduce dose equivalency tables for: (i) antipsychotics; (ii) antiparkinsonian agents; (iii) antidepressants; and (iv) anxiolytics, sedatives and hypnotics available in Japan. Equivalent doses for the therapeutic effects of individual psychotropic compounds were determined principally on the basis of randomized controlled trials conducted in Japan and consensus among dose equivalency tables reported previously by psychopharmacological experts. As these tables are intended to merely suggest approximate standard values, physicians should use them with discretion. Updated information of psychotropic dose equivalence in Japan is available at http://www.jsprs.org/en/equivalence.tables/. [Correction added on 8 July 2015, after first online publication: A link to the updated information has been added.].
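
    Applied mechanically, such a table converts any agent's dose into reference-agent equivalents and back. A sketch with hypothetical equivalence values (the actual Japanese tables are maintained at the URL above):

    ```python
    # Hypothetical dose-equivalence table: mg of each agent judged equivalent
    # to 100 mg of a reference agent. Placeholder values for illustration only.
    equivalent_dose_mg = {
        "reference": 100.0,
        "agent_a": 2.0,
        "agent_b": 8.0,
    }

    def to_reference_equivalents(agent, dose_mg):
        """Express a dose in units of the reference agent."""
        return dose_mg * equivalent_dose_mg["reference"] / equivalent_dose_mg[agent]

    def convert(agent_from, dose_mg, agent_to):
        """Approximate titration dose when switching between agents."""
        ref = to_reference_equivalents(agent_from, dose_mg)
        return ref * equivalent_dose_mg[agent_to] / equivalent_dose_mg["reference"]

    print(convert("agent_a", 4.0, "agent_b"))  # 16.0 mg of agent_b
    ```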

  6. SU-E-T-373: Evaluation and Reduction of Contralateral Skin /subcutaneous Dose for Tangential Breast Irradiation

    SciTech Connect

    Butson, M; Carroll, S; Whitaker, M; Odgers, D; Martin, D; Hinds, S; Kader, J; Ho, K; Amos, S; Toohey, J

    2015-06-15

    Purpose: Tangential breast irradiation is a standard treatment technique for breast cancer therapy. One aspect of dose delivery includes dose delivered to the skin caused by electron contamination. This effect is especially important for highly oblique beams used on the medial tangent where the electron contamination deposits dose on the contralateral breast side. This work aims to investigate and predict as well as define a method to reduce this dose during tangential breast radiotherapy. Methods: Analysis and calculation of breast skin and subcutaneous dose is performed using a Varian Eclipse planning system, AAA algorithm for 6MV x-ray treatments. Measurements were made using EBT3 Gafchromic film to verify the accuracy of planning data. Various materials were tested to assess their ability to remove electron contamination on the contralateral breast. Results: Results showed that the Varian Eclipse AAA algorithm could accurately estimate contralateral breast dose in the build-up region at depths of 2mm or deeper. Surface dose was underestimated by the AAA algorithm. Doses up to 12% of applied dose were seen on the contralateral breast surface and up to 9% at 2mm depth. Due to the nature of this radiation, being mainly low energy electron contamination, a bolus material could be used to reduce this dose to less than 3%. This is accomplished by 10 mm of superflab bolus or by 1 mm of lead. Conclusion: Contralateral breast skin and subcutaneous dose is present for tangential breast treatment and has been measured to be up to 12% of applied dose from the medial tangent beam. This dose is deposited at shallow depths and is accurately calculated by the Eclipse AAA algorithm at depths of 2mm or greater. Bolus material placed over the contralateral breast can be used to effectively reduce this skin dose.

  7. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  8. Inclusive Flavour Tagging Algorithm

    NASA Astrophysics Data System (ADS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-10-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment.

  9. Analytical modelling of regional radiotherapy dose response of lung

    NASA Astrophysics Data System (ADS)

    Lee, Sangkyu; Stroian, Gabriela; Kopek, Neil; AlBahhar, Mahmood; Seuntjens, Jan; El Naqa, Issam

    2012-06-01

    Knowledge of the dose-response of radiation-induced lung disease (RILD) is necessary for optimization of radiotherapy (RT) treatment plans involving thoracic cavity irradiation. This study models the time-dependent relationship between local radiation dose and post-treatment lung tissue damage measured by computed tomography (CT) imaging. Fifty-eight follow-up diagnostic CT scans from 21 non-small-cell lung cancer patients were examined. The extent of RILD was segmented on the follow-up CT images based on the increase of physical density relative to the pre-treatment CT image. The segmented RILD was locally correlated with dose distribution calculated by analytical anisotropic algorithm and the Monte Carlo method to generate the corresponding dose-response curves. The Lyman-Kutcher-Burman (LKB) model was fit to the dose-response curves at six post-RT time periods, and temporal change in the LKB parameters was recorded. In this study, we observed significant correlation between the probability of lung tissue damage and the local dose for 96% of the follow-up studies. Dose-injury correlation at the first three months after RT was significantly different from later follow-up periods in terms of steepness and threshold dose as estimated from the LKB model. Dependence of dose response on superior-inferior tumour position was also observed. The time-dependent analytical modelling of RILD might provide better understanding of the long-term behaviour of the disease and could potentially be applied to improve inverse treatment planning optimization.
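
    The LKB model fitted here is a standard probit dose-response applied to a DVH-reduced equivalent dose. A minimal sketch with illustrative parameter values (the paper reports fitted values per follow-up period):

    ```python
    import math

    def lkb_ntcp(dvh_doses_gy, dvh_volumes, td50_gy, m, n):
        """Lyman-Kutcher-Burman complication probability from a differential
        DVH (fractional volumes summing to 1). The DVH is reduced to a
        generalized EUD with volume parameter n, then mapped through a
        probit curve with position TD50 and slope m."""
        eud = sum(v * d ** (1.0 / n) for d, v in zip(dvh_doses_gy, dvh_volumes)) ** n
        t = (eud - td50_gy) / (m * td50_gy)
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    # Illustrative parameters only, not the study's fitted values.
    print(lkb_ntcp([5.0, 15.0, 30.0], [0.5, 0.3, 0.2], td50_gy=24.5, m=0.18, n=0.87))
    ```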

  10. OpenEIS Algorithms

    SciTech Connect

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  11. Implementation of Parallel Algorithms

    DTIC Science & Technology

    1993-06-30

    their social relations or to achieve some goals. For example, we define a pair-wise force law of repulsion and attraction for a group of identical...quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media. The...of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Publishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in

  12. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  13. Parallel Wolff Cluster Algorithms

    NASA Astrophysics Data System (ADS)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
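
    For reference, the serial Wolff update that the paper parallelizes can be written compactly. A sketch for the 2D Ising model on a periodic lattice:

    ```python
    import math
    import random

    def wolff_step(spins, L, beta, J=1.0):
        """One Wolff single-cluster update for the 2D Ising model on an LxL
        torus. Bonds to aligned neighbours are activated with probability
        p = 1 - exp(-2*beta*J); the grown cluster is then flipped."""
        p_add = 1.0 - math.exp(-2.0 * beta * J)
        seed = (random.randrange(L), random.randrange(L))
        seed_spin = spins[seed]
        cluster, stack = {seed}, [seed]
        while stack:
            x, y = stack.pop()
            for site in (((x + 1) % L, y), ((x - 1) % L, y),
                         (x, (y + 1) % L), (x, (y - 1) % L)):
                if site not in cluster and spins[site] == seed_spin \
                        and random.random() < p_add:
                    cluster.add(site)
                    stack.append(site)
        for site in cluster:
            spins[site] = -spins[site]
        return len(cluster)

    L, beta = 16, 0.44  # near the critical point beta_c ~ 0.4407
    spins = {(x, y): random.choice((-1, 1)) for x in range(L) for y in range(L)}
    for _ in range(100):
        wolff_step(spins, L, beta)
    print(sum(spins.values()) / L**2)  # magnetization per spin
    ```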

  14. An improved technique for comparing Gamma Knife dose-volume distributions in stereotactic radiosurgery

    NASA Astrophysics Data System (ADS)

    Tozer-Loft, Stephen M.; Walton, Lee; Forster, David M. C.; Kemeny, Andras A.

    1999-08-01

    A function derived from the geometry of brachytherapy dose distributions is applied to stereotactic radiosurgery and an algorithm for the production of a novel dose-volume histogram, the Anderson inverse-square shifted dose-volume histogram (DVH), is proposed. The expected form of the function to be plotted is checked by calculating its value for single focus exposures, and its application to clinical examples of Gamma Knife treatments described. The technique is shown to provide a valuable tool for assessing the adequacy of radiosurgical plans and comparing and reporting dose distributions.
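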

  15. SU-F-19A-10: Recalculation and Reporting Clinical HDR 192-Ir Head and Neck Dose Distributions Using Model Based Dose Calculation

    SciTech Connect

    Carlsson Tedgren, A; Persson, M; Nilsson, J

    2014-06-15

    Purpose: To retrospectively re-calculate dose distributions for selected head and neck cancer patients, earlier treated with HDR 192Ir brachytherapy, using Monte Carlo (MC) simulations and compare results to distributions from the planning system derived using TG43 formalism. To study differences between dose to medium (as obtained with the MC code) and dose to water in medium as obtained through (1) ratios of stopping powers and (2) ratios of mass energy absorption coefficients between water and medium. Methods: The MC code Algebra was used to calculate dose distributions according to earlier actual treatment plans using anonymized plan data and CT images in DICOM format. Ratios of stopping power and mass energy absorption coefficients for water with various media obtained from 192-Ir spectra were used in toggling between dose to water and dose to media. Results: Differences between initial planned TG43 dose distributions and the doses to media calculated by MC are insignificant in the target volume. Differences are moderate (within 4–5% at distances of 3–4 cm) but increase with distance and are most notable in bone and at the patient surface. Differences between dose to water and dose to medium are within 1–2% when using mass energy absorption coefficients to toggle between the two quantities but increase to above 10% for bone using stopping power ratios. Conclusion: MC predicts target doses for head and neck cancer patients in close agreement with TG43. MC yields improved dose estimations outside the target where a larger fraction of dose is from scattered photons. Awareness and clear reporting of absorbed dose values are important when using model-based algorithms. Differences in bone media can exceed 10% depending on how dose to water in medium is defined.

  16. Characterisation of mega-voltage electron pencil beam dose distributions: viability of a measurement-based approach.

    PubMed

    Barnes, M P; Ebert, M A

    2008-03-01

    The concept of electron pencil-beam dose distributions is central to pencil-beam algorithms used in electron beam radiotherapy treatment planning. The Hogstrom algorithm, which is a common algorithm for electron treatment planning, models large electron field dose distributions by the superposition of a series of pencil beam dose distributions. This means that the accurate characterisation of an electron pencil beam is essential for the accuracy of the dose algorithm. The aim of this study was to evaluate a measurement based approach for obtaining electron pencil-beam dose distributions. The primary incentive for the study was the accurate calculation of dose distributions for narrow fields as traditional electron algorithms are generally inaccurate for such geometries. Kodak X-Omat radiographic film was used in a solid water phantom to measure the dose distribution of circular 12 MeV beams from a Varian 21EX linear accelerator. Measurements were made for beams of diameter, 1.5, 2, 4, 8, 16 and 32 mm. A blocked-field technique was used to subtract photon contamination in the beam. The "error function" derived from Fermi-Eyges Multiple Coulomb Scattering (MCS) theory for corresponding square fields was used to fit resulting dose distributions so that extrapolation down to a pencil beam distribution could be made. The Monte Carlo codes, BEAM and EGSnrc were used to simulate the experimental arrangement. The 8 mm beam dose distribution was also measured with TLD-100 microcubes. Agreement between film, TLD and Monte Carlo simulation results were found to be consistent with the spatial resolution used. The study has shown that it is possible to extrapolate narrow electron beam dose distributions down to a pencil beam dose distribution using the error function. However, due to experimental uncertainties and measurement difficulties, Monte Carlo is recommended as the method of choice for characterising electron pencil-beam dose distributions.
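
    The square-field "error function" form used for the fits follows from integrating a Gaussian pencil-beam kernel over the field. A minimal sketch of a one-dimensional profile, with an illustrative lateral-spread value:

    ```python
    import math

    def square_field_profile(x_cm, field_side_cm, sigma_cm):
        """Off-axis factor at a given depth for a square electron field,
        in the error-function form implied by Fermi-Eyges theory: the
        pencil-beam Gaussian (lateral spread sigma at this depth) is
        integrated over the field, giving a sum of error functions."""
        a = field_side_cm / 2.0
        s = math.sqrt(2.0) * sigma_cm
        return 0.5 * (math.erf((a - x_cm) / s) + math.erf((a + x_cm) / s))

    # Illustrative: sigma of 0.4 cm at depth, 0.8 cm square field
    for x in (0.0, 0.2, 0.4, 0.8):
        print(x, round(square_field_profile(x, 0.8, 0.4), 3))
    ```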

  17. A pencil beam algorithm for helium ion beam therapy

    SciTech Connect

    Fuchs, Hermann; Stroebele, Julia; Schreiner, Thomas; Hirtl, Albert; Georg, Dietmar

    2012-11-15

    Purpose: To develop a flexible pencil beam algorithm for helium ion beam therapy. Dose distributions were calculated using the newly developed pencil beam algorithm and validated using Monte Carlo (MC) methods. Methods: The algorithm was based on the established theory of fluence weighted elemental pencil beam (PB) kernels. Using a new real-time splitting approach, a minimization routine selects the optimal shape for each sub-beam. Dose depositions along the beam path were determined using a look-up table (LUT). Data for LUT generation were derived from MC simulations in water using GATE 6.1. For materials other than water, dose depositions were calculated by the algorithm using water-equivalent depth scaling. Lateral beam spreading caused by multiple scattering has been accounted for by implementing a non-local scattering formula developed by Gottschalk. A new nuclear correction was modelled using a Voigt function and implemented by a LUT approach. Validation simulations have been performed using a phantom filled with homogeneous materials or heterogeneous slabs of up to 3 cm. The beams were incident perpendicular to the phantom's surface with initial particle energies ranging from 50 to 250 MeV/A with a total number of 10{sup 7} ions per beam. For comparison a special evaluation software was developed calculating the gamma indices for dose distributions. Results: In homogeneous phantoms, maximum range deviations between PB and MC of less than 1.1% and differences in the width of the distal energy falloff of the Bragg-Peak from 80% to 20% of less than 0.1 mm were found. Heterogeneous phantoms using layered slabs satisfied a {gamma}-index criterion of 2%/2mm of the local value except for some single voxels. For more complex phantoms using laterally arranged bone-air slabs, the {gamma}-index criterion was exceeded in some areas giving a maximum {gamma}-index of 1.75 and 4.9% of the voxels showed {gamma}-index values larger than one. The calculation precision of the
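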

  18. Helical tomotherapy superficial dose measurements

    SciTech Connect

    Ramsey, Chester R.; Seibert, Rebecca M.; Robison, Benjamin; Mitchell, Martha

    2007-08-15

    Helical tomotherapy is a treatment technique that is delivered from a 6 MV fan beam that traces a helical path while the couch moves linearly into the bore. In order to increase the treatment delivery dose rate, helical tomotherapy systems do not have a flattening filter. As such, the dose distributions near the surface of the patient may be considerably different from other forms of intensity-modulated delivery. The purpose of this study was to measure the dose distributions near the surface for helical tomotherapy plans with a varying separation between the target volume and the surface of an anthropomorphic phantom. A hypothetical planning target volume (PTV) was defined on an anthropomorphic head phantom to simulate a 2.0 Gy per fraction IMRT parotid-sparing head and neck treatment of the upper neck nodes. A total of six target volumes were created with 0, 1, 2, 3, 4, and 5 mm of separation between the surface of the phantom and the outer edge of the PTV. Superficial doses were measured for each of the treatment deliveries using film placed in the head phantom and thermoluminescent dosimeters (TLDs) placed on the phantom's surface underneath an immobilization mask. In the 0 mm test case where the PTV extends to the phantom surface, the mean TLD dose was 1.73{+-}0.10 Gy (or 86.6{+-}5.1% of the prescribed dose). The measured superficial dose decreases to 1.23{+-}0.10 Gy (61.5{+-}5.1% of the prescribed dose) for a PTV-surface separation of 5 mm. The doses measured by the TLDs indicated that the tomotherapy treatment planning system overestimates superficial doses by 8.9{+-}3.2%. The radiographic film dose for the 0 mm test case was 1.73{+-}0.07 Gy, as compared to the calculated dose of 1.78{+-}0.05 Gy. Given the results of the TLD and film measurements, the superficial calculated doses are overestimated between 3% and 13%. Without the use of bolus, tumor volumes that extend to the surface may be underdosed. As such, it is recommended that bolus be added for these

  19. Analytical probabilistic proton dose calculation and range uncertainties

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Hennig, P.; Oelfke, U.

    2014-03-01

    We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ dz p(z) d(z) and ∫ dz p(z) d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
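
    The closed form rests on the fact that a Gaussian range-uncertainty density integrated against a Gaussian depth-dose component is again Gaussian. A sketch verifying the expected-dose formula numerically, with illustrative component parameters:

    ```python
    import math

    def gauss(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    # Depth dose as a weighted sum of Gaussians (weight, mean cm, width cm);
    # illustrative values, not fitted proton data.
    components = [(0.6, 10.0, 0.5), (0.4, 10.8, 0.3)]

    def expected_dose(mu_z, sigma_z):
        """Closed form E[d] = sum_k w_k * N(mu_z; mu_k, delta_k^2 + sigma_z^2):
        the product of two Gaussians integrates to a Gaussian in the means."""
        return sum(w * gauss(mu_z, mu_k, d_k**2 + sigma_z**2)
                   for w, mu_k, d_k in components)

    def expected_dose_numeric(mu_z, sigma_z, n=20001, span=6.0):
        """Direct numerical evaluation of the same integral, as a check."""
        lo = mu_z - span
        h = 2 * span / (n - 1)
        total = 0.0
        for i in range(n):
            z = lo + i * h
            d = sum(w * gauss(z, mu_k, d_k**2) for w, mu_k, d_k in components)
            total += gauss(z, mu_z, sigma_z**2) * d * h
        return total

    print(expected_dose(10.2, 0.4), expected_dose_numeric(10.2, 0.4))  # agree
    ```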

  20. Single daily dosing of aminoglycosides.

    PubMed

    Preston, S L; Briceland, L L

    1995-01-01

    To evaluate the rationale behind dosing aminoglycosides as a single daily dose versus traditional dosing approaches, we conducted a MEDLINE search to identify all pertinent articles, and also reviewed the references of all articles. Single daily dosing of aminoglycosides is not a new concept, having been examined since 1974. The advantages of this regimen include optimum concentration-dependent bactericidal activity, longer dosing intervals due to the postantibiotic effect (PAE), and prevention of bacterial adaptive resistance. Because of longer dosing intervals, toxicity may also be delayed or reduced. Costs may be reduced due to decreased monitoring and administration. Clinically, the regimen has been implemented in various patient populations with reported success. Questions remain, however, about optimum dose, peak and trough serum concentrations, and dose adjustment in patients with renal impairment or neutropenia. More clinical experience with this method in large numbers of patients has to be published. Pharmacists can be instrumental in monitoring patients receiving once-daily therapy and by educating health care professionals as to the rationale behind the therapy.

  1. Bayesian estimation of dose thresholds

    NASA Technical Reports Server (NTRS)

    Groer, P. G.; Carnes, B. A.

    2003-01-01

    An example is described of Bayesian estimation of radiation absorbed dose thresholds (subsequently simply referred to as dose thresholds) using a specific parametric model applied to a data set on mice exposed to 60Co gamma rays and fission neutrons. A Weibull based relative risk model with a dose threshold parameter was used to analyse, as an example, lung cancer mortality and determine the posterior density for the threshold dose after single exposures to 60Co gamma rays or fission neutrons from the JANUS reactor at Argonne National Laboratory. The data consisted of survival, censoring times and cause of death information for male B6CF1 unexposed and exposed mice. The 60Co gamma whole-body doses for the two exposed groups were 0.86 and 1.37 Gy. The neutron whole-body doses were 0.19 and 0.38 Gy. Marginal posterior densities for the dose thresholds for neutron and gamma radiation were calculated with numerical integration and found to have quite different shapes. The density of the threshold for 60Co is unimodal with a mode at about 0.50 Gy. The threshold density for fission neutrons declines monotonically from a maximum value at zero with increasing doses. The posterior densities for all other parameters were similar for the two radiation types.

  2. Pseudomonas aeruginosa dose response and bathing water infection.

    PubMed

    Roser, D J; van den Akker, B; Boase, S; Haas, C N; Ashbolt, N J; Rice, S A

    2014-03-01

    Pseudomonas aeruginosa is the opportunistic pathogen mostly implicated in folliculitis and acute otitis externa in pools and hot tubs. Nevertheless, infection risks remain poorly quantified. This paper reviews disease aetiologies and bacterial skin colonization science to advance dose-response theory development. Three model forms are identified for predicting disease likelihood from pathogen density. Two are based on Furumoto & Mickey's exponential 'single-hit' model and predict infection likelihood and severity (lesions/m2), respectively. 'Third-generation', mechanistic, dose-response algorithm development is additionally scoped. The proposed formulation integrates dispersion, epidermal interaction, and follicle invasion. The review also details uncertainties needing consideration which pertain to water quality, outbreaks, exposure time, infection sites, biofilms, cerumen, environmental factors (e.g. skin saturation, hydrodynamics), and whether P. aeruginosa is endogenous or exogenous. The review's findings are used to propose a conceptual infection model and identify research priorities including pool dose-response modelling, epidermis ecology and infection likelihood-based hygiene management.
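
    The exponential "single-hit" forms mentioned above are one-parameter models. A minimal sketch, with the parameter r as a placeholder; the review's point is precisely that such values remain poorly quantified for P. aeruginosa:

    ```python
    import math

    def exponential_dose_response(dose_organisms, r):
        """Furumoto & Mickey style exponential single-hit model: each
        organism independently initiates infection with probability r,
        so P(response) = 1 - exp(-r * dose). r here is a placeholder."""
        return 1.0 - math.exp(-r * dose_organisms)

    def expected_lesions_per_m2(dose_per_m2, r):
        """Severity variant: expected lesion density proportional to hits."""
        return r * dose_per_m2

    print(exponential_dose_response(1e4, r=1e-5))  # ~0.095
    ```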

  3. Quantitative comparison of dose distribution in radiotherapy plans using 2D gamma maps and X-ray computed tomography

    PubMed Central

    Balosso, Jacques

    2016-01-01

    Background The advanced dose calculation algorithms implemented in treatment planning systems (TPS) have remarkably improved the accuracy of dose calculation, especially the modeling of electron transport in low-density media. The purpose of this study is to evaluate the use of the 2D gamma (γ) index to quantify and evaluate the impact of the calculation of electron transport on dose distribution for lung radiotherapy. Methods X-ray computed tomography images were used to calculate the dose for twelve radiotherapy treatment plans. The doses were originally calculated with the Modified Batho (MB) 1D density correction method, and recalculated with the anisotropic analytical algorithm (AAA), using the same prescribed dose. Dose parameters derived from dose volume histograms (DVH) and target coverage indices were compared. To compare dose distributions, the 2D γ-index was applied with criteria ranging from 1%/1 mm to 6%/6 mm. The results were displayed using γ-maps in 2D. Correlation between DVH metrics and γ passing rates was tested using Spearman’s rank test, and the Wilcoxon paired test was used to calculate P values. Results The plans generated with AAA predicted a more heterogeneous dose distribution inside the target (P<0.05). However, MB overestimated the dose, predicting greater coverage of the target by the prescribed dose. The γ analysis showed that the difference between MB and AAA could reach up to ±10%. The 2D γ-maps illustrated that AAA predicted more dose to organs at risk, as well as lower dose to the target, compared to MB. Conclusions Accounting for electron transport in radiotherapy plans has a significant impact on the delivered dose and dose distribution. If the AAA dose is taken to represent the true cumulative dose, the prescribed dose should be readjusted and the plan optimized to protect the organs at risk in order to obtain a better clinical outcome. PMID:27429908
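
    A brute-force version of the 2D γ-index used here (the standard Low et al. formulation with global dose normalization) fits in a few lines. A sketch suitable only for small grids:

    ```python
    import numpy as np

    def gamma_2d(ref, ev, spacing_mm, dose_crit_pct=3.0, dist_crit_mm=3.0):
        """Global 2D gamma index: for each reference point, the minimum over
        evaluated points of sqrt(dd^2/DD^2 + dr^2/DTA^2). O(N^2) brute force,
        fine for small grids only."""
        dd_max = dose_crit_pct / 100.0 * ref.max()   # global dose criterion
        ys, xs = np.meshgrid(np.arange(ref.shape[0]), np.arange(ref.shape[1]),
                             indexing="ij")
        pos = np.stack([ys.ravel(), xs.ravel()], axis=1) * spacing_mm
        ev_flat = ev.ravel()
        gamma = np.empty(ref.size)
        for i, (p, d_ref) in enumerate(zip(pos, ref.ravel())):
            dist2 = ((pos - p) ** 2).sum(axis=1)
            dose2 = (ev_flat - d_ref) ** 2
            gamma[i] = np.sqrt(np.min(dist2 / dist_crit_mm**2 + dose2 / dd_max**2))
        return gamma.reshape(ref.shape)

    ref = np.ones((20, 20))
    ev = ref * 1.02                                  # uniform 2% difference
    g = gamma_2d(ref, ev, spacing_mm=1.0)
    print((g <= 1).mean())                           # passing rate: 1.0 at 3%/3mm
    ```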

  4. An estimate of the propagated uncertainty for a dosemeter algorithm used for personnel monitoring.

    PubMed

    Veinot, K G

    2015-03-01

    The Y-12 National Security Complex utilises thermoluminescent dosemeters (TLDs) to monitor personnel for external radiation doses. Each TLD consists of four elements positioned behind various filters; dosemeters are processed on site and the readings input into an algorithm to determine worker dose. When processing dosemeters and determining the dose equivalent to the worker, a number of steps are involved, including TLD reader calibration, TLD element calibration, corrections for fade and background, and inherent sensitivities of the dosemeter algorithm. In order to better understand the total uncertainty in calculated doses, a series of calculations were performed using certain assumptions and measurement data. Individual contributions to the uncertainty were propagated through the process, including final dose calculations for a number of representative source types. Although the uncertainty in a worker's calculated dose is not formally reported, these calculations can be used to verify the adequacy of a facility's dosimetry process.
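
    For independent multiplicative correction factors, the propagation itself is addition of relative uncertainties in quadrature. A sketch with placeholder uncertainty values:

    ```python
    import math

    # Illustrative relative standard uncertainties of the steps named above;
    # placeholder values, not the facility's actual figures.
    relative_uncertainties = {
        "reader_calibration": 0.02,
        "element_calibration": 0.03,
        "fade_correction": 0.015,
        "background_subtraction": 0.01,
        "algorithm_response": 0.04,   # source-type dependent in practice
    }

    # Assuming independence, relative uncertainties combine in quadrature.
    combined = math.sqrt(sum(u**2 for u in relative_uncertainties.values()))
    print(f"Combined standard uncertainty: {combined:.1%}")  # ~5.8%
    ```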

  5. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.

    1990-09-01

    This monthly report summarizes the technical progress and project status for the Hanford Environmental Dose Reconstruction (HEDR) Project being conducted at the Pacific Northwest Laboratory (PNL) under the direction of a Technical Steering Panel (TSP). The TSP is composed of experts in numerous technical fields related to this project and represents the interests of the public. The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms, environmental transport, environmental monitoring data, demographics, agriculture, food habits, environmental pathways and dose estimates. 3 figs.

  6. Exercise Dose in Clinical Practice.

    PubMed

    Wasfy, Meagan M; Baggish, Aaron L

    2016-06-07

    There is wide variability in the physical activity patterns of the patients in contemporary clinical cardiovascular practice. This review is designed to address the impact of exercise dose on key cardiovascular risk factors and on mortality. We begin by examining the body of literature that supports a dose-response relationship between exercise and cardiovascular disease risk factors, including plasma lipids, hypertension, diabetes mellitus, and obesity. We next explore the relationship between exercise dose and mortality by reviewing the relevant epidemiological literature underlying current physical activity guideline recommendations. We then expand this discussion to critically examine recent data pertaining to the impact of exercise dose at the lowest and highest ends of the spectrum. Finally, we provide a framework for how the key concepts of exercise dose can be integrated into clinical practice.

  7. Nanoparticle-based cancer treatment: can delivered dose and biological dose be reliably modeled and quantified?

    NASA Astrophysics Data System (ADS)

    Hoopes, P. Jack; Petryk, Alicia A.; Giustini, Andrew J.; Stigliano, Robert V.; D'Angelo, Robert N.; Tate, Jennifer A.; Cassim, Shiraz M.; Foreman, Allan; Bischof, John C.; Pearce, John A.; Ryan, Thomas

    2011-03-01

    Essential developments in the reliable and effective use of heat in medicine include: 1) the ability to model energy deposition and the resulting thermal distribution and tissue damage (Arrhenius models) over time in 3D, 2) the development of non-invasive thermometry and imaging for tissue damage monitoring, and 3) the development of clinically relevant algorithms for accurate prediction of the biological effect resulting from a delivered thermal dose in mammalian cells, tissues, and organs. The accuracy and usefulness of this information varies with the type of thermal treatment, sensitivity and accuracy of tissue assessment, and volume, shape, and heterogeneity of the tumor target and normal tissue. That said, without the development of an algorithm that has allowed the comparison and prediction of the effects of hyperthermia in a wide variety of tumor and normal tissues and settings (cumulative equivalent minutes/ CEM), hyperthermia would never have achieved clinical relevance. A new hyperthermia technology, magnetic nanoparticle-based hyperthermia (mNPH), has distinct advantages over the previous techniques: the ability to target the heat to individual cancer cells (with a nontoxic nanoparticle), and to excite the nanoparticles noninvasively with a noninjurious magnetic field, thus sparing associated normal cells and greatly improving the therapeutic ratio. As such, this modality has great potential as a primary and adjuvant cancer therapy. Although the targeted and safe nature of the noninvasive external activation (hysteretic heating) are a tremendous asset, the large number of therapy based variables and the lack of an accurate and useful method for predicting, assessing and quantifying mNP dose and treatment effect is a major obstacle to moving the technology into routine clinical practice. Among other parameters, mNPH will require the accurate determination of specific nanoparticle heating capability, the total nanoparticle content and biodistribution in
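
    The cumulative-equivalent-minutes algorithm referred to above is the Sapareto-Dewey thermal dose, which rescales exposure time to equivalent minutes at 43 °C. A minimal sketch:

    ```python
    def cem43(temps_c, dt_min):
        """Cumulative equivalent minutes at 43 C (Sapareto-Dewey):
        CEM43 = sum over samples of dt * R^(43 - T), with R = 0.25 below
        43 C and R = 0.5 at or above 43 C."""
        total = 0.0
        for t in temps_c:
            r = 0.5 if t >= 43.0 else 0.25
            total += dt_min * r ** (43.0 - t)
        return total

    # 30 one-minute samples at 44 C -> 60 equivalent minutes at 43 C
    print(cem43([44.0] * 30, dt_min=1.0))
    ```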

  8. Optimal radiotherapy dose schedules under parametric uncertainty

    NASA Astrophysics Data System (ADS)

    Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin

    2016-01-01

    We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or sparing factor of the organs-at-risk (OAR) are not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
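
    The feasibility constraints in such scheduling problems are typically expressed through the linear-quadratic biologically effective dose. A sketch of the deterministic check that the stochastic formulation replaces with a chance constraint; all parameter values are illustrative:

    ```python
    def bed(n_fractions, dose_per_fraction_gy, alpha_beta_gy):
        """Biologically effective dose under the linear-quadratic model:
        BED = n * d * (1 + d / (alpha/beta))."""
        return n_fractions * dose_per_fraction_gy * (1 + dose_per_fraction_gy / alpha_beta_gy)

    def oar_feasible(n, d, sparing_factor, alpha_beta_oar, bed_limit):
        """The OAR receives d * sparing_factor per fraction; a schedule is
        feasible if the OAR BED stays under its tolerance. In the stochastic
        setting, sparing_factor and alpha/beta become random variables."""
        return bed(n, d * sparing_factor, alpha_beta_oar) <= bed_limit

    print(bed(30, 2.0, 10.0))                      # tumor BED: 72 Gy_10
    print(oar_feasible(30, 2.0, 0.7, 3.0, 100.0))  # True under these assumptions
    ```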

  9. Fast voxel and polygon ray-tracing algorithms in intensity modulated radiation therapy treatment planning.

    PubMed

    Fox, Christopher; Romeijn, H Edwin; Dempsey, James F

    2006-05-01

    We present work on combining three algorithms to improve ray-tracing efficiency in radiation therapy dose computation. The three algorithms include: An improved point-in-polygon algorithm, incremental voxel ray tracing algorithm, and stereographic projection of beamlets for voxel truncation. The point-in-polygon and incremental voxel ray-tracing algorithms have been used in computer graphics and nuclear medicine applications while the stereographic projection algorithm was developed by our group. These algorithms demonstrate significant improvements over the current standard algorithms in peer reviewed literature, i.e., the polygon and voxel ray-tracing algorithms of Siddon for voxel classification (point-in-polygon testing) and dose computation, respectively, and radius testing for voxel truncation. The presented polygon ray-tracing technique was tested on 10 intensity modulated radiation therapy (IMRT) treatment planning cases that required the classification of between 0.58 and 2.0 million voxels on a 2.5 mm isotropic dose grid into 1-4 targets and 5-14 structures represented as extruded polygons (a.k.a. Siddon prisms). Incremental voxel ray tracing and voxel truncation employing virtual stereographic projection was tested on the same IMRT treatment planning cases where voxel dose was required for 230-2400 beamlets using a finite-size pencil-beam algorithm. Between a 100 and 360 fold cpu time improvement over Siddon's method was observed for the polygon ray-tracing algorithm to perform classification of voxels for target and structure membership. Between a 2.6 and 3.1 fold reduction in cpu time over current algorithms was found for the implementation of incremental ray tracing. Additionally, voxel truncation via stereographic projection was observed to be 11-25 times faster than the radial-testing beamlet extent approach and was further improved 1.7-2.0 fold through point-classification using the method of translation over the cross product technique.
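
    Of the three components, the point-in-polygon kernel is the easiest to illustrate. A sketch of the standard ray-crossing test; the paper's improved variant adds optimizations for classifying many voxels against each Siddon prism:

    ```python
    def point_in_polygon(x, y, vertices):
        """Standard ray-crossing point-in-polygon test: cast a ray in +x
        and toggle on each edge crossing; odd parity means inside."""
        inside = False
        n = len(vertices)
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            if (y1 > y) != (y2 > y):                 # edge spans the ray's y
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > x:
                    inside = not inside
        return inside

    square = [(0, 0), (4, 0), (4, 4), (0, 4)]
    print(point_in_polygon(2, 2, square), point_in_polygon(5, 2, square))  # True False
    ```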

  10. Optimization of dosing regimens and dosing in special populations.

    PubMed

    Sime, F B; Roberts, M S; Roberts, J A

    2015-10-01

    Treatment of infectious diseases is becoming increasingly challenging with the emergence of less-susceptible organisms that are poorly responsive to existing antibiotic therapies, and the unpredictable pharmacokinetic alterations arising from complex pathophysiologic changes in some patient populations. In view of this fact, there has been progressive work on novel dose optimization strategies to renew the utility of forgotten old antibiotics and to improve the efficacy of those currently in use. This review summarizes the different approaches of optimization of antibiotic dosing regimens and the special patient populations which may benefit most from these approaches. The existing methods are based on monitoring of antibiotic concentrations and/or use of clinical covariates. Measured concentrations can be correlated with predefined pharmacokinetic/pharmacodynamic targets to guide clinicians in predicting the necessary dose adjustment. Dosing nomograms are also available to relate observed concentrations or clinical covariates (e.g. creatinine clearance) with optimal dosing. More precise dose prediction based on observed covariates is possible through the application of population pharmacokinetic models. However, the most accurate estimation of individualized dosing requirements is achieved through Bayesian forecasting which utilizes both measured concentration and clinical covariates. Various software programs are emerging to ease clinical application. Whilst more studies are warranted to clarify the clinical outcomes associated with the different dose optimization approaches, severely ill patients in the course of marked infections and/or inflammation including those with sepsis, septic shock, severe trauma, burns injury, major surgery, febrile neutropenia, cystic fibrosis, organ dysfunction and obesity are those groups which may benefit most from individualized dosing.

  11. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    SciTech Connect

    Schuemann, J; Grassberger, C; Paganetti, H; Dowdell, S

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal position of the 80% and 20% dose levels (R80- R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend

  12. Dose escalation in permanent brachytherapy for prostate cancer: dosimetric and biological considerations

    NASA Astrophysics Data System (ADS)

    Li, X. Allen; Wang, Jian Z.; Stewart, Robert D.; Di Biase, Steven J.

    2003-09-01

    No prospective dose escalation study for prostate brachytherapy (PB) with permanent implants has been reported. In this work, we have performed a dosimetric and biological analysis to explore the implications of dose escalation in PB using 125I and 103Pd implants. The concept of equivalent uniform dose (EUD), proposed originally for external-beam radiotherapy (EBRT), is applied to low dose rate brachytherapy. For a given 125I or 103Pd PB, the EUD for tumour that corresponds to a dose distribution delivered by EBRT is calculated based on the linear quadratic model. The EUD calculation is based on the dose volume histogram (DVH) obtained retrospectively from representative actual patient data. Tumour control probabilities (TCPs) are also determined in order to compare the relative effectiveness of different dose levels. The EUD for normal tissue is computed using the Lyman model. A commercial inverse treatment planning algorithm is used to investigate the feasibility of escalating the dose to prostate with acceptable dose increases in the rectum and urethra. The dosimetric calculation is performed for five representative patients with different prostate sizes. A series of PB dose levels are considered for each patient using 125I and 103Pd seeds. It is found that the PB prescribed doses (minimum peripheral dose) that give an equivalent EBRT dose of 64.8, 70.2, 75.6 and 81 Gy with a fraction size of 1.8 Gy are 129, 139, 150 and 161 Gy for 125I and 103, 112, 122 and 132 Gy for 103Pd implants, respectively. Estimates of the EUD and TCP for a series of possible prescribed dose levels (e.g., 145, 160, 170 and 180 Gy for 125I and 125, 135, 145 and 155 for 103Pd implants) are tabulated. The EUD calculation was found to depend strongly on DVHs and radiobiological parameters. The dosimetric calculations suggest that the dose to prostate can be escalated without a substantial increase in both rectal and urethral dose. For example, increasing the PB prescribed dose from 145 to
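
    This study computes EUD from the linear-quadratic model; for illustration, the widely used phenomenological gEUD reduction of a DVH is sketched below (not the paper's exact formulation):

    ```python
    def generalized_eud(dvh_doses_gy, dvh_volumes, a):
        """Niemierko's generalized EUD from a differential DVH (fractional
        volumes summing to 1): gEUD = (sum_i v_i * D_i^a)^(1/a). Large
        negative a approaches the minimum dose (tumours); large positive a
        approaches the maximum (serial organs at risk)."""
        return sum(v * d ** a for d, v in zip(dvh_doses_gy, dvh_volumes)) ** (1.0 / a)

    # Illustrative DVH bins for a permanent-implant dose distribution
    doses_gy, volumes = [120.0, 145.0, 170.0], [0.2, 0.5, 0.3]
    print(generalized_eud(doses_gy, volumes, a=-10))  # near-minimum-weighted EUD
    ```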

  13. Prediction of standard-dose brain PET image by using MRI and low-dose brain [{sup 18}F]FDG PET images

    SciTech Connect

    Kang, Jiayin; Gao, Yaozong; Shi, Feng; Lalush, David S.; Lin, Weili; Shen, Dinggang

    2015-09-15

    Purpose: Positron emission tomography (PET) is a nuclear medical imaging technology that produces 3D images reflecting tissue metabolic activity in human body. PET has been widely used in various clinical applications, such as in diagnosis of brain disorders. High-quality PET images play an essential role in diagnosing brain diseases/disorders. In practice, in order to obtain high-quality PET images, a standard-dose radionuclide (tracer) needs to be used and injected into a living body. As a result, it will inevitably increase the patient’s exposure to radiation. One solution to solve this problem is predicting standard-dose PET images using low-dose PET images. As yet, no previous studies with this approach have been reported. Accordingly, in this paper, the authors propose a regression forest based framework for predicting a standard-dose brain [{sup 18}F]FDG PET image by using a low-dose brain [{sup 18}F]FDG PET image and its corresponding magnetic resonance imaging (MRI) image. Methods: The authors employ a regression forest for predicting the standard-dose brain [{sup 18}F]FDG PET image by low-dose brain [{sup 18}F]FDG PET and MRI images. Specifically, the proposed method consists of two main steps. First, based on the segmented brain tissues (i.e., cerebrospinal fluid, gray matter, and white matter) in the MRI image, the authors extract features for each patch in the brain image from both low-dose PET and MRI images to build tissue-specific models that can be used to initially predict standard-dose brain [{sup 18}F]FDG PET images. Second, an iterative refinement strategy, via estimating the predicted image difference, is used to further improve the prediction accuracy. Results: The authors evaluated their algorithm on a brain dataset, consisting of 11 subjects with MRI, low-dose PET, and standard-dose PET images, using leave-one-out cross-validations. The proposed algorithm gives promising results with well-estimated standard-dose brain [{sup 18}F]FDG PET

  14. Estimation of internal organ motion-induced variance in radiation dose in non-gated radiotherapy

    NASA Astrophysics Data System (ADS)

    Zhou, Sumin; Zhu, Xiaofeng; Zhang, Mutian; Zheng, Dandan; Lei, Yu; Li, Sicong; Bennion, Nathan; Verma, Vivek; Zhen, Weining; Enke, Charles

    2016-12-01

    In the delivery of non-gated radiotherapy (RT), owing to intra-fraction organ motion, a certain degree of RT dose uncertainty is present. Herein, we propose a novel mathematical algorithm to estimate the mean and variance of RT dose that is delivered without gating. These parameters are specific to individual internal organ motion, dependent on individual treatment plans, and relevant to the RT delivery process. This algorithm uses images from a patient’s 4D simulation study to model the actual patient internal organ motion during RT delivery. All necessary dose rate calculations are performed in fixed patient internal organ motion states. The analytical and deterministic formulae of mean and variance in dose from non-gated RT were derived directly via statistical averaging of the calculated dose rate over possible random internal organ motion initial phases, and did not require constructing relevant histograms. All results are expressed in dose rate Fourier transform coefficients for computational efficiency. Exact solutions are provided to simplified, yet still clinically relevant, cases. Results from a volumetric-modulated arc therapy (VMAT) patient case are also presented. The results obtained from our mathematical algorithm can aid clinical decisions by providing information regarding both mean and variance of radiation dose to non-gated patients prior to RT delivery.

  15. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.

  16. Algorithms and Libraries

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our inquiry into algorithms and applications that would benefit by latency tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks a long and unpredictable latency due to remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, latency etc. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault-tolerance and computation migration of numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results we achieved in this study we are planning to study other architectures of interest, including development of cost models, and developing code generators appropriate to these architectures.

  17. Algorithmization in Learning and Instruction.

    ERIC Educational Resources Information Center

    Landa, L. N.

    An introduction to the theory of algorithms reviews the theoretical issues of teaching algorithms, the logical and psychological problems of devising algorithms of identification, and the selection of efficient algorithms; and then relates all of these to the classroom teaching process. It also describes some major research on the effectiveness of…

  18. Validation of the Eclipse AAA algorithm at extended SSD.

    PubMed

    Hussain, Amjad; Villarreal-Barajas, Eduardo; Brown, Derek; Dunscombe, Peter

    2010-06-08

    The accuracy of dose calculations at extended SSD is of significant importance in the dosimetric planning of total body irradiation (TBI). In a first step toward the implementation of electronic, multi-leaf collimator compensation for dose inhomogeneities and surface contour in TBI, we have evaluated the ability of the Eclipse AAA to accurately predict dose distributions in water at extended SSD. For this purpose, we use the Eclipse AAA algorithm, commissioned with machine-specific beam data for a 6 MV photon beam, at standard SSD (100 cm). The model was then used for dose distribution calculations at extended SSD (179.5 cm). Two sets of measurements were acquired for a 6 MV beam (from a Varian linear accelerator) in a water tank at extended SSD: i) open beam for 5 x 5, 10 x 10, 20 x 20 and 40 x 40 cm2 field sizes (defined at 179.5 cm SSD), and ii) identical field sizes but with a 1.3 cm thick acrylic spoiler placed 10 cm above the water surface. Dose profiles were acquired at 5 cm, 10 cm and 20 cm depths. Dose distributions for the two setups were calculated using the AAA algorithm in Eclipse. Confidence limits for comparisons between measured and calculated absolute depth dose curves and normalized dose profiles were determined as suggested by Venselaar et al. The confidence limits were within 2% and 2 mm for both setups. Extended SSD calculations were also performed using Eclipse AAA, commissioned with Varian Golden beam data at standard SSD. No significant difference between the custom commissioned and Golden Eclipse AAA was observed. In conclusion, Eclipse AAA commissioned at standard SSD can be used to accurately predict dose distributions in water at extended SSD for 6 MV open beams.

  19. Radiation dose estimates for radiopharmaceuticals

    SciTech Connect

    Stabin, M.G.; Stubbs, J.B.; Toohey, R.E.

    1996-04-01

    Tables of radiation dose estimates based on the Cristy-Eckerman adult male phantom are provided for a number of radiopharmaceuticals commonly used in nuclear medicine. Radiation dose estimates are listed for all major source organs, and several other organs of interest. The dose estimates were calculated using the MIRD Technique as implemented in the MIRDOSE3 computer code, developed by the Oak Ridge Institute for Science and Education, Radiation Internal Dose Information Center. In this code, residence times for source organs are used with decay data from the MIRD Radionuclide Data and Decay Schemes to produce estimates of radiation dose to organs of standardized phantoms representing individuals of different ages. The adult male phantom of the Cristy-Eckerman phantom series is different from the MIRD 5, or Reference Man phantom in several aspects, the most important of which is the difference in the masses and absorbed fractions for the active (red) marrow. The absorbed fractions for low-energy photons striking the marrow are also different. Other minor differences exist, but are not likely to significantly affect dose estimates calculated with the two phantoms. Assumptions which support each of the dose estimates appear at the bottom of the table of estimates for a given radiopharmaceutical. In most cases, the model kinetics or organ residence times are explicitly given. The results presented here can easily be extended to include other radiopharmaceuticals or phantoms.
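
    In the MIRD schema, the absorbed dose to a target organ is the sum over source organs of the residence time multiplied by the corresponding S value; a toy sketch with purely hypothetical numbers (not taken from the tables described here):

        # MIRD schema: D(target) = sum over sources of A_tilde (residence time) * S(target <- source)
        residence_times_h = {"liver": 1.2, "kidneys": 0.4}      # hypothetical MBq-h per MBq administered
        s_values = {"liver": 3.0e-3, "kidneys": 8.0e-4}         # hypothetical mGy per MBq-h to one target

        dose_per_MBq = sum(residence_times_h[src] * s_values[src] for src in residence_times_h)
        print(f"{dose_per_MBq:.2e} mGy/MBq")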

  20. MO-PIS-Exhibit Hall-01: Imaging: CT Dose Optimization Technologies I

    SciTech Connect

    Denison, K; Smith, S

    2014-06-15

    Partners in Solutions is an exciting new program in which AAPM partners with our vendors to present practical “hands-on” information about the equipment and software systems that we use in our clinics. The imaging topic this year is CT scanner dose optimization capabilities. Note that the sessions are being held in a special purpose room built on the Exhibit Hall Floor, to encourage further interaction with the vendors. Dose Optimization Capabilities of GE Computed Tomography Scanners Presentation Time: 11:15 – 11:45 AM GE Healthcare is dedicated to the delivery of high quality clinical images through the development of technologies which optimize the application of ionizing radiation. In computed tomography, dose management solutions fall into four categories. The first employs projection data and statistical modeling to decrease noise in the reconstructed image, creating an opportunity for mA reduction in the acquisition of diagnostic images. Veo represents true Model Based Iterative Reconstruction (MBIR). Using high-level algorithms in tandem with advanced computing power, Veo enables lower pixel noise standard deviation and improved spatial resolution within a single image. Advanced Adaptive Image Filters allow for maintenance of spatial resolution while reducing image noise. Examples of adaptive image space filters include Neuro 3-D filters and Cardiac Noise Reduction Filters. AutomA adjusts mA along the z-axis and is the CT equivalent of auto exposure control in conventional x-ray systems. Dynamic Z-axis Tracking offers an additional opportunity for dose reduction in helical acquisitions, while SmartTrack Z-axis Tracking serves to ensure beam, collimator and detector alignment during tube rotation. SmartmA provides angular mA modulation. ECG Helical Modulation reduces mA during the systolic phase of the heart cycle. SmartBeam optimization uses bowtie beam-shaping hardware and software to filter off-axis x-rays, minimizing dose and reducing x-ray scatter. The

  1. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how close the estimated spectrum is to the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
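
    Maximum-entropy spectral estimation is equivalent to fitting an autoregressive (AR) model to the data; since the original FORTRAN 77 program is not reproduced here, the following Python sketch of that equivalence, using the Yule-Walker equations, is illustrative only:

        import numpy as np

        def mem_psd(x, order, nfreq=512):
            """Maximum-entropy (AR) PSD estimate via the Yule-Walker equations."""
            x = np.asarray(x, float) - np.mean(x)
            N = len(x)
            # biased autocorrelation estimates r[0..order]
            r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(order + 1)])
            R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
            a = np.linalg.solve(R, r[1:])            # AR coefficients
            sigma2 = r[0] - np.dot(a, r[1:])         # prediction-error variance
            freqs = np.linspace(0.0, 0.5, nfreq)     # cycles per sample
            z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
            psd = sigma2 / np.abs(1.0 - z @ a) ** 2  # MEM spectrum: sigma^2 / |A(f)|^2
            return freqs, psd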

  2. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
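
    The mixing step amounts to a linear combination of ice and water emissivities weighted by ice concentration, after which brightness temperatures can be converted to emissivities via the retrieved surface temperature. A schematic sketch, with all numbers hypothetical:

        # effective surface emissivity from a linear mixing formulation
        def effective_emissivity(ice_conc, eps_ice, eps_water):
            return ice_conc * eps_ice + (1.0 - ice_conc) * eps_water

        eps_eff_6ghz = effective_emissivity(0.85, 0.92, 0.55)   # hypothetical 6 GHz emissivities
        t_surface = 250.0 / eps_eff_6ghz                        # surface temperature from a 6 GHz Tb of 250 K
        emissivity_37ghz = 210.0 / t_surface                    # convert a 37 GHz Tb of 210 K to emissivity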

  3. Optical rate sensor algorithms

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, Jo A.

    1989-01-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples; a VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.

  4. Kernel Affine Projection Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
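
    KLMS, the starting point for KAPA, is simple enough to sketch: each new sample adds a kernel center whose coefficient is the step size times the prediction error. A minimal Gaussian-kernel sketch (the parameter values are illustrative, not from the paper):

        import numpy as np

        def klms(X, y, eta=0.5, sigma=1.0):
            """Kernel least-mean-square with a Gaussian kernel (online)."""
            centers, alphas, preds = [], [], []
            for x, d in zip(X, y):
                if centers:
                    k = np.exp(-np.sum((np.array(centers) - x) ** 2, axis=1) / (2 * sigma ** 2))
                    f = float(np.dot(alphas, k))     # current prediction
                else:
                    f = 0.0
                preds.append(f)
                centers.append(x)                    # dictionary grows by one center per sample
                alphas.append(eta * (d - f))         # coefficient = step size * prediction error
            return np.array(preds)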

  5. Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization

    PubMed Central

    Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.

    2014-01-01

    High radiation dose in CT scans increases a lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, the low contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original total variation norm. During the reconstruction process, the pixels at edges would be gradually identified and given small penalty weight. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing the streaking artifacts and image noise in the low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information of low contrast structures and therefore maintain acceptable spatial resolution. PMID:21860076
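
    The edge-preserving penalty can be realized as a per-pixel weight on the TV norm that falls toward zero where the image gradient is large; one plausible weighting is sketched below (the functional form and delta are assumptions for illustration, not necessarily the paper's exact choice):

        import numpy as np

        def eptv_weights(img, delta=0.01):
            # weight ~1 in smooth regions, -> 0 at strong gradients, so edges are penalized less
            gy, gx = np.gradient(img)
            gmag = np.sqrt(gx ** 2 + gy ** 2)
            return 1.0 / (1.0 + (gmag / delta) ** 2)

        def eptv_norm(img, w):
            # weighted-TV term: sum of w * |gradient|, used inside the iterative reconstruction
            gy, gx = np.gradient(img)
            return float(np.sum(w * np.sqrt(gx ** 2 + gy ** 2)))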

  6. Low-dose CT reconstruction via edge-preserving total variation regularization

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.

    2011-09-01

    High radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with total variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, the low-contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV (EPTV) regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an EPTV norm and a data fidelity term posed by the x-ray projections. The EPTV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original TV norm. During the reconstruction process, the pixels at the edges would be gradually identified and given low penalty weight. Our iterative algorithm is implemented on graphics processing unit to improve its speed. We test our reconstruction algorithm on a digital NURBS-based cardiac-torso phantom, a physical chest phantom and a Catphan phantom. Reconstruction results from a conventional filtered backprojection (FBP) algorithm and a TV regularization method without edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our EPTV algorithm outperform the conventional FBP algorithm in suppressing the streaking artifacts and image noise under a low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information of low-contrast structures and therefore maintain acceptable spatial resolution.

  7. SU-E-J-89: Motion Effects On Organ Dose in Respiratory Gated Stereotactic Body Radiation Therapy

    SciTech Connect

    Wang, T; Zhu, L; Khan, M; Landry, J; Rajpara, R; Hawk, N

    2014-06-01

    Purpose: Existing reports on gated radiation therapy focus mainly on optimizing dose delivery to the target structure. This work investigates the motion effects on radiation dose delivered to organs at risk (OAR) in respiratory gated stereotactic body radiation therapy (SBRT). A new algorithmic tool of dose analysis is developed to evaluate the optimality of gating phase for dose sparing on OARs while ensuring adequate target coverage. Methods: Eight patients with pancreatic cancer were treated on a phase I prospective study employing 4DCT-based SBRT. For each patient, 4DCT scans are acquired and sorted into 10 respiratory phases (inhale-exhale-inhale). Treatment planning is performed on the average CT image. The average CT is spatially registered to other phases. The resultant displacement field is then applied to the plan dose map to estimate the actual dose map for each phase. Dose values of each voxel are fitted to a sinusoidal function. Fitting parameters of dose variation, mean delivered dose and optimal gating phase for each voxel over the respiration cycle are mapped on the dose volume. Results: The sinusoidal function accurately models the dose change during respiratory motion (mean fitting error 4.6%). In the eight patients, the mean dose variation is 3.3 Gy on OARs with a maximum of 13.7 Gy. Two patients have about 100 cm³ volumes covered by more than 5 Gy deviation. The mean delivered dose maps are similar to the plan dose with slight deformation. The optimal gating phase varies greatly across the patient volume, with phase 5 or 6 on about 60% of the volume, and phase 0 on most of the rest. Conclusion: A new algorithmic tool is developed to conveniently quantify dose deviation on OARs from the plan dose during the respiratory cycle. The proposed software facilitates the treatment planning process by providing the optimal respiratory gating phase for dose sparing on each OAR.
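
    Fitting a voxel's per-phase dose to a sinusoid is a standard least-squares fit; a sketch with made-up dose values (the 10-phase cycle matches the sorting described above, everything else is hypothetical):

        import numpy as np
        from scipy.optimize import curve_fit

        def dose_model(phase, mean_dose, amplitude, phi):
            # one respiratory cycle spans the 10 sorted phases
            return mean_dose + amplitude * np.sin(2.0 * np.pi * phase / 10.0 + phi)

        phases = np.arange(10)
        voxel_dose = np.array([30.1, 29.2, 28.0, 27.4, 27.9,     # hypothetical per-phase doses (Gy)
                               29.0, 30.3, 31.2, 31.6, 30.9])
        (mean_dose, amplitude, phi), _ = curve_fit(dose_model, phases, voxel_dose,
                                                   p0=[voxel_dose.mean(), 2.0, 0.0])
        # for an OAR voxel, the optimal gating phase minimizes the fitted dose
        best_gating_phase = phases[np.argmin(dose_model(phases, mean_dose, amplitude, phi))]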

  8. ORGAN DOSES AND EFFECTIVE DOSE FOR FIVE PET RADIOPHARMACEUTICALS.

    PubMed

    Andersson, Martin; Johansson, Lennart; Mattsson, Sören; Minarik, David; Leide-Svegborn, Sigrid

    2016-06-01

    Diagnostic investigations with positron-emitting radiopharmaceuticals are dominated by (18)F-fluorodeoxyglucose ((18)F-FDG), but other radiopharmaceuticals are also commercially available or under development. Five of them, which are all clinically important, are (18)F-fluoride, (18)F-fluoroethyltyrosine ((18)F-FET), (18)F-deoxyfluorothymidine ((18)F-FLT), (18)F-fluorocholine ((18)F-choline) and (11)C-raclopride. To estimate the potential risk of stochastic effects (mainly lethal cancer) to a population, organ doses and effective dose values were updated for all five radiopharmaceuticals. Dose calculations were performed using the computer program IDAC2.0, which bases its calculations on the ICRP/ICRU adult reference voxel phantoms and the tissue weighting factors from ICRP publication 103. The biokinetic models were taken from ICRP publication 128. For organ doses, there are substantial changes. The only significant change in effective dose compared with previous estimations was a 46 % reduction for (18)F-fluoride. The estimated effective dose in mSv MBq(-1) was 1.5E-02 for (18)F-FET, 1.5E-02 for (18)F-FLT, 2.0E-02 for (18)F-choline, 9.0E-03 for (18)F-fluoride and 4.4E-03 for (11)C-raclopride.

  9. The Assessment of Effective Dose Equivalent Using Personnel Dosimeters

    NASA Astrophysics Data System (ADS)

    Xu, Xie

    From January 1994, U.S. nuclear plants must develop a technically rigorous approach for determining the effective dose equivalent for their work forces. This dissertation explains concepts associated with effective dose equivalent and describes how to assess effective dose equivalent by using conventional personnel dosimetry measurements. A Monte Carlo computer code, MCNP, was used to calculate photon transport through a model of the human body. Published mathematical phantoms of the human adult male and female were used to simulate irradiation from a variety of external radiation sources in order to calculate organ and tissue doses, as well as effective dose equivalent using weighting factors from ICRP Publication 26. The radiation sources considered were broad parallel photon beams incident on the body from 91 different angles and isotropic point sources located at 234 different locations in contact with or near the body. Monoenergetic photons of 0.08, 0.3, and 1.0 MeV were considered for both source types. Personnel dosimeters were simulated on the surface of the body and exposed to the same sources. From these data, the influence of dosimeter position on dosimeter response was investigated. Different algorithms for assessing effective dose equivalent from personnel dosimeter responses were proposed and evaluated. The results indicate that the current single-badge approach is satisfactory for most common exposure situations encountered in nuclear plants, but additional conversion factors may be used when more accurate results become desirable. For uncommon exposures involving sources situated at the back of the body or located overhead, the current approach of using multiple badges and assigning the highest dose is overly conservative and unnecessarily expensive. For these uncommon exposures, a new algorithm, based on two dosimeters, one on the front of the body and another one on the back of the body, has been shown to yield conservative assessment of

  10. Parallel Algorithms and Patterns

    SciTech Connect

    Robey, Robert W.

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
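
    Of the patterns named, the prefix scan is the least obvious; a minimal sketch of the Hillis-Steele inclusive scan follows (the per-step vector adds are what would run concurrently on a parallel machine; this serial Python version only illustrates the structure):

        import numpy as np

        def inclusive_scan(a):
            """Hillis-Steele inclusive prefix sum: O(log n) data-parallel steps."""
            a = np.array(a)
            step = 1
            while step < len(a):
                shifted = np.concatenate([np.zeros(step, a.dtype), a[:-step]])
                a = a + shifted          # this whole-vector add is the parallel step
                step *= 2
            return a

        print(inclusive_scan([1, 2, 3, 4]))   # -> [ 1  3  6 10]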

  11. Improved Chaff Solution Algorithm

    DTIC Science & Technology

    2009-03-01

    Under the Technology Demonstration Project (TDP) on the integration of shipboard sensors and weapon systems (SISWS), an algorithm was developed to automatically determine…

  12. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  13. Survey of volume CT dose index in Japan in 2014

    PubMed Central

    Kawaguchi, A; Kobayashi, K; Kinomura, Y; Kobayashi, M; Asada, Y; Minami, K; Suzuki, S; Chida, K

    2015-01-01

    Objective: The aims of this study are to propose a new set of Japanese diagnostic reference levels (DRLs) for 2014 and to study the impact of tube voltage and the type of reconstruction algorithm on patient doses. The volume CT dose index (CTDIvol) for adult and paediatric patients is assessed and compared with the results of a 2011 national survey and data from other countries. Methods: Scanning procedures for the head (non-helical and helical), chest and upper abdomen were examined for adults and 5-year-old children. A questionnaire concerning the following items was sent to 3000 facilities: tube voltage, use of reconstruction algorithms and displayed CTDIvol. Results: The mean CTDIvol values for paediatric examinations using voltages ranging from 80 to 100 kV were significantly lower than those for paediatric examinations using 120 kV. For adult examinations, the use of iterative reconstruction algorithms significantly reduced the mean CTDIvol values compared with the use of filtered back projection. Paediatric chest and abdominal scans showed slightly higher mean CTDIvol values in 2014 than in 2011. The proposed DRLs for adult head and abdominal scans were higher than those reported in other countries. Conclusion: The results imply that further optimization of CT examination protocols is required for adult head and abdominal scans as well as paediatric chest and abdominal scans. Advances in knowledge: Low-tube-voltage CT may be useful for reducing radiation doses in paediatric patients. The mean CTDIvol values for paediatric scans showed little difference that could be attributed to the choice of reconstruction algorithm. PMID:26043158

  14. Objective assessment of image quality and dose reduction in CT iterative reconstruction

    SciTech Connect

    Vaishnav, J. Y. Jung, W. C.; Popescu, L. M.; Zeng, R.; Myers, K. J.

    2014-07-15

    Purpose: Iterative reconstruction (IR) algorithms have the potential to reduce radiation dose in CT diagnostic imaging. As these algorithms become available on the market, a standardizable method of quantifying the dose reduction that a particular IR method can achieve would be valuable. Such a method would assist manufacturers in making promotional claims about dose reduction, buyers in comparing different devices, physicists in independently validating the claims, and the United States Food and Drug Administration in regulating the labeling of CT devices. However, the nonlinear nature of commercially available IR algorithms poses challenges to objectively assessing image quality, a necessary step in establishing the amount of dose reduction that a given IR algorithm can achieve without compromising that image quality. This review paper seeks to consolidate information relevant to objectively assessing the quality of CT IR images, and thereby measuring the level of dose reduction that a given IR algorithm can achieve. Methods: The authors discuss task-based methods for assessing the quality of CT IR images and evaluating dose reduction. Results: The authors explain and review recent literature on signal detection and localization tasks in CT IR image quality assessment, the design of an appropriate phantom for these tasks, possible choices of observers (including human and model observers), and methods of evaluating observer performance. Conclusions: Standardizing the measurement of dose reduction is a problem of broad interest to the CT community and to public health. A necessary step in the process is the objective assessment of CT image quality, for which various task-based methods may be suitable. This paper attempts to consolidate recent literature that is relevant to the development and implementation of task-based methods for the assessment of CT IR image quality.

  15. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    McMakin, A.H.; Cannon, S.D.; Finch, S.M.

    1992-07-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is directed by an independent Technical Steering Panel (TSP). The TSP consists of experts in environmental pathways, epidemiology, surface-water transport, ground-water transport, statistics, demography, agriculture, meteorology, nuclear engineering, radiation dosimetry, and cultural anthropology. Included are appointed technical members representing the states of Oregon, Washington, and Idaho, a representative of Native American tribes, and an individual representing the public. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; and environmental pathways and dose estimates. Progress is discussed.

  16. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1991-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Pacific Northwest Laboratory (PNL) under the direction of an independent Technical Steering Panel (TSP). The TSP consists of experts in environmental pathways, epidemiology, surface-water transport, ground-water transport, statistics, demography, agriculture, meteorology, nuclear engineering, radiation dosimetry, and cultural anthropology. Included are appointed technical members representing the states of Oregon and Washington, a representative of Native American tribes, and an individual representing the public. The project is divided into the following technical tasks, which correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demographics, Agriculture, and Food Habits; and Environmental Pathways and Dose Estimates.

  17. Gamma Radiation Doses In Sweden

    SciTech Connect

    Almgren, Sara; Isaksson, Mats; Barregaard, Lars

    2008-08-07

    Gamma dose rate measurements were performed in one urban and one rural area using thermoluminescence dosimeters (TLD) worn by 46 participants and placed in their dwellings. The personal effective dose rates were 0.096 ± 0.019 (1 SD) and 0.092 ± 0.016 (1 SD) µSv/h in the urban and rural area, respectively. The corresponding dose rates in the dwellings were 0.11 ± 0.042 (1 SD) and 0.091 ± 0.026 (1 SD) µSv/h. However, the differences between the areas were not significant. The values were higher in buildings made of concrete than of wood and higher in apartments than in detached houses. Also, ²²²Rn measurements were performed in each dwelling, which showed no correlation with the gamma dose rates in the dwellings.

  18. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
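
    The optimization step pairs the linearized propulsion model with a standard linear program; a schematic sketch with SciPy follows (the sensitivity numbers, the constraint, and the trim names are invented for illustration and are not from the PSC models):

        from scipy.optimize import linprog

        # minimize predicted fuel flow: c = sensitivities of fuel flow to each trim
        c = [0.8, -0.3]                      # hypothetical [nozzle trim, inlet trim] sensitivities

        # keep predicted fan stall margin above its limit: -S . trims <= available margin
        A_ub = [[-0.05, -0.02]]              # hypothetical stall-margin sensitivities
        b_ub = [0.5]                         # hypothetical available margin

        bounds = [(-1.0, 1.0)] * 2           # trim authority limits
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        trims = res.x                        # applied, then the model is re-linearized and re-solved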

  19. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  20. Quantum gate decomposition algorithms.

    SciTech Connect

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequences of coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general quantum gates operating on n qubits, composed of a sequence of generic elementary gates.

  1. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  2. Algorithm for reaction classification.

    PubMed

    Kraut, Hans; Eiblmaier, Josef; Grethe, Guenter; Löw, Peter; Matuszczyk, Heinz; Saller, Heinz

    2013-11-25

    Reaction classification has important applications, and many approaches to classification have been applied. Our own algorithm tests all maximum common substructures (MCS) between all reactant and product molecules in order to find an atom mapping containing the minimum chemical distance (MCD). Recent publications have concluded that new MCS algorithms need to be compared with existing methods in a reproducible environment, preferably on a generalized test set, yet the number of test sets available is small, and they are not truly representative of the range of reactions that occur in real reaction databases. We have designed a challenging test set of reactions and are making it publicly available and usable with InfoChem's software or other classification algorithms. We supply a representative set of example reactions, grouped into different levels of difficulty, from a large number of reaction databases that chemists actually encounter in practice, in order to demonstrate the basic requirements for a mapping algorithm to detect the reaction centers in a consistent way. We invite the scientific community to contribute to the future extension and improvement of this data set, to achieve the goal of a common standard.

  3. Robotic Follow Algorithm

    SciTech Connect

    2005-03-30

    The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as other tracking methods such as radio frequency tags.

  4. Weldon Spring historical dose estimate

    SciTech Connect

    Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.

    1986-07-01

    This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr.

  5. Technical basis for dose reconstruction

    SciTech Connect

    Anspaugh, L.R.

    1996-01-31

    The purpose of this paper is to consider two general topics: technical considerations of why dose-reconstruction studies should or should not be performed and methods of dose reconstruction. The first topic is of general and growing interest as the number of dose-reconstruction studies increases, and one asks the question whether it is necessary to perform a dose reconstruction for virtually every site at which, for example, the Department of Energy (DOE) has operated a nuclear-related facility. And there is the broader question of how one might logically draw the line at performing or not performing dose-reconstruction (radiological and chemical) studies for virtually every industrial complex in the entire country. The second question is also of general interest. There is no single correct way to perform a dose-reconstruction study, and it is important not to follow blindly a single method to the point that cheaper, faster, more accurate, and more transparent methods might not be developed and applied.

  6. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents, and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low environmentally-relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum.

  7. Development of computer algorithms for radiation treatment planning.

    PubMed

    Cunningham, J R

    1989-06-01

    As a result of an analysis of data relating tissue response to radiation absorbed dose, the ICRU has recommended a target accuracy of +/- 5% for dose delivery in radiation therapy. This is a difficult overall objective to achieve because of the many steps that make up a course of radiotherapy. The calculation of absorbed dose is only one of the steps, and so to achieve an overall accuracy of better than +/- 5% the accuracy in dose calculation must be better yet. The physics behind the problem is sufficiently complicated that no exact method of calculation has been found, and consequently approximate solutions must be used. The development of computer algorithms for this task involves the search for better and better approximate solutions. To achieve the desired target of accuracy a fairly sophisticated calculation procedure must be used. Only when this is done can we hope to further improve our knowledge of the way in which tissues respond to radiation treatments.

  8. SU-E-T-793: Validation of COMPASS 3D Dosimetry as Pre Treatment Verification with Commercial TPS Algorithms

    SciTech Connect

    Vikraman, S; Ramu, M; Karrthick, Kp; Rajesh, T; Senniandavar, V; Sambasivaselli, R; Maragathaveni, S; Dhivya, N; Tejinder, K; Manigandan, D; Muthukumaran, M

    2015-06-15

    Purpose: The purpose of this study was to validate COMPASS 3D dosimetry as a routine pretreatment verification tool against the commercially available CMS Monaco and Oncentra Masterplan planning systems. Methods: Twenty esophagus patients were selected for this study. All of these patients underwent radical VMAT treatment on an Elekta linac, with plans generated in Monaco v5.0 using the Monte Carlo (MC) dose calculation algorithm. COMPASS 3D dosimetry comprises an advanced collapsed cone convolution (CCC) dose calculation algorithm. To validate the CCC algorithm in COMPASS, the DICOM RT plans generated using the Monaco MC algorithm were transferred to the Oncentra Masterplan v4.3 TPS, where only final dose calculations were performed using the CCC algorithm, without optimization. Since MC is an accurate algorithm, some difference between MC and CCC is expected; the CCC implementation in COMPASS should therefore be validated against another commercially available CCC algorithm. To use CCC as a pretreatment verification tool with reference to MC-generated treatment plans, CCC in OMP and CCC in COMPASS were compared using dose-volume-based indices such as D98 and D95 for target volumes and OAR doses. Results: The point doses for open beams were within 1% of the Monaco MC algorithm. Comparison of CCC (OMP) versus CCC (COMPASS) showed mean differences of 1.82% ± 1.12 SD and 1.65% ± 0.67 SD for D98 and D95, respectively, for target coverage. A maximum point dose difference of −2.15% ± 0.60 SD was observed in the target volume. A mean lung dose difference of −2.68% ± 1.67 SD was noticed between OMP and COMPASS. The maximum point dose difference for the spinal cord was −1.82% ± 0.287 SD. Conclusion: In this study, the accuracy of the CCC algorithm in COMPASS 3D dosimetry was validated by comparison with the CCC algorithm in the OMP TPS. Dose calculation in COMPASS agrees to within 2% with commercially available TPS algorithms.

  9. Fast autodidactic adaptive equalization algorithms

    NASA Astrophysics Data System (ADS)

    Hilal, Katia

    Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, using an adaptive stochastic gradient Bussgang-type algorithm, is given for deducing two low-computation-cost algorithms: one equivalent to the initial algorithm, and the other having improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-directed algorithm. Using a normalization procedure, and block normalization, their performances are improved and their common points evaluated. These common points are used to propose an algorithm retaining the advantages of the two initial algorithms; it thus inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-directed algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the Godard algorithms, initial and normalized. Simulation of these algorithms, carried out in a mobile radio communications context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for processing relative to the initial algorithms. The improvement in residual error was much smaller. These performances are close to making possible the use of autodidactic equalization in mobile radio systems.

  10. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data

  11. Dose impact in radiographic lung injury following lung SBRT: Statistical analysis and geometric interpretation

    SciTech Connect

    Yu, Victoria; Kishan, Amar U.; Cao, Minsong; Low, Daniel; Lee, Percy; Ruan, Dan

    2014-03-15

    Purpose: To demonstrate a new method of evaluating the dose response of treatment-induced radiographic lung injury following SBRT (stereotactic body radiotherapy) and the discovery of bimodal dose behavior within clinically identified injury volumes. Methods: Follow-up CT scans at 3, 6, and 12 months were acquired from 24 patients treated with SBRT for stage-1 primary lung cancers or oligometastatic lesions. Injury regions in these scans were propagated to the planning CT coordinates by performing deformable registration of the follow-ups to the planning CTs. A bimodal behavior was repeatedly observed from the probability distribution for dose values within the deformed injury regions. Based on a mixture-Gaussian assumption, an Expectation-Maximization (EM) algorithm was used to obtain characteristic parameters for such distribution. Geometric analysis was performed to interpret such parameters and infer the critical dose level that is potentially inductive of post-SBRT lung injury. Results: The Gaussian mixture obtained from the EM algorithm closely approximates the empirical dose histogram within the injury volume with good consistency. The average Kullback-Leibler divergence values between the empirical differential dose volume histogram and the EM-obtained Gaussian mixture distribution were calculated to be 0.069, 0.063, and 0.092 for the 3, 6, and 12 month follow-up groups, respectively. The lower Gaussian component was located at approximately 70% of the prescription dose (35 Gy) for all three follow-up time points. The higher Gaussian component, contributed by the dose received by the planning target volume, was located at around 107% of the prescription dose. Geometrical analysis suggests the mean of the lower Gaussian component, located at 35 Gy, as a possible indicator for a critical dose that induces lung injury after SBRT. Conclusions: An innovative and improved method for analyzing the correspondence between lung radiographic injury and SBRT treatment dose has
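
    Fitting a two-component Gaussian mixture to within-injury dose values takes a few lines with a standard EM implementation; a sketch using scikit-learn on synthetic samples (the numbers are invented to mimic the reported ~35 Gy and ~107%-of-prescription modes):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # synthetic within-injury dose samples: a low mode near 35 Gy, a high mode near 53.5 Gy
        doses = np.concatenate([rng.normal(35.0, 4.0, 600),
                                rng.normal(53.5, 2.5, 400)]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(doses)   # EM under the hood
        low_mode = float(gmm.means_.min())    # candidate critical dose for injury induction
        high_mode = float(gmm.means_.max())   # ~dose received by the planning target volume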

  12. SU-E-T-371: Evaluating the Convolution Algorithm of a Commercially Available Radiosurgery Irradiator Using a Novel Phantom

    SciTech Connect

    Cates, J; Drzymala, R

    2015-06-15

    Purpose: The purpose of this study was to develop and use a novel phantom to evaluate the accuracy and usefulness of the Leksell Gamma Plan convolution-based dose calculation algorithm compared with the current TMR10 algorithm. Methods: A novel phantom was designed to fit the Leksell Gamma Knife G Frame which could accommodate various materials in the form of one inch diameter, cylindrical plugs. The plugs were split axially to allow EBT2 film placement. Film measurements were made during two experiments. The first utilized plans generated on a homogeneous acrylic phantom setup using the TMR10 algorithm, with various materials inserted into the phantom during film irradiation to assess the effect on delivered dose due to unplanned heterogeneities upstream in the beam path. The second experiment utilized plans made on CT scans of different heterogeneous setups, with one plan using the TMR10 dose calculation algorithm and the second using the convolution-based algorithm. Materials used to introduce heterogeneities included air, LDPE, polystyrene, Delrin, Teflon, and aluminum. Results: The data show that, as would be expected, having heterogeneities in the beam path does induce dose delivery error when using the TMR10 algorithm, with the largest errors being due to the heterogeneities with electron densities most different from that of water, i.e. air, Teflon, and aluminum. Additionally, the convolution algorithm did account for the heterogeneous material and provided a more accurate predicted dose, in extreme cases up to a 7–12% improvement over the TMR10 algorithm. The convolution algorithm expected dose was accurate to within 3% in all cases. Conclusion: This study proves that the convolution algorithm is an improvement over the TMR10 algorithm when heterogeneities are present. More work is needed to determine what the heterogeneity size/volume limits are where this improvement exists, and in what clinical and/or research cases this would be relevant.

  13. Poster — Thur Eve — 27: Flattening Filter Free VMAT Quality Assurance: Dose Rate Considerations for Detector Response

    SciTech Connect

    Viel, Francis; Duzenli, Cheryl; Camborde, Marie-Laure; Strgar, Vincent; Horwood, Ron; Atwal, Parmveer; Gete, Ermias; Karan, Tania

    2014-08-15

    Introduction: Radiation detector responses can be affected by dose rate. Due to the higher dose per pulse and wider range of MU rates in FFF beams, detector responses should be characterized prior to implementation of QA protocols for FFF beams. During VMAT delivery, the MU rate may also vary dramatically within a treatment fraction. This study looks at the dose per pulse variation throughout a 3D volume for typical VMAT plans and the response characteristics for a variety of detectors, and makes recommendations on the design of QA protocols for FFF VMAT QA. Materials and Methods: Linac log file data and a simplified dose calculation algorithm are used to calculate dose per pulse for a variety of clinical VMAT plans, on a voxel by voxel basis, as a function of time in a cylindrical phantom. Diode and ion chamber array responses are characterized over the relevant range of dose per pulse and dose rate. Results: Dose per pulse ranges from <0.1 mGy/pulse to 1.5 mGy/pulse in a typical VMAT treatment delivery using the 10XFFF beam. Diode detector arrays demonstrate increased sensitivity to dose (±3%) with increasing dose per pulse over this range. Ion chamber arrays demonstrate decreased sensitivity to dose (±1%) with increasing dose rate over this range. Conclusions: QA protocols should be designed taking into consideration inherent changes in detector sensitivity with dose rate. Neglecting to account for changes in detector response with dose per pulse can lead to skewed QA results.

  14. A comparison of Monte Carlo dose calculation denoising techniques

    NASA Astrophysics Data System (ADS)

    El Naqa, I.; Kawrakow, I.; Fippel, M.; Siebers, J. V.; Lindsay, P. E.; Wickerhauser, M. V.; Vicic, M.; Zakarian, K.; Kauffmann, N.; Deasy, J. O.

    2005-03-01

    Recent studies have demonstrated that Monte Carlo (MC) denoising techniques can reduce MC radiotherapy dose computation time significantly by preferentially eliminating statistical fluctuations ('noise') through smoothing. In this study, we compare new and previously published approaches to MC denoising, including 3D wavelet threshold denoising with sub-band adaptive thresholding, content adaptive mean-median-hybrid (CAMH) filtering, locally adaptive Savitzky-Golay curve-fitting (LASG), anisotropic diffusion (AD) and an iterative reduction of noise (IRON) method formulated as an optimization problem. Several challenging phantom and computed-tomography-based MC dose distributions with varying levels of noise formed the test set. Denoising effectiveness was measured in three ways: by improvements in the mean-square-error (MSE) with respect to a reference (low noise) dose distribution; by the maximum difference from the reference distribution and by the 'Van Dyk' pass/fail criteria of either adequate agreement with the reference image in low-gradient regions (within 2% in our case) or, in high-gradient regions, a distance-to-agreement-within-2% of less than 2 mm. Results varied significantly based on the dose test case: greater reductions in MSE were observed for the relatively smoother phantom-based dose distribution (up to a factor of 16 for the LASG algorithm); smaller reductions were seen for an intensity modulated radiation therapy (IMRT) head and neck case (typically, factors of 2-4). Although several algorithms reduced statistical noise for all test geometries, the LASG method had the best MSE reduction for three of the four test geometries, and performed the best for the Van Dyk criteria. However, the wavelet thresholding method performed better for the head and neck IMRT geometry and also decreased the maximum error more effectively than LASG. In almost all cases, the evaluated methods provided acceleration of MC results towards statistically more accurate
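
    Several of the compared methods build on standard smoothing primitives; for instance, Savitzky-Golay curve fitting is one line with SciPy (the fixed-window form shown here, whereas LASG adapts the window locally; the noisy profile below is synthetic):

        import numpy as np
        from scipy.signal import savgol_filter

        x = np.linspace(0.0, 10.0, 200)
        noisy_dose = np.exp(-0.1 * x) + np.random.default_rng(0).normal(0.0, 0.02, x.size)

        # fit a cubic polynomial in a sliding 11-sample window; LASG would vary the window size
        smoothed = savgol_filter(noisy_dose, window_length=11, polyorder=3)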

  15. Monte Carlo dose calculations for phantoms with hip prostheses

    NASA Astrophysics Data System (ADS)

    Bazalova, M.; Coolens, C.; Cury, F.; Childs, P.; Beaulieu, L.; Verhaegen, F.

    2008-02-01

    Computed tomography (CT) images of patients with hip prostheses are severely degraded by metal streaking artefacts. The low image quality makes organ contouring more difficult and can result in large dose calculation errors when Monte Carlo (MC) techniques are used. In this work, the extent of streaking artefacts produced by three common hip prosthesis materials (Ti-alloy, stainless steel, and Co-Cr-Mo alloy) was studied. The prostheses were tested in a hypothetical prostate treatment with five 18 MV photon beams. The dose distributions for unilateral and bilateral prosthesis phantoms were calculated with the EGSnrc/DOSXYZnrc MC code. This was done in three phantom geometries: in the exact geometry, in the original CT geometry, and in an artefact-corrected geometry. The artefact-corrected geometry was created using a modified filtered back-projection correction technique. It was found that unilateral prosthesis phantoms do not show large dose calculation errors, as long as the beams miss the artefact-affected volume. This is possible to achieve in the case of unilateral prosthesis phantoms (except for the Co-Cr-Mo prosthesis which gives a 3% error) but not in the case of bilateral prosthesis phantoms. The largest dose discrepancies were obtained for the bilateral Co-Cr-Mo hip prosthesis phantom, up to 11% in some voxels within the prostate. The artefact correction algorithm worked well for all phantoms and resulted in dose calculation errors below 2%. In conclusion, a MC treatment plan should include an artefact correction algorithm when treating patients with hip prostheses.

  16. Superresolution direction finding algorithms for the characterisation of multi-moded HF signals

    NASA Astrophysics Data System (ADS)

    Zatman, M. A.; Strangeways, H. J.

    1994-07-01

    In this work three superresolution direction finding algorithms for use with arbitrary array geometries are described, all of which are capable of dealing with coherent signals, as typically found in the multi-path environment. DOSE is a quick single-snapshot algorithm developed by the authors which operates on the received data vector. The IMP and Maximum Likelihood (ML) algorithms are used to process time-averaged data covariance matrices. All three algorithms attempt in different ways to find a multi-dimensional solution to the inverse problem of direction finding. It is shown that calibration of arrays of identical elements is simplified by the use of these multi-dimensional algorithms when compared to one-dimensional methods such as MUSIC. A common methodology for the implementation of these multi-dimensional algorithms is presented, and a quick method of implementing the ML algorithm and determining the number of signals present is described.

  17. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
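    A minimal sketch of the hybrid idea, in which each offspring produced by the genetic operators is refined by a short hill-climbing local search; the test function, operators, and rates are arbitrary illustrative choices, not the geometric model matching application from the talk.

        import math
        import random

        def fitness(x):                                   # bumpy 1D function to maximize
            return math.sin(5 * x) + math.cos(2 * x) - 0.1 * x * x

        def local_search(x, step=0.05, iters=20):         # greedy hill climbing
            for _ in range(iters):
                cand = x + random.uniform(-step, step)
                if fitness(cand) > fitness(x):
                    x = cand
            return x

        random.seed(7)
        pop = [random.uniform(-5, 5) for _ in range(30)]
        for _ in range(50):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:10]                            # truncation selection
            children = []
            for _ in range(20):
                a, b = random.sample(parents, 2)
                child = 0.5 * (a + b) + random.gauss(0, 0.2)   # crossover + mutation
                children.append(local_search(child))      # the hybrid (memetic) step
            pop = parents + children

        best = max(pop, key=fitness)
        print(f"best x = {best:.3f}, fitness = {fitness(best):.3f}")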

  18. A Generalized QMRA Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-01

    Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, Kmin, is not fixed but is a random variable following a geometric distribution with parameter r* (0 < r* ≤ 1); the single-hit model is recovered as the special case r* = 1 of this generalized dose-response mechanism. Since a maximum likelihood solution is not easily available, a likelihood-free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median r* estimates produced fall short of meeting the required condition of r* = 1 for the single-hit assumption. However, three out of four data sets fitted by the generalized models could not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single-hit assumption for characterizing the dose-response process may not be appropriate, but that the more complex models may be difficult to support, especially if the sample size is small. The three-parameter generalized model makes it possible to investigate the mechanism of a dose-response process in greater detail than is possible under a single-hit model.
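    For reference, the single-hit special case (r* = 1) reduces to the standard beta-Poisson curve PI(d) = 1 - (1 + d/β)^(-α); the short example below evaluates it for made-up parameter values, not fitted values from the paper.

        import numpy as np

        def beta_poisson(dose, alpha, beta):
            """Standard beta-Poisson probability of infection at mean dose `dose`."""
            return 1.0 - (1.0 + np.asarray(dose, dtype=float) / beta) ** (-alpha)

        # Illustrative parameters only
        for d in (1.0, 10.0, 100.0, 1000.0):
            print(f"dose {d:7.1f}: P(infection) = {beta_poisson(d, 0.25, 40.0):.3f}")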

  19. Generation of Composite Dose and Biological Effective Dose (BED) Over Multiple Treatment Modalities and Multistage Planning Using Deformable Image Registration

    SciTech Connect

    Zhang, Geoffrey Huang, T-C; Feygelman, Vladimir; Stevens, Craig; Forster, Kenneth

    2010-07-01

    Currently there are no commercially available tools to generate composite plans across different treatment modalities and/or different planning image sets. Without a composite plan, it may be difficult to perform a meaningful dosimetric evaluation of the overall treatment course. In this paper, we introduce a method to generate composite biological effective dose (BED) plans over multiple radiotherapy treatment modalities and/or multistage plans, using deformable image registration. Two cases were used to demonstrate the method. Case I was prostate cancer treated with intensity-modulated radiation therapy (IMRT) and a permanent seed implant. Case II involved lung cancer treated with two treatment plans generated on two separate computed tomography image sets. Thin-plate spline or optical flow methods were used as appropriate to generate deformation matrices. The deformation matrices were then applied to the dose matrices and the resulting physical doses were converted to BED and added to yield the composite plan. Cell proliferation and sublethal repair were considered in the BED calculations. The difference in BED between normal tissues and tumor volumes was accounted for by using different BED models, α/β values, and cell potential doubling times. The method to generate composite BED plans presented in this paper provides information not available with the traditional simple dose summation or physical dose summation. With the understanding of limitations and uncertainties of the algorithms involved, it may be valuable for the overall treatment plan evaluation.
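    A minimal sketch of the voxelwise composite step, assuming both dose grids have already been deformed onto a common frame: each course is converted to BED with the basic linear-quadratic formula BED = nd(1 + d/(α/β)) and the grids are summed. The repair and proliferation corrections described in the paper are omitted here, and the grids and fractionations are toy values.

        import numpy as np

        def bed(total_dose, n_fractions, alpha_beta):
            """Basic LQ biological effective dose per voxel (no repair/proliferation)."""
            d = total_dose / n_fractions                  # dose per fraction
            return total_dose * (1.0 + d / alpha_beta)

        # Toy 2D dose grids (Gy) already mapped onto the same anatomy
        course1 = np.full((4, 4), 50.0)                   # e.g. 25 x 2 Gy external beam
        course2 = np.full((4, 4), 20.0)                   # e.g. 10 x 2 Gy boost
        composite = bed(course1, 25, 3.0) + bed(course2, 10, 3.0)
        print(composite[0, 0])                            # 83.3 + 33.3 = 116.7 Gy(3)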

  20. Extrapolation of the dna fragment-size distribution after high-dose irradiation to predict effects at low doses

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Cucinotta, F. A.; Sachs, R. K.; Brenner, D. J.; Peterson, L. E.

    2001-01-01

    The patterns of DSBs induced in the genome are different for sparsely and densely ionizing radiations: In the former case, the patterns are well described by a random-breakage model; in the latter, a more sophisticated tool is needed. We used a Monte Carlo algorithm with a random-walk geometry of chromatin, and a track structure defined by the radial distribution of energy deposition from an incident ion, to fit the PFGE data for fragment-size distribution after high-dose irradiation. These fits determined the unknown parameters of the model, enabling the extrapolation of data for high-dose irradiation to the low doses that are relevant for NASA space radiation research. The randomly-located-clusters formalism was used to speed the simulations. It was shown that only one adjustable parameter, Q, the track efficiency parameter, was necessary to predict DNA fragment sizes for wide ranges of doses. This parameter was determined for a variety of radiations and LETs and was used to predict the DSB patterns at the HPRT locus of the human X chromosome after low-dose irradiation. It was found that high-LET radiation would be more likely than low-LET radiation to induce additional DSBs within the HPRT gene if this gene already contained one DSB.

  1. A constrained tracking algorithm to optimize plug patterns in multiple isocenter Gamma Knife radiosurgery planning

    SciTech Connect

    Li Kaile; Ma Lijun

    2005-10-15

    We developed a source blocking optimization algorithm for Gamma Knife radiosurgery, which is based on tracking individual source contributions to arbitrarily shaped target and critical structure volumes. A scalar objective function and a direct search algorithm were used to produce near real-time calculation results. The algorithm allows the user to set and vary the total number of plugs for each shot to limit the total beam-on time. We implemented and tested the algorithm for several multiple-isocenter Gamma Knife cases. It was found that the use of a limited number of plugs significantly lowered the integral dose to critical structures such as the optic chiasm in pituitary adenoma cases. The main effect of the source blocking is the faster dose falloff in the junction area between the target and the critical structure. In summary, we demonstrated a useful source-plugging algorithm for improving complex multi-isocenter Gamma Knife treatment planning cases.

  2. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.
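    The final-image combination can be pictured as below, where a smoothed stand-in supplies the gray-level component and an unsharp-mask residual supplies the high-frequency component; in the actual method both components come from few-iteration IIR reconstructions, so this is only a structural sketch with invented filters and weights.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(2)
        recon = rng.poisson(100.0, size=(64, 64)).astype(float)   # noisy stand-in image

        gray = ndimage.gaussian_filter(recon, sigma=2.0)          # gray-level component
        high = recon - ndimage.gaussian_filter(recon, sigma=1.0)  # high-frequency component

        w = 0.6                                  # tunable sharpness weight
        final = gray + w * high                  # linear combination of the two images
        print(final.shape, round(final.mean(), 1))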

  3. Selective source blocking for Gamma Knife radiosurgery of trigeminal neuralgia based on analytical dose modelling

    NASA Astrophysics Data System (ADS)

    Li, Kaile; Ma, Lijun

    2004-08-01

    We have developed an automatic critical region shielding (ACRS) algorithm for Gamma Knife radiosurgery of trigeminal neuralgia. The algorithm selectively blocks 201 Gamma Knife sources to minimize the dose to the brainstem while irradiating the root entry area of the trigeminal nerve with 70-90 Gy. An independent dose model was developed to implement the algorithm. The accuracy of the dose model was tested and validated via comparison with the Leksell GammaPlan (LGP) calculations. Agreements of 3% or 3 mm in isodose distributions were found for both single-shot and multiple-shot treatment plans. After the optimized blocking patterns are obtained via the independent dose model, they are imported into the LGP for final dose calculations and treatment planning analyses. We found that the use of a moderate number of source plugs (30-50 plugs) significantly lowered (~40%) the dose to the brainstem for trigeminal neuralgia treatments. Considering the small effort involved in using these plugs, we recommend source blocking for all trigeminal neuralgia treatments with Gamma Knife radiosurgery.

  4. Effects of energy spectrum on dose distribution calculations for high energy electron beams.

    PubMed

    Toutaoui, Abdelkader; Khelassi-Toutaoui, Nadia; Brahimi, Zakia; Chami, Ahmed Chafik

    2009-01-01

    In an earlier work, we demonstrated the possibility of using Monte Carlo generated pencil beams for 3D electron beam dose calculations. However, in that model the electron beam was considered monoenergetic and the effects of the energy spectrum were taken into account by correction factors, derived from measuring central-axis depth dose curves. In the present model, the electron beam is considered polyenergetic and the pencil beam distribution of a clinical electron beam, of a given nominal energy, is represented as a linear combination of Monte Carlo monoenergetic pencil beams. The coefficients of the linear combination describe the energy spectrum of the clinical electron beam, and are chosen to provide the best fit between the calculated and measured central axis depth dose, in water. The energy spectrum is determined by the constrained least square method. The angular distribution of the clinical electron beam is determined by in-air penumbra measurements. The predictions of this algorithm agree very well with the measurements in the region near the surface, and the discrepancies between the measured and calculated dose distributions, behind 3D heterogeneities, are reduced to less than 10%. We have demonstrated a new algorithm for 3D electron beam dose calculations, which takes into account the energy spectra. Results indicate that the use of this algorithm leads to a better modeling of dose distributions downstream from complex heterogeneities.
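    The spectrum-determination step can be sketched as a non-negative (constrained) least-squares fit of monoenergetic depth-dose curves to a measured curve; the toy depth-dose shapes and energies below are invented stand-ins for the Monte Carlo pencil-beam data.

        import numpy as np
        from scipy.optimize import nnls

        depth = np.linspace(0.0, 8.0, 80)                 # cm, water
        energies = [6.0, 9.0, 12.0, 15.0]                 # MeV (nominal components)

        def mono_depth_dose(E):                           # crude monoenergetic shape
            practical_range = E / 2.0                     # ~E/2 cm rule of thumb
            return np.clip(1.0 - (depth / practical_range) ** 3, 0.0, None)

        A = np.column_stack([mono_depth_dose(E) for E in energies])
        true_w = np.array([0.10, 0.20, 0.60, 0.10])       # "unknown" spectrum
        measured = A @ true_w + 0.01 * np.random.default_rng(3).standard_normal(depth.size)

        w, _ = nnls(A, measured)                          # weights constrained >= 0
        print("recovered spectrum weights:", np.round(w / w.sum(), 2))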

  6. A MEDLINE categorization algorithm

    PubMed Central

    Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit

    2006-01-01

    Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE, are: information science, organization and administration, and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms
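    The inference step can be pictured with a toy link table: each MeSH term or subheading maps to manually curated metaterms, and the specialties for a set of articles are ranked by how often their metaterms are inferred. The table below is a tiny invented stand-in for the curated CISMeF links.

        from collections import Counter

        # Invented MeSH term/subheading -> metaterm links (curated by librarians)
        links = {
            "Anticoagulants": ["hematology", "pharmacology"],
            "Algorithms": ["medical informatics"],
            "Radiotherapy Dosage": ["oncology", "radiology"],
        }

        articles = [
            {"mesh": ["Anticoagulants", "Algorithms"]},
            {"mesh": ["Radiotherapy Dosage", "Algorithms"]},
        ]

        counts = Counter(meta for art in articles
                         for term in art["mesh"]
                         for meta in links.get(term, []))
        for specialty, n in counts.most_common():         # ranked list of specialties
            print(f"{specialty}: {n}")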

  7. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  8. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  9. Peripheral doses from pediatric IMRT

    SciTech Connect

    Klein, Eric E.; Maserang, Beth; Wood, Roy; Mansur, David

    2006-07-15

    Peripheral dose (PD) data exist for conventional fields (≥10 cm) and intensity-modulated radiotherapy (IMRT) delivery to standard adult-sized phantoms. Pediatric peripheral dose reports are limited to conventional therapy and are model based. Our goal was to ascertain whether data acquired from full phantom studies and/or pediatric models, with IMRT treatment times, could predict Organ at Risk (OAR) dose for pediatric IMRT. As monitor units (MUs) are greater for IMRT, it is expected that IMRT PD will be higher, potentially compounded by decreased patient size (absorption). Baseline slab phantom peripheral dose measurements were conducted for very small field sizes (from 2 to 10 cm). Data were collected at distances ranging from 5 to 72 cm away from the field edges. Collimation was either with the collimating jaws or the multileaf collimator (MLC) oriented either perpendicular or along the peripheral dose measurement plane. For the clinical tests, five patients with intracranial or base of skull lesions were chosen. IMRT and conventional three-dimensional (3D) plans for the same patient/target/dose (180 cGy) were optimized without limitation to the number of fields or wedge use. Six MV, 120-leaf MLC Varian axial beams were used. A phantom mimicking a 3-year-old was configured per Centers for Disease Control data. Micro (0.125 cc) and cylindrical (0.6 cc) ionization chambers were used for the thyroid, breast, ovaries, and testes. The PD was recorded by electrometers set to the 10⁻¹⁰ scale. Each system set was uniquely calibrated. For the slab phantom studies, close peripheral points were found to have a higher dose for low energy and larger field size and when MLC was not deployed. For points more distant from the field edge, the PD was higher for high-energy beams. MLC orientation was found to be inconsequential for the small fields tested. The thyroid dose was lower for IMRT delivery than that predicted for conventional (ratio of IMRT/conventional ranged

  10. Effect of Breathing Motion on Radiotherapy Dose Accumulation in the Abdomen Using Deformable Registration

    SciTech Connect

    Velec, Michael; Moseley, Joanne L.; Eccles, Cynthia L.; Craig, Tim; Sharpe, Michael B.; Dawson, Laura A.; Brock, Kristy K.

    2011-05-01

    Purpose: To investigate the effect of breathing motion and dose accumulation on the planned radiotherapy dose to liver tumors and normal tissues using deformable image registration. Methods and Materials: Twenty-one free-breathing stereotactic liver cancer radiotherapy patients, planned on static exhale computed tomography (CT) for 27-60 Gy in six fractions, were included. A biomechanical model-based deformable image registration algorithm retrospectively deformed each exhale CT to inhale CT. This deformation map was combined with exhale and inhale dose grids from the treatment planning system to accumulate dose over the breathing cycle. Accumulation was also investigated using a simple rigid liver-to-liver registration. Changes to tumor and normal tissue dose were quantified. Results: Relative to static plans, mean dose change (range) after deformable dose accumulation (as % of prescription dose) was -1 (-14 to 8) to minimum tumor, -4 (-15 to 0) to maximum bowel, -4 (-25 to 1) to maximum duodenum, 2 (-1 to 9) to maximum esophagus, -2 (-13 to 4) to maximum stomach, 0 (-3 to 4) to mean liver, and -1 (-5 to 1) and -2 (-7 to 1) to mean left and right kidneys. Compared to deformable registration, rigid modeling had changes up to 8% to minimum tumor and 7% to maximum normal tissues. Conclusion: Deformable registration and dose accumulation revealed potentially significant dose changes to either a tumor or normal tissue in the majority of cases as a result of breathing motion. These changes may not be accurately accounted for with rigid motion.
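    The accumulation step can be sketched as follows: a displacement field maps exhale-grid voxels to their inhale positions, the inhale-phase dose is interpolated there, and the phase doses are combined. A uniform 3-voxel shift stands in for the biomechanical deformation field, and equal phase weighting is an arbitrary choice.

        import numpy as np
        from scipy.ndimage import map_coordinates

        rng = np.random.default_rng(4)
        exhale_dose = rng.random((32, 32, 32)) * 2.0      # toy per-fraction dose (Gy)
        inhale_dose = np.roll(exhale_dose, 3, axis=2)     # toy inhale-phase dose

        zz, yy, xx = np.meshgrid(*(np.arange(32),) * 3, indexing="ij")
        dvf = np.zeros((3, 32, 32, 32))
        dvf[2] = 3.0                                      # exhale -> inhale displacement

        coords = np.array([zz, yy, xx], dtype=float) + dvf
        warped = map_coordinates(inhale_dose, coords, order=1)   # inhale dose on exhale grid

        accumulated = 0.5 * exhale_dose + 0.5 * warped    # equal-weight phase combination
        interior = np.abs(warped - exhale_dose)[:, :, :29]
        print(f"max |warped - exhale| away from boundary: {interior.max():.3f}")   # ~0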

  11. A design of a DICOM-RT-based tool box for nonrigid 4D dose calculation.

    PubMed

    Wong, Victy Y W; Baker, Colin R; Leung, T W; Tung, Stewart Y

    2016-03-08

    This study introduces the design of a DICOM-RT-based tool box that facilitates 4D dose calculation based on deformable voxel-dose registration. The computational structure and the calculation algorithm of the tool box are discussed explicitly. The tool box was written in MATLAB in conjunction with CERR. It consists of five main functions which allow a) importation of a DICOM-RT-based 3D dose plan, b) deformable image registration, c) tracking of voxel doses along the breathing cycle, d) presentation of the temporal dose distribution at different time phases, and e) derivation of the 4D dose. The efficacy of the tool box for clinical application was verified retrospectively with nine clinical cases. Its logic and robustness were tested with 27 applications, all successful with no computational errors encountered. In the study, the accumulated dose coverage was assessed as a function of the planning CT taken at end-inhale, end-exhale, and mean tumor position. The results indicated that the majority of cases (67%) achieved maximum target coverage when the planning CT was taken at the temporal mean tumor position, and 56% at the end-exhale position. Results comparable to the literature imply that the studied tool box is reliable for 4D dose calculation. The authors suggest that, with proper application, 4D dose calculation using deformable registration can provide better dose evaluation for treatments with moving targets.

  12. MO-G-18A-01: Radiation Dose Reducing Strategies in CT, Fluoroscopy and Radiography

    SciTech Connect

    Mahesh, M; Gingold, E; Jones, A

    2014-06-15

    Advances in medical x-ray imaging have provided significant benefits to patient care. According to NCRP 160, there are more than 400 million x-ray procedures performed annually in the United States alone, which contribute to nearly half of all the radiation exposure to the US population. Similar growth trends in medical x-ray imaging are observed worldwide. The apparent increase in the number of medical x-ray imaging procedures and new protocols, and the associated radiation dose and risk, has drawn considerable attention. This has led to a number of technological innovations such as tube current modulation, iterative reconstruction algorithms, dose alerts, dose displays, flat panel digital detectors, high-efficiency digital detectors, storage phosphor radiography, variable filters, etc. that are enabling users to acquire medical x-ray images at a much lower radiation dose. Along with these, there are a number of radiation dose optimization strategies that users can adopt to effectively lower radiation dose in medical x-ray procedures. The main objectives of this SAM course are to provide information on how to implement the various radiation dose optimization strategies in CT, Fluoroscopy and Radiography. Learning Objectives: To review the impact of technological advances on dose optimization in medical imaging. To identify radiation optimization strategies in computed tomography. To describe strategies for configuring fluoroscopic equipment that yields optimal images at reasonable radiation dose. To assess ways to configure digital radiography systems and recommend ways to improve image quality at optimal dose.

  13. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.

    1990-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Pacific Northwest Laboratory (PNL) under the direction of an independent Technical Steering Panel (TSP). The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates). The Source Terms Task develops estimates of radioactive emissions from Hanford facilities since 1944. The Environmental Transport Task reconstructs the movement of radioactive materials from the areas of release to populations. The Environmental Monitoring Data Task assembles, evaluates, and reports historical environmental monitoring data. The Demographics, Agriculture, Food Habits Task develops the data needed to identify the populations that could have been affected by the releases. In addition to population and demographic data, the food and water resources and consumption patterns for populations are estimated because they provide a primary pathway for the intake of radionuclides. The Environmental Pathways and Dose Estimates Task uses the information produced by the other tasks to estimate the radiation doses populations could have received from Hanford radiation. Project progress is documented in this monthly report, which is available to the public. 3 figs., 3 tabs.

  14. AGING FACILITY WORKER DOSE ASSESSMENT

    SciTech Connect

    R.L. Thacker

    2005-03-24

    The purpose of this calculation is to estimate radiation doses received by personnel working in the Aging Facility performing operations to transfer aging casks to the aging pads for thermal and logistical management, stage empty aging casks, and retrieve aging casks from the aging pads for further processing in other site facilities. Doses received by workers due to aging cask surveillance and maintenance operations are also included. The specific scope of work contained in this calculation covers both collective doses and individual worker group doses on an annual basis, and includes the contributions due to external and internal radiation from normal operation. There are no Category 1 event sequences associated with the Aging Facility (BSC 2004 [DIRS 167268], Section 7.2.1). The results of this calculation will be used to support the design of the Aging Facility and to provide occupational dose estimates for the License Application. The calculations contained in this document were developed by Environmental and Nuclear Engineering of the Design and Engineering Organization and are intended solely for the use of the Design and Engineering Organization in its work regarding facility operation. Yucca Mountain Project personnel from the Environmental and Nuclear Engineering should be consulted before use of the calculations for purposes other than those stated herein or use by individuals other than authorized personnel in Environmental and Nuclear Engineering.

  15. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S. M.; McMakin, A. H.

    1991-09-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation dose that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into five technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (i.e., dose estimates). The Source Terms Task develops estimates of radioactive emissions from Hanford facilities since 1944. The Environmental Transport Task reconstructs the movements of radioactive particles from the areas of release to populations. The Environmental Monitoring Data Task assembles, evaluates and reports historical environmental monitoring data. The Demographics, Agriculture and Food Habits Task develops the data needed to identify the populations that could have been affected by the releases. The Environmental Pathways and Dose Estimates Task uses the information derived from the other tasks to estimate the radiation doses individuals could have received from Hanford radiation. This document lists the progress on this project as of September 1991. 3 figs., 2 tabs.

  16. Comparison of pencil-beam, collapsed-cone and Monte-Carlo algorithms in radiotherapy treatment planning for 6-MV photons

    NASA Astrophysics Data System (ADS)

    Kim, Sung Jin; Kim, Sung Kyu; Kim, Dong Ho

    2015-07-01

    Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms, pencil beam (PB), collapsed cone (CC), and Monte-Carlo (MC), provided by our planning system were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated by using the PB, CC, and MC algorithms. Planning target volume (PTV) and organs at risk (OARs) delineations were performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3-0.5 cm computed tomography (CT) slices taken under normal respiration conditions. Intensity-modulated radiation therapy (IMRT) plans were calculated with the three algorithms for each patient. The plans were created on the Oncentra MasterPlan (PB and CC) and CMS Monaco (MC) treatment planning systems for 6 MV. The plans were compared in terms of the dose distribution in the target, the OAR volumes, and the monitor units (MUs). Furthermore, absolute dosimetry was measured using a three-dimensional diode array detector (ArcCHECK) to evaluate the dose differences in a homogeneous phantom. Comparing the dose distributions planned by using the PB, CC, and MC algorithms, the PB algorithm provided adequate coverage of the PTV. The MUs calculated using the PB algorithm were less than those calculated by using the CC and MC algorithms. The MC algorithm showed the highest accuracy in terms of the absolute dosimetry. Differences were found when comparing the calculation algorithms. The PB algorithm estimated higher doses for the target than the CC and the MC algorithms, and actually overestimated the dose compared with those calculated by using the CC and the MC algorithms. The MC algorithm showed better accuracy than the other algorithms.

  17. SU-E-T-616: Plan Quality Assessment of Both Treatment Planning System Dose and Measurement-Based 3D Reconstructed Dose in the Patient

    SciTech Connect

    Olch, A

    2015-06-15

    Purpose: Systematic radiotherapy plan quality assessment promotes quality improvement. Software tools can perform this analysis by applying site-specific structure dose metrics. The next step is to similarly evaluate the quality of the dose delivery. This study defines metrics for acceptable doses to targets and normal organs for a particular treatment site and scores each plan accordingly. The input can be the TPS or the measurement-based 3D patient dose. From this analysis, one can determine whether the delivered dose distribution to the patient receives a score which is comparable to the TPS plan score; otherwise replanning may be indicated. Methods: Eleven neuroblastoma patient plans were exported from Eclipse to the Quality Reports program. A scoring algorithm defined a score for each normal and target structure based on dose-volume parameters. Each plan was scored by this algorithm and the percentage of total possible points was obtained. Each plan also underwent IMRT QA measurements with a Mapcheck2 or ArcCheck. These measurements were input into the 3DVH program to compute the patient 3D dose distribution, which was analyzed using the same scoring algorithm as the TPS plan. Results: The mean quality score for the TPS plans was 75.37% (std dev = 14.15%) compared to 71.95% (std dev = 13.45%) for the 3DVH dose distribution. For 3/11 plans, the 3DVH-based quality score was higher than the TPS score, by 0.5 to 8.4 percentage points. For 8/11 plans, scores decreased based on IMRT QA measurements, by 1.2 to 18.6 points. Conclusion: Software was used to determine the degree to which the plan quality score differed between the TPS and measurement-based dose. Although the delivery score was generally in good agreement with the planned dose score, there were some that improved, while there was one plan whose delivered dose quality was significantly less than planned. This methodology helps evaluate both planned and delivered dose quality. Sun Nuclear Corporation has
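    A toy version of the scoring idea, in which each structure earns site-specific points when its dose-volume metric meets a goal and the plan score is the percentage of available points; the structures, goals, and point values below are invented, not the study's neuroblastoma metrics.

        metrics = [
            # (structure, metric, achieved value, passing test, points available)
            ("PTV",    "V95% (%)",       97.2, lambda v: v >= 95.0, 40),
            ("Cord",   "max dose (Gy)",  38.5, lambda v: v <= 45.0, 30),
            ("Kidney", "mean dose (Gy)", 16.0, lambda v: v <= 15.0, 30),
        ]

        earned = sum(pts for _, _, val, passes, pts in metrics if passes(val))
        total = sum(pts for *_, pts in metrics)
        print(f"plan quality score: {100.0 * earned / total:.1f}%")   # 70.0% here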

  18. Algorithms, games, and evolution

    PubMed Central

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-01-01

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
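    The update rule itself is compact; the sketch below runs MWUA in the standard "experts" setting, multiplying each weight by (1 + ε·payoff) and renormalizing, with random payoffs and one slightly superior expert as invented inputs.

        import numpy as np

        rng = np.random.default_rng(5)
        n_experts, eps, rounds = 4, 0.1, 2000
        w = np.ones(n_experts) / n_experts

        for _ in range(rounds):
            payoff = rng.uniform(-1.0, 1.0, n_experts)    # per-round payoffs
            payoff[0] += 0.2                              # expert 0 has a slight edge
            w *= 1.0 + eps * payoff                       # multiplicative weight update
            w /= w.sum()                                  # keep a probability vector

        print(np.round(w, 3))                             # mass concentrates on expert 0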

  19. CAVITY CONTROL ALGORITHM

    SciTech Connect

    Tomasz Plawski, J. Hovater

    2010-09-01

    A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.

  20. Irregular Applications: Architectures & Algorithms

    SciTech Connect

    Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    2012-02-06

    Irregular applications are characterized by irregular data structures, control and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, area specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.

  1. Basic cluster compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.

    1980-01-01

    Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.

  2. SPA: Solar Position Algorithm

    NASA Astrophysics Data System (ADS)

    Reda, Ibrahim; Andreas, Afshin

    2015-04-01

    The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.

  3. Algorithmic Complexity. Volume II.

    DTIC Science & Technology

    1982-06-01

    Only OCR fragments of this report's text survive. They mention giving an example and discussing inherent weaknesses and their causes; an "Electrical Network Analysis" passage in which Knuth is cited on applicability; and a passage on divide-and-conquer polynomial multiplication, noting that each of the 3 products of 2-coefficient polynomials can be found by repeated application of the 3-multiplication scheme using only 3 scalar multiplications. The fragment breaks off: "We now investigate the efficiency of the divide-and-conquer polynomial multiplication algorithm. Let M(n"
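    The 3-multiplication scheme the fragment refers to is Karatsuba's divide-and-conquer polynomial multiplication, which gives the recurrence M(n) = 3M(n/2) and hence M(n) = O(n^log2(3)). A minimal version, assuming coefficient lists of equal power-of-two length:

        def add(x, y): return [u + v for u, v in zip(x, y)]
        def sub(x, y): return [u - v for u, v in zip(x, y)]

        def karatsuba(a, b):
            """Multiply coefficient lists (lowest degree first); len(a) == len(b) == 2**k."""
            n = len(a)
            if n == 1:
                return [a[0] * b[0]]
            m = n // 2
            a0, a1, b0, b1 = a[:m], a[m:], b[:m], b[m:]
            p0 = karatsuba(a0, b0)                        # low-half product
            p2 = karatsuba(a1, b1)                        # high-half product
            mid = karatsuba(add(a0, a1), add(b0, b1))     # the third (and last) product
            p1 = sub(sub(mid, p0), p2)                    # cross terms, no extra multiply
            out = [0] * (2 * n - 1)
            for i, c in enumerate(p0): out[i] += c
            for i, c in enumerate(p1): out[i + m] += c
            for i, c in enumerate(p2): out[i + 2 * m] += c
            return out

        print(karatsuba([1, 2], [3, 4]))        # (1 + 2x)(3 + 4x) -> [3, 10, 8]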

  4. ARPANET Routing Algorithm Improvements

    DTIC Science & Technology

    1978-10-01

    McQuillan, J. M.; Rosen, E. C. Only OCR fragments of the abstract survive: "...this problem may persist for a very long time, causing extremely bad performance throughout the whole network (for instance, if w' reports that one of..." and "...[the] algorithm may naturally tend to oscillate between bad routing paths and become itself a major contributor to network congestion. These examples show..."

  5. Signal Processing Algorithms.

    DTIC Science & Technology

    1983-10-13

    Only OCR fragments survive: "...determining the solution using the Moore-Penrose inverse. An expression for the mean square error is derived [8,9]. The expression indicates that..." Two related papers are also listed: "An Iterative Algorithm for Finding the Minimum Eigenvalue of a Class of Symmetric Matrices," D. Fuhrmann and B. Liu, submitted to 1984 IEEE Int. Conf. Acoust., Speech, Sig. Proc.; and "Approximating the Eigenvectors of a Symmetric Toeplitz Matrix," D. Fuhrmann and B. Liu, 1983 Allerton Conf.

  6. SIMAS ADM XBT Algorithm

    DTIC Science & Technology

    2016-06-07

    XBT’s sound speed values instead of temperature values. Studies show that the sound speed at the surface in a specific location varies less than...be entered at the terminal in metric or English temperatures or sound speeds. The algorithm automatically determines which form each data point was... sound speeds. Leroy’s equation is used to derive sound speed from temperature or temperature from sound speed. The previous, current, and next months

  7. Adaptive continuous twisting algorithm

    NASA Astrophysics Data System (ADS)

    Moreno, Jaime A.; Negrete, Daniel Y.; Torres-González, Victor; Fridman, Leonid

    2016-09-01

    In this paper, an adaptive continuous twisting algorithm (ACTA) is presented. For the double integrator, ACTA produces a continuous control signal ensuring finite time convergence of the states to zero. Moreover, the control signal generated by ACTA compensates the Lipschitz perturbation in finite time, i.e. its value converges to the opposite value of the perturbation. ACTA also keeps its convergence properties even when the upper bound of the derivative of the perturbation exists but is unknown.

  8. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  9. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithm (GA) inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA performs best for the problem. We also stress the need for such a preprocessor both for quality (error) and for cost (complexity) in producing the solution. The preprocessor includes, as its first step, making use of all the information such as that of the nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we would do a consciously much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  10. The Hip Restoration Algorithm

    PubMed Central

    Stubbs, Allston Julius; Atilla, Halis Atil

    2016-01-01

    Summary Background Despite the rapid advancement of imaging and arthroscopic techniques about the hip joint, missed diagnoses are still common. Because the hip is a deep joint, localization of its symptoms is more difficult than for the shoulder or knee. Hip pathology is not easily isolated and is often related to intra- and extra-articular abnormalities. In light of these diagnostic challenges, we recommend an algorithmic approach to effectively diagnose and treat hip pain. Methods In this review, hip pain is evaluated from diagnosis to treatment in a clear decision model. First we discuss emergency hip situations, followed by the differentiation of intra- and extra-articular causes of hip pain. We differentiate intra-articular hip pain as arthritic or non-arthritic, and extra-articular pain as generated by surrounding or remote tissue. Further, extra-articular hip pain is evaluated according to pain location. Finally we summarize the surgical treatment approach with an algorithmic diagram. Conclusion Diagnosis of hip pathology is difficult because the etiologies of pain are varied. An algorithmic approach to hip restoration, from diagnosis to rehabilitation, is crucial to successfully identify and manage hip pathologies. Level of evidence: V. PMID:28066734

  11. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus is not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has the shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically.

  12. Large scale tracking algorithms

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  13. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  14. Radiation Dose from Reentrant Electrons

    NASA Technical Reports Server (NTRS)

    Badhwar, G.D.; Cleghorn, T. E.; Watts, J.

    2003-01-01

    In estimating the crew exposures during an EVA, the contribution of reentrant electrons has always been neglected. Although the flux of these electrons is small compared to the flux of trapped electrons, their energy spectrum extends to several GeV compared to about 7 MeV for trapped electrons. This is also true of splash electrons. Using the measured reentrant electron energy spectra, it is shown that the dose contribution of these electrons to the blood forming organs (BFO) is more than 10 times greater than that from the trapped electrons. The calculations also show that the dose-depth response is a very slowly changing function of depth, and thus adding reasonable amounts of additional shielding would not significantly lower the dose to BFO.

  15. Parameterization of solar flare dose

    SciTech Connect

    Lamarche, A.H.; Poston, J.W.

    1996-12-31

    A critical aspect of missions to the moon or Mars will be the safety and health of the crew. Radiation in space is a hazard for astronauts, especially high-energy radiation following certain types of solar flares. A solar flare event can be very dangerous if astronauts are not adequately shielded because flares can deliver a very high dose in a short period of time. The goal of this research was to parameterize solar flare dose as a function of time to see if it was possible to predict solar flare occurrence, thus providing a warning time. This would allow astronauts to take corrective action and avoid receiving a dose greater than the recommended limit set by the National Council on Radiation Protection and Measurements (NCRP).

  16. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at set of points irregularly distributed over plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.

  17. Two Meanings of Algorithmic Mathematics.

    ERIC Educational Resources Information Center

    Maurer, Stephen B.

    1984-01-01

    Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…

  18. Greedy algorithms in disordered systems

    NASA Astrophysics Data System (ADS)

    Duxbury, P. M.; Dobrin, R.

    1999-08-01

    We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly, and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping), respectively.
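    A minimal version of the greedy minimal-cost-path computation on a lattice with random bond energies (the directed-polymer setting mentioned above); the lattice size and uniform energy distribution are arbitrary choices for illustration.

        import heapq
        import random

        random.seed(6)
        N = 20
        # Random bond energies: bond[x][y][0] joins (x,y)-(x+1,y); [1] joins (x,y)-(x,y+1)
        bond = [[[random.random(), random.random()] for _ in range(N)] for _ in range(N)]

        def neighbors(x, y):
            if x + 1 < N: yield (x + 1, y), bond[x][y][0]
            if y + 1 < N: yield (x, y + 1), bond[x][y][1]
            if x > 0:     yield (x - 1, y), bond[x - 1][y][0]
            if y > 0:     yield (x, y - 1), bond[x][y - 1][1]

        dist = {(0, 0): 0.0}
        heap = [(0.0, (0, 0))]
        while heap:
            d, (x, y) = heapq.heappop(heap)     # greedily settle the cheapest site
            if d > dist.get((x, y), float("inf")):
                continue                         # stale heap entry
            for v, c in neighbors(x, y):
                if d + c < dist.get(v, float("inf")):
                    dist[v] = d + c
                    heapq.heappush(heap, (d + c, v))

        print(f"minimal path energy (0,0) -> ({N-1},{N-1}): {dist[(N-1, N-1)]:.3f}")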

  19. Grammar Rules as Computer Algorithms.

    ERIC Educational Resources Information Center

    Rieber, Lloyd

    1992-01-01

    One college writing teacher engaged his class in the revision of a computer program to check grammar, focusing on improvement of the algorithms for identifying inappropriate uses of the passive voice. Process and problems of constructing new algorithms, effects on student writing, and other algorithm applications are discussed. (MSE)

  20. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
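    For concreteness, a standard half-interval (bisection) search of the kind the article verifies, here finding a root of x^3 - 2x - 5; the function and tolerance are illustrative choices, not taken from the article's listing.

        def half_interval(f, lo, hi, tol=1e-10):
            """Bisect [lo, hi], on which f changes sign, until the root is pinned down."""
            if f(lo) * f(hi) > 0:
                raise ValueError("endpoints must bracket a root")
            while hi - lo > tol:
                mid = (lo + hi) / 2.0
                if f(lo) * f(mid) <= 0:
                    hi = mid                     # root lies in the left half
                else:
                    lo = mid                     # root lies in the right half
            return (lo + hi) / 2.0

        print(half_interval(lambda x: x**3 - 2*x - 5, 2.0, 3.0))   # ~2.0945514815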

  1. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    NASA Astrophysics Data System (ADS)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper considers the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms (EAs), inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history of and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  2. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.

    2008-02-01

    Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.

  3. A phantom study on the behavior of Acuros XB algorithm in flattening filter free photon beams

    PubMed Central

    Muralidhar, K. R.; Pangam, Suresh; Srinivas, P.; Athar Ali, Mirza; Priya, V. Sujana; Komanduri, Krishna

    2015-01-01

    The aim of this study was to examine the behavior of the Acuros XB algorithm for flattening filter free (FFF) photon beams in comparison with the anisotropic analytical algorithm (AAA) when applied to homogeneous and heterogeneous phantoms in conventional and RapidArc techniques. The Acuros XB (Eclipse version 10.0, Varian Medical Systems, CA, USA) and AAA algorithms were used to calculate dose distributions for both 6X FFF and 10X FFF energies. RapidArc plans were created on the Catphan 504 phantom, and conventional plans on a virtual homogeneous water phantom of 30 × 30 × 30 cm3, a virtual heterogeneous phantom with various inserts, and a solid water phantom with an air cavity. Dose at inserts of different densities was calculated with both the AAA and Acuros algorithms. The maximum percentage variation in dose was observed in the air insert (−944 HU) and the minimum in the acrylic insert (85 HU) for both 6X FFF and 10X FFF photons. Less than 1% variation was observed between −149 HU and 282 HU for both energies. At −40 HU and 765 HU, Acuros behaved quite differently with 10X FFF. The maximum percentage variation in dose was observed at low HU values and the minimum at high HU values for both FFF energies. The global maximum dose was observed at greater depths for Acuros than for AAA for both energies. An increase in dose was observed with the Acuros algorithm at almost all densities, with decreases at a few densities ranging from 282 to 643 HU. Field size, depth, beam energy, and material density influenced the dose difference between the two algorithms. PMID:26500400
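
    The insert-by-insert comparison above reduces to a percentage dose difference per density. A sketch of that bookkeeping; the dose values below are invented placeholders for illustration, not the study's data:

    ```python
    # Percent dose variation between two algorithms per phantom insert.
    inserts = {  # HU value: (dose_acuros_gy, dose_aaa_gy) -- placeholders
        -944: (1.02, 0.93),   # air
         -40: (1.00, 0.99),
          85: (1.00, 1.00),   # acrylic
         765: (0.98, 0.96),
    }
    for hu, (d_acuros, d_aaa) in sorted(inserts.items()):
        pct = 100.0 * (d_acuros - d_aaa) / d_aaa
        print(f"{hu:5d} HU: {pct:+.1f}%")
    ```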

  4. Monte Carlo fast dose calculator for proton radiotherapy: application to a voxelized geometry representing a patient with prostate cancer.

    PubMed

    Yepes, Pablo; Randeniya, Sharmalee; Taddei, Phillip J; Newhauser, Wayne D

    2009-01-07

    The Monte Carlo method is used to provide accurate dose estimates in proton radiation therapy research. While it is more accurate than commonly used analytical dose calculations, it is computationally intense. The aim of this work was to characterize, for a clinical setup, the fast dose calculator (FDC), a Monte Carlo track-repeating algorithm based on GEANT4. FDC was developed to increase computation speed without diminishing dosimetric accuracy. The algorithm used a database of proton trajectories in water to calculate the dose of protons in heterogeneous media. The extrapolation from water to 41 materials was achieved by scaling the proton range and the scattering angles. The scaling parameters were obtained by comparing GEANT4 dose distributions with those calculated with FDC for homogeneous phantoms. The FDC algorithm was tested by comparing dose distributions in a voxelized prostate cancer patient as calculated with well-known Monte Carlo codes (GEANT4 and MCNPX). The track-repeating approach reduced the CPU time required for a complete dose calculation in a voxelized patient anatomy by more than two orders of magnitude, while on average reproducing the results of the full Monte Carlo predictions within 2% in terms of dose and within 1 mm in terms of distance. PMID:19075361
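
    The core of a track-repeating algorithm is to replay pre-computed water trajectories in other materials, stretching each step by a material-dependent range factor and rescaling the scattering angles. The sketch below shows only that bookkeeping; the scaling factors, material table, and callback names are hypothetical placeholders, not the FDC's fitted values:

    ```python
    # Schematic of track repeating: replay a pre-simulated water track in a
    # voxelized phantom, scaling step length and deflection per material.
    RANGE_SCALE = {"water": 1.00, "lung": 3.85, "bone": 0.60}   # hypothetical
    ANGLE_SCALE = {"water": 1.00, "lung": 0.51, "bone": 1.30}   # hypothetical

    def repeat_track(track, material_of, deposit):
        """Replay one pre-simulated water track.

        track:       list of (step_cm, angle_rad, edep_mev) steps in water
        material_of: maps a cumulative depth (cm) to a material name
        deposit:     callback recording deposited energy at a given depth
        """
        depth = 0.0
        for step_cm, angle_rad, edep_mev in track:
            mat = material_of(depth)
            depth += step_cm * RANGE_SCALE[mat]        # stretch the water step
            scattering = angle_rad * ANGLE_SCALE[mat]  # rescaled deflection
            deposit(depth, edep_mev)  # energy lands where the scaled step ends

    # Usage sketch: a 3-step track replayed in a lung-only geometry.
    repeat_track([(0.1, 0.01, 0.5)] * 3,
                 material_of=lambda d: "lung",
                 deposit=lambda d, e: print(f"{e} MeV at {d:.3f} cm"))
    ```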

  5. Model-based Iterative Reconstruction: Effect on Patient Radiation Dose and Image Quality in Pediatric Body CT

    PubMed Central

    Dillman, Jonathan R.; Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Keshavarzi, Nahid; Strouse, Peter J.

    2014-01-01

    Purpose: To retrospectively compare image quality and radiation dose between a reduced-dose computed tomographic (CT) protocol that uses model-based iterative reconstruction (MBIR) and a standard-dose CT protocol that uses 30% adaptive statistical iterative reconstruction (ASIR) with filtered back projection. Materials and Methods: Institutional review board approval was obtained. Clinical CT images of the chest, abdomen, and pelvis obtained with a reduced-dose protocol were identified. Images were reconstructed with two algorithms: MBIR and 100% ASIR. All subjects had undergone standard-dose CT within the prior year, and those images had been reconstructed with 30% ASIR. Reduced- and standard-dose images were evaluated objectively and subjectively. Reduced-dose images were evaluated for lesion detectability. Spatial resolution was assessed in a phantom. Radiation dose was estimated by using the volumetric CT dose index (CTDIvol) and calculated size-specific dose estimates (SSDE). A combination of descriptive statistics, analysis of variance, and t tests was used for statistical analysis. Results: In the 25 patients who underwent the reduced-dose protocol, the mean decrease in CTDIvol was 46% (range, 19%–65%) and the mean decrease in SSDE was 44% (range, 19%–64%). Reduced-dose MBIR images had less noise (P < .004). Spatial resolution was superior for reduced-dose MBIR images. Reduced-dose MBIR images were equivalent to standard-dose images for lungs and soft tissues (P > .05) but were inferior for bones (P = .004). Reduced-dose 100% ASIR images were inferior for soft tissues (P < .002), lungs (P < .001), and bones (P < .001). Using the same reduced-dose acquisition, lesion detectability was better (38% [32 of 84 rated lesions]) or the same (62% [52 of 84 rated lesions]) with MBIR as compared with 100% ASIR. Conclusion: CT performed with a reduced-dose protocol and MBIR is feasible in the pediatric population, and it maintains diagnostic quality.
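
    The SSDE figure used above adjusts the scanner-reported CTDIvol by a factor that depends on patient size; AAPM Report 204 tabulates such factors against effective diameter. A sketch of that calculation with a handful of illustrative (diameter, factor) pairs, not the full published table:

    ```python
    # SSDE = CTDIvol * f(effective diameter). The pairs below are rough
    # illustrative values in the spirit of AAPM Report 204 (32 cm phantom).
    CONVERSION = [(16.0, 2.0), (24.0, 1.5), (32.0, 1.1), (40.0, 0.9)]

    def ssde(ctdi_vol_mgy, effective_diameter_cm):
        """Size-specific dose estimate via linear interpolation of f(size)."""
        pts = CONVERSION
        if effective_diameter_cm <= pts[0][0]:
            f = pts[0][1]
        elif effective_diameter_cm >= pts[-1][0]:
            f = pts[-1][1]
        else:
            for (d0, f0), (d1, f1) in zip(pts, pts[1:]):
                if d0 <= effective_diameter_cm <= d1:
                    t = (effective_diameter_cm - d0) / (d1 - d0)
                    f = f0 + t * (f1 - f0)
                    break
        return ctdi_vol_mgy * f

    # e.g. a 5 mGy CTDIvol scan of a 20 cm-diameter child
    print(ssde(5.0, 20.0))
    ```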

  6. Improved calibration of mass stopping power in low density tissue for a proton pencil beam algorithm

    NASA Astrophysics Data System (ADS)

    Warren, Daniel R.; Partridge, Mike; Hill, Mark A.; Peach, Ken

    2015-06-01

    Dose distributions for proton therapy treatments are almost exclusively calculated using pencil beam algorithms. An essential input to these algorithms is the patient model, derived from x-ray computed tomography (CT), which is used to estimate proton stopping power along the pencil beam paths. This study highlights a potential inaccuracy in the mapping between mass density and proton stopping power used by a clinical pencil beam algorithm in materials less dense than water. It proposes an alternative physically-motivated function (the mass average, or MA, formula) for use in this region. Comparisons are made between dose-depth curves calculated by the pencil beam method and those calculated by the Monte Carlo particle transport code MCNPX in a one-dimensional lung model. Proton range differences of up to 3% are observed between the methods, reduced to <1% when using the MA function. The impact of these range errors on clinical dose distributions is demonstrated using treatment plans for a non-small cell lung cancer patient. The change in stopping power calculation methodology results in relatively minor differences in dose when plans use three fields, but differences are observed at the 2%-2 mm level when a single field uniform dose technique is adopted. It is therefore suggested that the MA formula be adopted by users of the pencil beam algorithm for optimal dose calculation in lung, and that a similar approach be considered when beams traverse other low density regions such as the paranasal sinuses and mastoid process.
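
    One way to read the mass-average idea is to model a low-density voxel as a water-air mixture whose composition reproduces the voxel density, then mass-average the two stopping powers. The sketch below is our interpretation under that assumption, with illustrative constants rather than the authors' calibrated values:

    ```python
    # MA-style estimate of stopping power relative to water for a voxel of
    # density rho, modeled as a water-air mixture. Constants are illustrative.
    RHO_W, RHO_A = 1.000, 0.0012   # densities, g/cm^3
    MSP_W, MSP_A = 1.00, 0.90      # mass stopping powers relative to water's

    def relative_stopping_power(rho):
        """Linear stopping power relative to water for density rho (g/cm^3)."""
        if not RHO_A < rho <= RHO_W:
            raise ValueError("mixture model only applies below water density")
        v_w = (rho - RHO_A) / (RHO_W - RHO_A)   # volume fraction of water
        m_w = v_w * RHO_W / rho                 # mass fraction of water
        m_a = 1.0 - m_w                         # mass fraction of air
        msp = m_w * MSP_W + m_a * MSP_A         # mass-averaged stopping power
        return rho * msp                        # back to a linear quantity

    print(relative_stopping_power(0.26))  # a typical lung-like density
    ```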

  7. Effect of Embolization Material in the Calculation of Dose Deposition in Arteriovenous Malformations

    NASA Astrophysics Data System (ADS)

    De la Cruz, O. O. Galván; Lárraga-Gutiérrez, J. M.; Moreno-Jiménez, S.; Célis-López, M. A.

    2010-12-01

    This work studies the impact of incorporating high-Z materials (embolization material) into the dose calculation for stereotactic radiosurgery treatment of arteriovenous malformations. A statistical analysis is performed to establish the variables that may affect the dose calculation. The comparison was performed with the pencil beam (PB) and Monte Carlo (MC) calculation algorithms. The comparison between the two dose calculations shows that PB overestimates the dose deposited. For the 20 patients in the study, the statistical analysis shows that the variable that may affect the dose calculation is the volume of high-Z material in the arteriovenous malformation. Further studies have to be done to establish the clinical impact on radiosurgery outcomes.